Donation Match for ACE Movement Grants 2020-10-27T13:28:37.310Z · score: 5 (3 votes)
EricHerboso's Shortform 2020-09-01T06:53:42.014Z · score: 5 (1 votes)
Effective Advertising and Animal Charity Evaluators 2018-06-13T19:43:07.214Z · score: 19 (18 votes)
Animal Charity Evaluators Introduces the Recommended Charity Quiz 2018-03-15T13:46:41.098Z · score: 6 (5 votes)
[LINK] AMA by Animal Charity Evaluators on Reddit 2017-11-30T16:45:53.521Z · score: 6 (5 votes)
A Defense of Normality 2015-09-07T21:00:29.231Z · score: 25 (31 votes)


Comment by ericherboso on The Vegan Value Asymmetry and its Consequences · 2020-10-26T04:28:00.516Z · score: 1 (1 votes) · EA · GW

No, the OP's argument is assuming that the lives of farmed animals are net negative. It's saying that farmed animal welfare might at most be neutral, which would mean that, in expectation, farmed animal welfare is harmful. Nevertheless, it would be less harmful than ignoring farmed animal welfare would be, which means working on farmed animal welfare is still net positive.

Meanwhile, the argument in your link argues that farmed animal welfare may be net negative, but it relies on the opposite assumption that the lives of farmed animals may be net positive.

Comment by ericherboso on Announcing our summer 2020 ACE Movement Grants · 2020-10-15T14:05:24.737Z · score: 3 (2 votes) · EA · GW

Before answering your question, it may help to get a little more context about why ACE Movement Grants exist, and how they differ from the charity evaluation work that ACE does. This is important because you may be overestimating the relative importance of individual ACE Movement Grants compared to ACE's Recommended Charity Fund.

ACE’s overall goal is to find and support the most effective approaches to animal advocacy. The main way ACE accomplishes this is through ACE’s top and standout charities, which receive by far the highest proportion of ACE-influenced funds.

ACE Movement Grants complement this program by helping to foster a broad, pluralistic animal advocacy movement, which is likely to be more resilient than a narrow, monistic one. The amount of funding distributed through ACE Movement Grants is significantly lower than the funds influenced through the Recommended Charity Fund.

Keep in mind that ACE is in a different position than other charity evaluators in the EA community, such as GiveWell. The available evidence supporting any given intervention is limited in animal advocacy. ACE and related EA charities influence a majority of the funding that currently goes toward addressing industrial agriculture. Imagine if GiveWell had much less evidence for their preferred interventions, yet directed half of all funding to the entire cause of global poverty. In such a world, should GiveWell continue to fund only a handful of charities, or would they be incentivized to direct a much smaller amount of funds to help diversify the movement? The ACE Movement Grants program allows us to consider the effectiveness of the movement as a whole instead of being limited to supporting only a small number of approaches.

Encompass in particular helps to build relationships with a larger group of advocates, specifically people of the global majority. Their grant goes toward fostering racial equity in animal advocacy organizations, which has the potential to make animal advocacy organizations overall more effective in the long term. To learn more about Encompass, you can read ACE's 2017 interview with Aryenish Birdie, ACE's reasoning on why promoting REI might be effective, or the Encompass FAQ.

For more info on ACE Movement Grants generally, see the ACE Movement Grants page or any of ACE's blog entries on the topic.

Comment by ericherboso on Charity Navigator acquired ImpactMatters and is starting to mention "cost-effectiveness" as important · 2020-10-15T04:17:33.397Z · score: 43 (19 votes) · EA · GW

In July, Charity Navigator announced their new nonprofit rating system, which they call Encompass. The system looks at four “beacons” to determine each charity's rating. One of these beacons is Impact & Results. At the time, they did not specify how they would evaluate this beacon. Their latest post, published yesterday, finally sets out the initial methodology they will use.

Some basic takeaways:

  • At the same time as releasing their new rating system, they intend to increase the number of charities they rate from 9,000 to 160,000. Clearly, most of the ratings must be automated to do this, so only a vanishingly small proportion of charities will be evaluated on the basis of Impact & Results. It's not clear how they will prioritize which charities get rated in this beacon in the future, but they're starting with this list of cause areas and they have a sign-up form for charities that want to be evaluated on Impact & Results.
  • They will not be comparing causes, nor in some cases even intervention types. In some cases their system will only compare cost-effectiveness within a cause area; in others, only within a single intervention type. For example, they may rate the most highly cost-effective charity that provides emergency shelters for the homeless population. There will be no indication whatsoever that a cataract surgery charity scoring 100 points on Impact & Results might be more effective than a Veterans Disability Benefits charity scoring 100 points on Impact & Results.
  • Within each cause area, they give four possible scores for Impact & Results: 0 points for charities without publicly available data, 50 points if they provide data but are determined to be inefficient, 75 points if they are found to be effective, and 100 points if they are found to be highly effective. Assuming they successfully find the most highly effective charities, this would give the likely incorrect appearance that the most highly effective charities are only 1/3 better than charities that just barely do better than breaking even. It's also not clear what percentage of charities within a cause area may simultaneously be rated at 100 points in the Impact & Results beacon.
  • Within each cause area, they use vastly simplified calculations to determine impact. For example, when it comes to emergency shelters, they assume that all beds are equally good; they disregard counterfactual beds that would be available if the charity did not exist; they give full marks for providing a bed, even if other beds were available at the time; and when determining costs, if the charity doesn't specify what it costs them to provide a bed, they instead just use the average cost as reported by HUD’s Housing Inventory Count dataset. While I believe these are humongous assumptions to be making, I don't necessarily think these simplifications are bad considering their goal; if they're serious about analyzing hundreds of thousands of charities, then they have to make simplifications somewhere. [EDIT 16 Oct: Elijah Goldberg of ImpactMatters clarifies in a comment below that this bullet point may be misleading.]

They have noted that they are looking into additional alternative methodologies for the future.

The system that Charity Navigator is using for its Impact & Results beacon was acquired from ImpactMatters, which was previously discussed on the EA Forum.

Comment by ericherboso on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-26T14:54:13.091Z · score: 7 (8 votes) · EA · GW
It is a (perhaps unfortunate) fact that many true conclusions alienate a lot of people. And it is much more important that we are able to identify those conclusions than that we find more people to join our ranks, or that our ranks are more ethnically / culturally / etc. diverse.

We are agreed that truth is of paramount importance here. If a true conclusion alienates someone, I endorse not letting that alienation sway us. But I think we disagree on two points:

  1. I believe diversity is a serious benefit. Not just in terms of movement building, but in terms of arriving at truth. Homogeneity breeds blind spots in our thinking. If a supposed truth is arrived at, but only one group recognizes it as truth, doesn’t that make us suspect whether we are correct? To me, good truth-seeking almost requires diversity in several different forms. Not just philosophical diversity, but diversity in how we’ve come up in the world, in how we’ve experienced things. Specifically including BIPGM seems to me to be very important in ensuring that we arrive at true conclusions.
  2. I believe the methods by which we arrive at true conclusions don’t need to involve Alastor Moody levels of constant vigilance. We don’t have to rigidly enforce norms of full open debate all the time.

I think the latter disagreement we have is pretty strong, given your willingness to bite the bullet on holocaust denial. Sure, we never know anything for sure, but when you get to a certain point, I feel like it’s okay to restrict debate on a topic to specialized places. I want to say something like “we have enough evidence that racism is real that we don’t need to discuss it here; if you want to debate that, go to this other space”, and I want to say it because discussing racism as though it doesn’t exist causes a level of harm that, for some people, may rise to the equivalent of physical harm. I’m not saying we have to coddle anyone, but if we can reduce that harm for almost no cost, I’m willing to. To me, restricting debate in a limited way on a specific Facebook thread is almost no cost. We already restrict debate in other, similar ways: no name-calling, no doxxing, no brigading. In the EAA Facebook group, we take as a given that animals are harmed and we should help them. We restrict debate on that there because it’s inappropriate to debate that point there. That doesn’t mean it can’t be debated elsewhere. To me, restricting the denial of racism (or the denial of genocide) is just an additional rule of this type. It doesn’t mean it can’t be discussed elsewhere. It just isn’t appropriate there.

In what ways do people not feel safe? (Is it things like this comment?) … I want to know more about this. What kind of harm?

No, it’s not things like this comment. We are in a forum where discussing this kind of thing is expected and appropriate.

I don’t feel like I should say anything that might inadvertently out some of the people that I have seen in private groups talking about these harms. Many of these EAs are not willing to speak out about this issue because they fear being berated for having these feelings. It’s not exactly what you’re asking for, but a few such people are already public about the effects from those harms. Maybe their words will help:

“[T]aking action to eliminate racism is critical for improving the world, regardless of the ramifications for animal advocacy. But if the EA and animal advocacy communities fail to stand for (and not simply passively against) antiracism, we will also lose valuable perspectives that can only come from having different lived experiences—not just the perspectives of people of the global majority who are excluded, but the perspective of any talented person who wants to accomplish good for animals without supporting racist systems.
I know this is true because I have almost walked away from these communities myself, disquieted by the attitudes toward racism I found within them.”
Comment by ericherboso on EricHerboso's Shortform · 2020-09-26T14:53:19.378Z · score: -2 (4 votes) · EA · GW
If there are Facebook threads that drain your ability to focus for hours, it seems pretty reasonable for that person to avoid such Facebook threads. ... [It] seems way better to have that responsibility be on the individual.

We agree here that if something is bad for you, you can just not go into the place where that thing is. But I think this is an argument in favor of my position: that there should be EA spaces where people like that can go and discuss EA-related stuff.

For example, some people have to participate in the EAA Facebook group as part of their job. They are there to talk about animal stuff. So when people come into a thread about how to be antiracist while helping animals and decide to argue vociferously that racism doesn't exist, that is just needlessly inappropriate. It's not that the issue shouldn't ever be discussed; it's that it shouldn't be discussed there, in that thread.

We should allow people to be able to work on EA stuff without having to be around the kind of stuff that is bad for them. If they feel unable to discuss certain topics without feeling bad, let them not go into threads on the EA forum that discuss those topics. This we agree on. But then why say that we can't have a lesser EA space (like an EA Facebook group) for them where they can interact without discussion of the topics that make them feel bad? Remember, some of these people are employees whose very job description may require them to be active in the EAA Facebook group. They don't have a choice here; we do.

Comment by ericherboso on Examples of loss of jobs due to Covid in EA · 2020-09-08T23:21:10.073Z · score: 5 (3 votes) · EA · GW

Animal Charity Evaluators suspended their paid internship program for the second half of 2020, but plans to resume it in early 2021. This didn't result in anyone losing a job; rather, it meant that temporary intern positions were not filled that otherwise likely would have been, had COVID-19 not happened. There are more details about this in ACE's Room for More Funding blogpost.

Comment by ericherboso on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-08T20:32:29.532Z · score: 2 (4 votes) · EA · GW

If you’re correct that the harms that come from open debate are only minor harms, then I think I’d agree with most of what you’ve said here (excepting your final paragraph). But the position of BIPGMs I’ve spoken to is that allowing some types of debate really does do serious harm, and from watching them talk about and experience it, I believe them. My initial intuition was closer to your point of view — it’s just so hard to imagine how open debate on an issue could cause such harm — but, watching how they deal with some of these issues, I cannot deny that something like a casual denial of systemic racism caused them significant harm.

On a different point, I think I disagree with your final paragraph’s premise. To me, having different moderation rules is a matter of appropriateness, not a fundamental difference. I think that it would not be difficult to say to new EAs that “moderation in one space has different appropriateness rules than in some other space” without hiding the true nature of EA and/or being dishonest about it. This is relevant because one of the main EA Facebook groups is currently deciding how to implement moderation rules with regard to this stuff right now.

Comment by ericherboso on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-08T08:05:48.369Z · score: 4 (7 votes) · EA · GW

Surely there exists a line at which we agree on principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel manned holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the holocaust did not happen.”

In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out. While it may be a useful thing to discuss (if only to show how absurd it is), we can (I argue) push future discussion of it into a smaller space so that the general EA space doesn’t have to be peppered with such arguments. This is the case even if none of the EAs talking about it actually believe it. Even if they are just steel-manning devil’s advocates, surely it is more effective for us to clean the space up so that our Jewish EA friends feel safe to come here and interact with us, at the cost of moving specific types of discussion to a smaller area.

I agree that one of the things that makes EA great is the quality of its epistemic discourse. I don’t want my words here to be construed that I think we should lower it unthinkingly. But I do think that a counterbalancing force does exist: being so open to discussion of any kind that we completely alienate a section of people who otherwise would be participating in this space.

I strongly believe that representation, equity, and inclusiveness is important in the EA movement. I believe it so strongly that I try to look at what people are saying in the safe spaces where they feel comfortable talking about EA norms that scare them away. I will report here that a large number of people I see talking in private Facebook groups, on private slack channels, in PMs, emails, and even phone calls behind closed doors are continuously saying that they do not feel safe in EA spaces. I am not merely saying that they are “worried” about where EA is heading; I’m saying that right here, right now, they feel uncomfortable fully participating in generalized EA spaces.

You say that “If people wouldn't like the discourse norms in the central EA spaces…I would prefer that they bounce off.” In principle, I think we agree on this. Casual demands that we are being alienating should not faze us. But there does exist a point at which I think we might agree that those demands are sufficiently strong, like the holocaust denial example. The question, then, is not one of kind, but of degree. The question turns on whether the harm that is caused by certain forms of speech outweighs the benefits accrued by discussing those things.

  • Q1: Do you agree that this is a question of degree, not kind? If not, then the rest of this comment doesn't really apply.
  • Q2: You mentioned having similar standards to academia. If it became standard for undergraduate colleges to disallow certain forms of racist speech to protect students, would you be okay with copying those norms over to EA? Or do you mean only having similar standards to what academics discuss amongst each other, setting aside completely how universities deal with undergraduate students' spaces?

I have significant cognitive dissonance here. I’m not at all certain about what I personally feel. But I do want to report that there are large numbers of people, in several disparate places, many of which I doubt interact between themselves in any significant way, who all keep saying in private that they do not feel safe here. I have seen people actively go through harm from EAs casually making the case for systemic racism not being real and I can report that it is not a minor harm.

I’m extremely privileged, so it’s hard for me to empathize here. I cannot imagine being harmed by mere speech in this way. But I can report from direct experience watching private Facebook chats and slack threads of EAs who aren’t willing to publicly talk about this stuff that these speech acts are causing real harm.

Is the harm small enough to warrant just having these potential EAs bounce off? Or would we benefit from pushing such speech acts to smaller portions of EA so that newer, more diverse EAs can come in and contribute to our movement? I hope that you'll agree that these are questions of degree, not of kind. After seeing the level of harm that these kinds of speech acts cause, I think my position of moving that discourse away from introductory spaces is warranted. But I also strongly agree with traditional enlightenment ideals of open discussion, free speech, and that the best way to show an idea is wrong is to seriously discuss it. So I definitely don’t want to ban such speech everywhere. I just want there to be some way for us to have good epistemic standards and also benefit from EAs who don’t feel safe in the main EA Facebook groups.

To borrow a phrase from Nora Caplan-Bricker, they’re not demanding that EA spaces be happy places where they never have to read another word of dissent. Instead, they’re asking for a level of acceptance and ownership that other EAs already have. They just want to feel safe.

Comment by ericherboso on EricHerboso's Shortform · 2020-09-05T10:43:43.959Z · score: -7 (12 votes) · EA · GW
I would say this is the first time I've come across the idea that someone who (hypothetically) correctly says that systemic racism doesn't exist would then correctly be labelled as a racist.

Perhaps we travel in different circles, but this is how many (but not all) activists and academics have been talking for a few years now. Though this redefinition started in critical race theory (which I am not personally a fan of, as it isn’t really compatible with classical liberalism), it quickly spread to other, more mainstream thinkers. James Baldwin uses the term “white supremacy” this way in his 1980 essay Dark Days, and Martin Luther King, Jr. used the term this way in his 1967 book Where Do We Go from Here: Chaos or Community?. Neither Baldwin nor King used the newer definitions of these terms earlier in their writings; King, for example, famously refused to call Barry Goldwater ‘racist’ as late as 1964. Today, I would guess that a majority (though maybe not a clear majority) of people working on race in academia accept the newer definitions.

However, I think you’re correct that here on the EA forum, using the terms in this way puts me clearly in the minority.

I can't help but see this new definition of racism as scope creep, and very harmful scope creep at that.

It is indeed scope creep, but I don’t think it’s as harmful as you say.

Consider global climate change, for example. There was a time (in the US, at least) when news organizations would present “both sides of the issue”, giving similar respect and airtime to climate change deniers. Over time, it became clear that acting as though both sides had equal evidence was disingenuous at best and likely constituted harm done by news organizations. This was and is true even if global climate change isn’t real. It is entirely appropriate for climate scientists to debate the issue; it is not appropriate for news organizations to give equal time to each side, regardless of what is actually true in reality.

The appropriateness of what was considered to be “both sides” by journalists has had scope creep in the past few decades, and I think this is to all of our benefit. I believe the same is true with the scope creep of this new definition of “racism”.

Ultimately, what matters is what we should presume to be true when we are in a space like the Facebook thread that was linked. When the preponderance of the evidence is clear that systemic racism exists, when the presentation of arguments denying its existence results in actual harm, and when that presentation is happening in a space where debating it doesn’t progress society in any way, then I think that the scope creep of being able to call the thread racist is entirely appropriate.

This remains true even if systemic racism actually doesn’t exist in just the same way that it is inappropriate for journalists to give equal time to climate deniers even if global climate change isn’t actually real. What matters here is what we should judge as the default expectation of truth, not what actual truth is, because we are not talking about a forum of experts talking in their field; we’re talking about a public forum where some commenters are saying things that actively harm other commenters. It’s important for climate scientists and race theorists to be able to discuss these issues without being silenced, but it’s not important to give equal time to climate deniers on a news program, nor to allow systemic-racism-deniers to say things that actively harm others in a Facebook thread.

I am open to the possibility that systemic racism isn’t real. I am open to reasoned debate about this among academics and in lay spaces that are set out to openly discuss these types of contentious issues, like on this EA forum. But I think it is entirely reasonable to ban nonconstructive discussion about this in an introductory space like the Effective Animal Advocacy Facebook group, on a thread where the original poster was literally trying to organize activists who wanted to discuss systemic racism in the context of animal advocacy. It is, as I said before, like wearing a white dress at your friend’s wedding.

Can you provide references that back up your claim that your definition of racism is widely accepted?

Wei Dai's links may not be positive toward the redefinition, but hopefully they do show that the new definition has gained a lot of traction recently since its inception in the late 1960s. Anecdotally, most of the activist groups I'm aware of use these new definitions nearly exclusively.

Comment by ericherboso on EricHerboso's Shortform · 2020-09-05T01:39:07.933Z · score: -15 (13 votes) · EA · GW

It seems this might be a semantic misunderstanding. A few decades ago, a "racist" was defined as someone who actively hurt others on the basis of race. A KKK member would be the canonical example. Being racist under that terminology was unambiguously bad.

But in today's literature, the way that the word "racist" is used refers to the types of actions being taken by an individual. Actions are either "racist" or "anti-racist". While some neutral actions do exist, almost every action having to do with furthering the status quo is considered "racist". (For the same reason that you can't be neutral on a moving train.)

Today, under the way that the term is used in the current literature, being racist isn't an indictment of bad character, but only a description of the types of actions being taken. At times, everyone is sometimes racist and sometimes antiracist. When I say that the thread was a racist thread, I'm not saying that certain people within are unambiguously bad. I'm saying that they were taking actions which perpetuated harm toward a class of people on the basis of their race.

You said:

"All racists may deny systemic racism, but that doesn't mean that all those who deny systemic racism are racist."

But under the definition of "racist" that is used by academics today, this isn't true. People are indeed almost always being racist when they are denying systemic racism. Their actions are racist specifically because the act of denying systemic racism perpetuates systemic racism.

However, the flip side to this is that being racist doesn't necessarily mean you are doing wrong. Under the old definition, we might say someone is a racist and then stop there, claiming they are in the wrong. But, under this definition, there has to be additional information to find out if someone is in the wrong -- being racist is not enough, because everyone is racist sometimes. For example, speaking and writing in English might, under some descriptions, be an example of furthering the status quo, and thereby supporting racism. But this doesn't mean that people are wrong to speak and write in English. Under the current accepted use of "racist", just because someone is being racist does not, by itself, indict them with wrongdoing.

So when I say that yes, you and other POC who share the views of those in that thread are being racist in doing so, I hope you'll understand that I'm not trying to name-call nor shame you in any way. It is, as you said, entirely possible that it is due to ignorance or misinformation, and it may even be that the truth of the matter is that there is indeed no systemic racism in today's society, but none of this changes the fact that, in saying these things, we are being racist. I do in fact think that there needs to be serious high level discussion about what is or is not true, and that applies even to whether systemic racism exists (thus even I am being racist here). But just like you shouldn't wear a white dress at your friend's wedding, it is completely inappropriate to loudly proclaim that systemic racism doesn't exist in the very thread where people are trying to get together and fight that very concept.

Comment by ericherboso on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-03T05:12:12.893Z · score: 6 (4 votes) · EA · GW

I don't think this is pivotal to anyone, but just because I'm curious:

If we knew for a fact that a slippery slope wouldn't occur, and the "safe space" was limited just to the EA Facebook group, and there was no risk of this EA forum ever becoming a "safe space", would you then be okay with this demarcation of disallowing some types of discussion on the EA Facebook group, but allowing that discussion on the EA forum? Or do you strongly feel that EA should not ever disallow these types of discussion, even on the EA Facebook group?

(by "disallowing discussion", I mean Hansonian level stuff, not obviously improper things like direct threats or doxxing)

Comment by ericherboso on Do research organisations make theory of change diagrams? Should they? · 2020-09-03T00:45:27.829Z · score: 3 (2 votes) · EA · GW

On Q1: You mention only being aware of a few research orgs that have public ToC diagrams. I wanted to bring your attention to Animal Charity Evaluators, which uses ToC diagrams as a way of better communicating with the public how ACE thinks that a given recommended organization might be causing one of its animal advocacy outcomes.

ACE also uses a ToC diagram in its strategic plan, but this might not be easily searchable because it exists publicly only in PDF documents. (The webpage hosting the strategic plan doesn't use the phrase "theory of change" at all, even though a ToC diagram does exist within the PDF linked there.)

Comment by ericherboso on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-03T00:29:52.939Z · score: 4 (6 votes) · EA · GW

I agree that that was definitely a step too far. But there are legitimate middle grounds that don't have slippery slopes.

For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.

I refuse to defend something as ridiculous as the idea of cancel culture writ large. But I sincerely worry about the lack of racial representativeness, equity, and inclusiveness in the EA movement, and there needs to be some sort of way that we can encourage more people to join the movement without them feeling like they are not in a safe space.

Comment by ericherboso on EricHerboso's Shortform · 2020-09-02T21:30:18.042Z · score: 18 (7 votes) · EA · GW

No need to apologize. It's just a shortform, and I have enough cognitive dissonance on the topic to not be really sure what I think about it myself.

I agree with you that the phrase "people of the global majority" sounds weird and naively seems to divide people into unintuitive groups unnecessarily. But in my post I was talking about friends that I personally know who have been hurt by things the EA movement has said in some introductory social media spaces, and their preferred name as a group is "people of the global majority". By using it, I'm merely using the term they've taken for themselves, because it doesn't seem to hurt anything and it generally is nice to use the names that people have adopted for themselves.

Their reasoning for using "people of the global majority" is that:

  • "people of color" is too US-centric;
  • it centers whiteness as the norm;
  • it implies that white folks are devoid of race; &
  • many people may not identify as POC as it’s a U.S. social and cultural construct that does not translate universally.

I believe that they find it empowering to identify as a larger group. Personally, I've always felt more empowered by being considered part of a smaller group, but I believe they would say that that is another example of my privilege. Since it does no obvious harm to call them what they want to call themselves, that's what I do.

You also asked "how is anyone supposed to know that's what you've decided to do, when first encountering this term?" To that, I don't really have a good answer. I found the term BIPGM (black, indigenous, or person of the global majority) to be really unusual when I first came across it. It seemed to me to be another example of this new generation naming things unnecessarily differently. But it's not any worse than what LessWrong does, where decades-old concepts are re-named in favor of whatever Yudkowsky titled them in the Sequences. As weird as it may seem on first hearing the term PGM, I don't see why it shouldn't be used when talking about specific people who themselves prefer that term to be used.

Comment by ericherboso on EricHerboso's Shortform · 2020-09-02T03:50:16.369Z · score: 5 (3 votes) · EA · GW

No, I don't think the discussion on the Hanson thread in this forum involved casual bigotry. In general, I think discussion here on the EA forum tends to be more acceptable than in other EA social media spaces. (Maybe this is because nested threads are supported here, or maybe it's because people consider this space more prestigious and so act more respectfully.) But much of the discussion of the Hanson incident on Twitter would certainly qualify as casual bigotry, and I've witnessed a few threads on EA Facebook groups that also involved clear casual bigotry.

I should stress here that I feel very conflicted about the status of this EA forum as it relates to new EAs. On the one hand, we clearly use this as a place for new EAs to discuss things. But I almost want to say that this is somehow a more formal space than the EA Facebook group, so I'd be far more comfortable with discussing divisive issues here than on Facebook. I said in my original post that we can have a less safe pull communication space where open debate and serious steel-manning of controversial ideas occur -- I think maybe the EA forum would be a good place for that to happen. But I'm not sure of this. And this whole line of thinking is something I'm still feeling conflicted about in the first place, so I don't know how seriously others should take what I'm saying here at all.

Comment by ericherboso on What is the financial size of the Effective Altruism movement? · 2020-09-01T08:34:55.711Z · score: 2 (2 votes) · EA · GW

Animal Charity Evaluators lists its influenced donations in 2019 as $8.9 million. You can see a breakdown of this in ACE's 2019 Giving Metrics Report, which shows that only $7.1 million of this went toward the top recommended charities. I would put $7.1 million as the upper bound on EA-sourced money that was influenced by ACE toward direct animal charities. But ACE also took in $1.0 million in donations as operating expenses, the majority of which came from EA sources. So, in total, I'd give $8.1 million as an upper bound of funding from EA sources that filtered through ACE.

Keep in mind that if you get numbers from other EA orgs that deal with animal org funding, you might not be able to add their numbers to ACE's numbers, because you might end up double counting. For example, ACE received a grant of $325k from Open Philanthropy in 2019, so that amount would be double counted if you included both that grant and the operating expenses of ACE additively.
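The double-counting risk above can be sketched numerically. The $8.1 million and $325k figures are from this comment; the Open Philanthropy animal-funding total is a made-up placeholder, not a real number:

```python
# Hypothetical illustration of double counting when summing funder totals.
ace_influenced = 8.1e6           # EA-sourced funds through ACE (upper bound above)
open_phil_animal_total = 10.0e6  # placeholder total for another funder
grant_openphil_to_ace = 325e3    # OpenPhil's 2019 grant to ACE, already
                                 # counted inside ACE's operating expenses

naive_sum = ace_influenced + open_phil_animal_total
deduped_sum = naive_sum - grant_openphil_to_ace  # subtract the overlap once

print(naive_sum - deduped_sum)  # 325000.0 -- the amount that would be double counted
```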

Comment by ericherboso on EricHerboso's Shortform · 2020-09-01T06:53:42.440Z · score: 12 (22 votes) · EA · GW

I was going to write a comment on the EA Munich/Hanson incident post, but I realized that what I really wanted to say was a bit more general and had a different intended audience: those people in EA who (1) believe banning topics is wrong in general, (2) think that people are not really all that harmed by open discussion of certain topics, and (3) don’t understand why a fellow effective altruist of all people would ever try to ‘cancel’ another EA just because some of their discussed ideas are controversial. I don’t believe that all (or even all that many) people in this forum correspond to all (1), (2), and (3). But I think I may have felt that way at one point in the past, and I wanted to explain to that group why I no longer solely feel that way. (I currently have cognitive dissonance with regard to this issue.)

(I say the following because it is how I feel. I’m not speaking on behalf of anyone else nor any organization I'm with.)

Some speech is harmful. Even speech that seems relatively harmless to you might be horribly upsetting for others. I know this firsthand because I’ve seen it myself.

I’m extremely privileged. I’m white, male, CIS, well-educated, and I don’t have to work to earn a living. Sure, I have a few non-privileged parts of me (hispanic, asexual, polyamorous), but ultimately I find that speech is rarely harmful to me. So it comes as no surprise that I have historically found no issue with the ideals of the enlightenment: open discussion of ideas, free speech, believing ideas based on argumentation and evidence. Yet we all know that not all speech is harmless. You can’t shout ‘fire’ in a crowded theater, nor command another to cause direct harm.

Racist speech might not affect me very much, but it can really and truly hurt people of the global majority that have to deal with it constantly. I have friends who I have watched first hand having to read through a racist Facebook thread who were subsequently unable to focus for hours afterward. The exhaustion from having to deal with even just questions about the legitimacy of systemic racism was equivalent to what I would feel if I had to dig a hole for two hours straight. It’s not because my friends are weak-willed, or that they cannot take criticism well. It’s because they live day-in and day-out in a system that actively oppresses their ability to succeed, and people who are supposedly their peers, people who have also decided to live their lives in the service of achieving effective altruism, people who otherwise claim to be dedicated to good — these people were just casually questioning whether blacks even had it that bad in today’s society.

I believe that if people were honestly trying to be rational, then open discussion and debate would eventually kill off racist memes in society. And since I’d like to believe that the EA community is trying to be both honest and rational, I naively thought that open discussion and debate in EA spaces, above all other spaces, would be the perfect way to deal with ordinarily divisive issues like the Black Lives Matter movement. But I was wrong.


I know that I was wrong because people of the global majority continuously speak in safe spaces about how they feel unsafe in EA spaces. They speak about how they feel harmed by the kinds of things discussed in EA spaces. And they speak about how there are some people — not everyone, but some people — who don’t seem to be just participating in open debate in order to get at the truth, but rather seem to be using the ideals of open discussion as a cloak to hide their intent to do harm.

We in the EA community need to figure out how to do better. We need a diverse set of people to at least feel safe in our community. That doesn’t mean we should quash discussion on weird issues. We need that. But we don’t need it in the places where everyone keeps doing it. It doesn’t need to be at local public EA events. It doesn’t need to be in the main Facebook group chat. It doesn’t need to be anywhere near places that are primarily or secondarily used as public ways to encourage people new to EA.

In the field of communications, a distinction is made between interactive, push, and pull communication. Push communication is exemplified by email; it’s stuff you send out to your audience. You generally don’t want the weird unattractive stuff to be in your push communications. That doesn’t mean it shouldn’t exist. To the contrary, I want there to be a space for people to legitimately debate in open, non-private discussion about even topics with Hansonian-levels of weirdness. But it needs to not be in our push or interactive-online communications. I want my friends to feel safe in public introductory EA spaces. Let diverse representative people join the movement first, and then let them choose of their own accord where they want to put in their efforts. Just as we let individual people work unmolested on longtermism, or animal suffering, or even mental health, so too should we allow people to work in spaces safe from constant racist questioning, even while others volunteer to work in less safe spaces where open debate and serious steel-manning of controversial ideas occur.

I implore others to consider the harm caused when bigotry is so casually discussed on the EA Facebook group or in a local EA meetup. This is real harm. Not just PR harm. Real harm. Let’s move that kind of discussion to pull-communications-only spaces. They needn’t be private; they should definitely be public and open (though I’d warn about trolls masquerading as devil’s advocates there). But they have no place in the spaces that we use for attracting new talent. The EA movement is too white and male as it is; if we are to succeed in truly achieving effective altruism at scale, then we need representativeness, equity, and inclusion in the movement, and that means, at a minimum, introductory EA spaces must be free from casual bigotry.

Comment by ericherboso on Donor Lottery Debrief · 2020-08-05T17:29:24.980Z · score: 10 (4 votes) · EA · GW

Thank you for writing this up. I think it would be helpful if this post (and future debriefs like it) were linked to from the donor lottery page.

Comment by ericherboso on EA Forum feature suggestion thread · 2020-06-22T18:56:01.049Z · score: 5 (3 votes) · EA · GW

...and literally thirty seconds later, I appear to have found it: bug reports are intended to go through the Intercom widget on the side of every single page. I feel a little bit ashamed about this, but it just didn't occur to me that I should give bug reports there.

Comment by ericherboso on EA Forum feature suggestion thread · 2020-06-22T18:52:36.072Z · score: 1 (1 votes) · EA · GW

When performing a search, the search results page uses "LW Search - EA Forum" as the contents of the title tag. I doubt this is an intentional reference to this forum being a fork of the lesswrong forum, so I assume the "LW" part should be removed.

By the way, I looked for 60 seconds to find where to post this small bug report, but the only option I saw was the unlisted contact us page, which seems to send a message to content people rather than the people who work on the codebase of the forum. This page is the only place where I could quickly find a way to get a message to whomever does the technical side of the forum.

So I suppose my feature request is: Provide a new place for users to quickly submit bug reports; or, if such a place already exists, make it more prominent.

Comment by ericherboso on A list of EA-related podcasts · 2019-12-03T08:05:14.075Z · score: 15 (8 votes) · EA · GW

Luke Muehlhauser's Conversations from the Pale Blue Dot had an episode interviewing Toby Ord back in January 2011. This is from before the term "effective altruism" was being used to describe the movement. I think it may be the first podcast episode to really discuss what would eventually be called EA, with the second oldest podcast episode being Massimo Pigliucci's interview with Holden Karnofsky on Rationally Speaking in July 2011.

(There was plenty of discussion online about these issues in years prior to this, but as far as I can tell, discussion didn't appear in podcast form until 2011.)

Comment by ericherboso on The Importance of EA Dedication and Why it Should Be Encouraged · 2018-08-23T18:28:20.733Z · score: 0 (0 votes) · EA · GW

I believe there is a threshold difference between passionate and self-disciplined EAs. As excited EAs become more dedicated, they tend to hit a wall where their frugality starts to affect them personally much more than it previously did. This wall takes effort to overcome, if it is overcome at all.

Meanwhile, when an obligatory EA becomes more dedicated, that wall doesn't exist (or at least it has less force). So it's easier for self-disciplined EAs to get to more extreme levels than for passionate EAs.

Comment by ericherboso on Harvard EA's 2018–19 Vision · 2018-08-05T21:53:35.971Z · score: 3 (3 votes) · EA · GW

Please feel free to steal the html used for footnotes in EA forum posts like this one.

  • In-page anchor links: <a id="ref1" href="#fn1">&sup1;</a>
  • Linked footnote: <p id="fn1">&sup1; <small>Footnote text.</small></p>
  • Footnote link back to article text: <a href="#ref1">↩</a>
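Assembled into a single page, the three pieces above fit together like this (the surrounding prose is placeholder content, not from any particular post):

```html
<!-- Minimal example combining the three footnote pieces above -->
<p>Some claim in the article text.<a id="ref1" href="#fn1">&sup1;</a></p>

<!-- ...rest of the article... -->

<p id="fn1">&sup1; <small>Footnote text. <a href="#ref1">↩</a></small></p>
```

The superscript link jumps the reader down to the footnote, and the ↩ link returns them to the exact point in the text where they left off.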
Comment by ericherboso on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-07-31T11:16:00.777Z · score: 3 (3 votes) · EA · GW

This is now posted on the Animal Welfare Fund Payout Report page.

Comment by ericherboso on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-07-31T11:13:02.486Z · score: 15 (13 votes) · EA · GW

While I personally have trust that Nick Beckstead has been acting in good faith, I also completely understand why donors might choose to stop donating because of this extreme lack of regular communication.

It's important for EAs to realize that even when you have good intentions and are making good choices about what to do, if you aren't effectively communicating your thinking to stakeholders, then you aren't doing all that you should be doing. Communications are vitally important, and I hope that comments like this one really help to drive this point home to not just EA Funds distributors, but also others in the EA community.

Comment by ericherboso on EA Hotel with free accommodation and board for two years · 2018-06-21T04:06:23.763Z · score: 5 (11 votes) · EA · GW

Not all EAs are on board with AI risk, but it would be rude for this EA hotel to commit to funding general AI research on the side. Whether all EAs are on board with effective animal advocacy isn't the key point when deciding whether the hotel's provided meals are vegan.

An EA who doesn't care about veganism will be mildly put off if the hotel doesn't serve meat. But an EA who believes that veganism is important would be very strongly put off if the hotel served meat. The relative difference in how disturbed the latter person would be is presumably at least 5 times as strong as the minor inconvenience that the former person would feel. This means that even if only 20% of EAs are vegan, the expected value from keeping meals vegan would beat out the convenience factor of including meat for nonvegans.
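The trade-off above can be written as a back-of-the-envelope expected-value comparison. The 20% vegan share and the 5× disutility ratio are the illustrative figures from this comment, not measured data:

```python
# Rough expected-disutility comparison for the hotel's menu choice.
# Figures are illustrative, taken from the comment above.
vegan_share = 0.20            # fraction of EAs who are vegan
nonvegan_share = 1 - vegan_share
disutility_ratio = 5          # vegans are ~5x more put off by meat than
                              # non-vegans are by an all-vegan menu

# Expected disutility per resident under each policy
# (non-vegan inconvenience normalized to 1 unit).
serve_meat = vegan_share * disutility_ratio   # 0.2 * 5 = 1.0
serve_vegan = nonvegan_share * 1              # 0.8 * 1 = 0.8

print(serve_meat, serve_vegan)  # 1.0 0.8 -> the all-vegan menu narrowly wins
```

Under these assumptions the all-vegan policy wins, though only by a modest margin; the conclusion is sensitive to both the vegan share and the disutility ratio.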

Comment by ericherboso on Effective Advertising and Animal Charity Evaluators · 2018-06-19T23:20:45.255Z · score: 1 (1 votes) · EA · GW

You raise a number of points; I’ll try to respond to each of them.

For people who are already donating to animal organisations which aren't shelters then it isn't necessarily better to give to "effective" organisations as put forward by ACE because there aren't sufficient comparisons that can be made between organisations they are already supporting.

We do not believe this is true. We explicitly rank our top charities as being better targets for effective giving than our standout charities, and we explicitly rank our standout charities as better targets than organizations not on our Recommended Charity list.

This doesn’t mean that more effective EAA charities necessarily don’t exist. We’re currently expanding our focus to several organizations across the world at which we hadn’t previously looked. (There's still time to submit charities for review in 2018.) There are also some charities that we were not able to evaluate last year for one reason or another. These charities may or may not perform better than our current Top Charities. We encourage you to learn more about how we evaluate charities.

As an example, I continue to wonder why someone would necessarily believe it is better to give to GFI over an organisation doing pluralistic work in the animal movement? One is well supported by various foundations and is far from underconsidered or neglected, whilst others that work on more meta level questions of plurality and inclusivity tend to be marginalised, particularly through not reflecting a favoured "mainstream" ideology.

GFI rates well on all of our criteria. If you want to compare them to another group doing pluralistic work, then you’d need to directly compare our reviews of each organization. Alternatively, you are free to perform your own analysis to compare relative potential effectiveness; if performed well, such analyses could then be used in future reviews by ACE.

Keep in mind that we explicitly believe a pluralistic approach is best overall. It's just that individual charities working on pluralistic approaches may have wildly different levels of effectiveness, and, given limited resources, we should prioritize whatever results in the most good.

Another issue is that ACE doesn't account for moral theory in relation to rights or utilitarianism thus largely presenting a fairly unfortunate picture in the animal movement in terms of utilitarian = effective and rights = ineffective.

We are quite transparent about the philosophical foundations of our work. We explicitly maintain that the most effective approach is probably a pluralistic one, and we hope that a diverse group of animal charities will continue pursuing a wide range of interventions to help all populations of animals. However, we will continue to recommend that marginal resources support the most effective tactics.

This is not an issue of rights vs utility. Whether you believe in rights or in utility, presumably you would want to do twice as much good with limited resources if you get the chance.

(A quick aside on deontology vs consequentialism as it relates to cause prioritization: Let's say you're a deontologist who believes murder is wrong. You're given a coupon that you can redeem at one of two locations. If you redeem at the first, you prevent a murder. If you redeem at the second, you prevent two murders. Can you honestly say that, even as a deontologist, you wouldn't prefer to redeem at the second location?)

The suffering of all animals is important, whether those animals are companion animals, animals in a lab, animals used in entertainment, or farmed animals. But when you have limited resources, you should prioritize helping those animals for which you can effectively reduce suffering. This is true whether you're talking about a rights organization or a utilitarian organization (to use your terminology).

I support the idea of evaluation by ACE but i'm sceptical that the claims that ACE tend to make sufficiently reflect the work that has taken place, or that there is enough transparency in terms of the underlying values and beliefs that ACE tend to represent. I continue to believe that some form of external meta-evaluation would be useful for ACE.

If there are specific claims that you believe do not reflect the work that we do, you are always welcome to give feedback. We also strive to be as transparent as possible in everything that we do. With regard to outside evaluation, we have explicitly asked for external reviewers and have a public list of external reviewers on our site.

I hope that these responses help to alleviate some of your concerns.

Comment by ericherboso on Effective Advertising and Animal Charity Evaluators · 2018-06-19T22:26:19.776Z · score: 1 (1 votes) · EA · GW

sidenote: I’d be interested to what extent ACE now uses Bayesian reasoning in their estimates, e.g. by adjusting impact by how likely small sample studies are false positives.

Our current methodology uses an alternative approach of treating cost-effectiveness estimates as only one input into our decisions. We then take care to "notice when we are confused" by remaining aware that if a cost-effectiveness estimate is much higher than we would expect based on the other things we know about an intervention or charity, that may be due to an error in our estimate rather than to truly exceptional cost effectiveness.

We admit that Bayesian techniques would more accurately adjust for uncertainty, but this would require additional work in developing appropriate priors for each reference class, and this process may not generate worthwhile differences in our evaluations, given our data set. See this section of our Cost-Effectiveness Estimates page for details on our thinking about this.

Comment by ericherboso on Effective Advertising and Animal Charity Evaluators · 2018-06-19T21:59:52.264Z · score: 1 (1 votes) · EA · GW

We had on the order of hundreds of new donors during our 2017 matching campaign, making up 56% of the pre-matched amount raised. A very large portion of these donors are new to effective giving, as most come from the AR space.

We track donor engagement with EAA directly through retention and surveys, and we have limited indirect tracking of engagement with EA more generally. (Concerns about privacy (and GDPR) prevent us from tracking more deeply, such as through social media engagement.)

We also actively advocate EAA and EA ideas to these donors via email and other messaging.

Comment by ericherboso on Effective Advertising and Animal Charity Evaluators · 2018-06-19T20:02:46.793Z · score: 2 (2 votes) · EA · GW

This donor is a major general animal welfare donor, and had the ~$600k they gave to the Recommended Charity Fund not occurred, they likely would have given it to other non-EAA animal charities, or they may have just left the money in their foundation for future donations.

While they do support some of our Top Charities and Standout Charities, we do not think it likely that the counterfactual ~$600k would have been donated to any of those Recommended Charities. Also, the ~$600k is in addition to their normal donations to our Recommended Charities.

Comment by ericherboso on Animal Equality showed that advocating for diet change works. But is it cost-effective? · 2018-06-08T00:43:46.740Z · score: 4 (4 votes) · EA · GW

Human DALYs deal with positive productive years added to a human life. Pig years saved deal with reducing suffering via fewer animals being born. I'm not sure that these are analogous enough to directly compare them in this way.

For example, if you follow negative average preference utilitarianism, the additional frustrated preferences averted through pig-years-saved would presumably be more valuable than an equivalent number of human-disability-adjusted-life-years, which would only slightly decrease the average frustrated preferences.

Different meta-ethical theories will deal with the difference between DALYs and pig-years-saved differently. This may affect how you view the comparison between them.

(With that said, I find these results sobering. Especially the part where video outperforms VR possibly due to a negative multiplier on VR.)

Comment by ericherboso on Announcing the 2017 donor lottery · 2017-12-21T20:19:30.305Z · score: 1 (1 votes) · EA · GW

I can't help but notice that one of the lottery entrants is listed as anonymous. According to the rules, entrants may remain anonymous even if they win, so long as they express a strong objection to their name being public before the draw date. (No entrants to the 2016 donor lottery were anonymous.)

I realize that which charitable cause the winner chooses to fund doesn't change the expected value of any entrant's contribution to the lottery. As Carl Shulman points out, the lottery's pot size and draw probability, as well as entrants' expected payout, are all unaffected even if the eventual winner does nothing effective with their donation.
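Shulman's point can be sketched with a tiny expected-value calculation; the pot size and contribution below are hypothetical placeholders:

```python
# Hypothetical donor-lottery expected value: an entrant's expected payout
# equals their contribution, regardless of what the winner funds.
pot = 100_000          # total pot (placeholder)
contribution = 12_500  # one entrant's stake (placeholder)

win_prob = contribution / pot      # 0.125
expected_payout = win_prob * pot   # 0.125 * 100000 = 12500.0

print(expected_payout == contribution)  # True
```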

Nevertheless, donor lotteries like this would seem to rely strongly on trust. Setting aside expected value calculations, there seems to be a strong cultural norm in my country against allowing lottery winners to remain anonymous. In the United States, only seven states allow this without an exemption being made—of course, that only applies to standard lotteries, not donor lotteries. But the point remains: there exists a common understanding in the US and Canada that lottery winners should not be allowed to remain anonymous without good reason.

This is not the case in Europe, where it is far more common for lottery winners to remain anonymous.

When the rules for anonymity were being drafted, was any thought given to this issue? Or was it just decided by default because the rules were drafted by people in a country for which this is just their cultural norm?

(I'm not necessarily against allowing anonymous winners; it just initially feels weird to me because of the cultural norm of the society in which I was raised, and I'm interested in knowing how much thought went into this decision.)

Comment by ericherboso on Donor lottery details · 2017-12-21T02:24:21.511Z · score: 4 (4 votes) · EA · GW

I'd be interested in learning your general thought process, though probably you should only answer these after you've allocated the entire lottery amount, and only if you feel that it makes sense to answer publicly.

  1. How much time would you say that you invested in determining where to give?
  2. How many advisors did you turn to in order to help think through these decisions? In retrospect, do you think that you took advice from too many different people, not enough, or just the right amount?
  3. Was The Chapter among the first potential causes you thought of?
  4. How many different organizations did you seriously consider? Of these, how many reached the stage where you interviewed them?

The Chapter sounds like an excellent giving opportunity for a gift of this size, since it's directly paying for a position that they would need to maintain their current level of effectiveness. I'm glad to know that my portion of the donor lottery funds is being used in such a positive manner.

Comment by ericherboso on Should EAs think twice before donating to GFI? · 2017-08-31T21:26:11.781Z · score: 3 (5 votes) · EA · GW

I work for ACE, but below are my immediate personal thoughts. This is not an official ACE response.

There is also a further option, that we consider whether EAs could prioritise meta-evaluation projects for ACE and other EA related groups. If we desire to optimise evidence based (rather than more ideologically weighted) opportunities for donors, it could be argued that we ought to limit donations until these criteria are met…

Just to be clear, you are proposing that EAs stop donating to ACE and ACE’s top charities and instead use the money to fund an external review of ACE. This is a dramatic proposition.

ACE believes transparency is extremely important. It would not be difficult for an external reviewer to go through ACE’s materials privately. We welcome such criticism, and when we find that we’ve made a mistake, we publicly announce those mistakes.

If you’re serious about performing an evaluation of ACE, you should be aware of our most recent internal evaluation as well as GiveWell’s stance on external evaluation.

With that said, I don’t believe that the effort/expense of going through an external review is warranted. Below I will explain why.

Like some others I was a little surprised…

In your opening line, you linked to Harrison Nathan’s essay “The Actual Number is Almost Surely Higher”. I and other staff members at ACE strongly disagree with the criticism he has made in this and other essays. Last year, we responded to his claims, pointing out why we felt they were inaccurate. Later, he gave an interview with SHARK, where we yet again responded to his criticism. When he continued to give the same critiques publicly, we gave an in-depth response that goes into full detail of why his continued claims are false.

If you share any of the criticisms Nathan made in his essays, I highly recommend reading our latest response.

…it would seem reasonable that EAs might choose not to fund GFI or the other top ACE charities, primarily because these are not neglected groups.

When ACE recommends a charity, the concept of neglectedness is already baked into that recommendation. One of the criteria ACE uses when evaluating charities includes checking to make sure that there is room for more funding and concrete plans for growth. This factor takes into account funding sources from outside of ACE.

The OPP’s grant to GFI was taken into account when making GFI a top charity. Bollard’s statement that he thought OPP would take care of GFI’s room for more funding in the medium term is from April 2017, after our latest recommendations were made. I’m not on ACE’s research team, so I don’t know the exact details behind this. But I can assure you that as ACE is updating our yearly recommendations in December 2017, this is exactly the kind of thing that will be taken into account, if they haven't already done so.

…it may well be the case that EAs ought to invest in developing more inclusive frameworks for intervention, and concentrate more resources on movement theorising. It is my belief that undertaking work to further explore these issues through a system of meta-evaluation could in turn create a stronger foundation for improved outcomes.

I agree that exploring more is particularly impactful when it comes to effective animal advocacy. But I disagree with your proposal on how to do this.

I’m most excited about additional research into potential intervention types, such as the work being done by the ACE Research Fund and ACE’s new Experimental Research Division. I think it makes a lot of sense for us to focus on more research, and my personal donations are geared more toward this area than the direct advocacy work that the top charities perform.

Your alternative proposal is to fund groups like Food Empowerment Project, Encompass, and Better Eating International specifically because “they tend to fall outside the welfare / abolition paradigm favored by EAA, ACE and Open Phil”, and thus presumably are relatively neglected. I strongly disagree with this line of thinking, even though I personally like these specific organizations. (I’ve personally donated to Encompass this year.)

80k Hours points out that being evidence-based doesn’t have nearly as large an impact as choosing the right cause area. When it comes to the welfare/abolition paradigm, avoiding welfare organizations is costly.

This isn’t to say that abolitionism isn’t a worthy goal; I personally would love to see a world where speciesism is eradicated and no animals are so callously harmed for food. But to get from here to there requires a welfare mindset; abolitionist techniques lack tractability.

One of the reasons why ACE likes being transparent is that we recognize that our philosophy might not correspond exactly to those of everyone else. By making our reasoning transparent, this makes it easier for others to insert their own philosophical underpinnings and assumptions to choose a more appropriate charity for them. This is one reason why we list so many standout charities; we believe that there are donors out there who have specific needs/desires that would make it more appropriate for them to fund a standout charity than any of our top charities. We are currently in the process of making it even easier to do this by creating a questionnaire that allows users to answer a few philosophical questions, allowing us to customize a recommendation specifically tailored to them.

Comment by ericherboso on Setting our salary based on the world’s average GDP per capita · 2017-08-28T15:45:41.536Z · score: 7 (13 votes) · EA · GW

While I certainly don't want to argue against other EAs taking up this example and choosing to live more frugally in order to achieve more overall good, I nevertheless want to remind the EA community that marketing EA to the public requires that we spend our idiosyncrasy credits wisely.

We only have so many weirdness points to spend. When we spend them on particularly extreme things like intentionally living on such a small amount, it makes it more difficult to get EA newcomers into the other aspects of EA that are more important, like strategic cause selection.

I do not want to dissuade anyone from taking the path of giving away everything above $10k/person, so long as they truly are in a position to do this. But doing so requires a social safety net that, as Evan points out elsewhere in this thread, is generally only available to those who are in good health and fully able-bodied. I will add that this kind of option is also generally available only to those from a certain socio-economic background, and that this kind of messaging may be somewhat antithetical to the goal of inclusion that some of us in the movement are pursuing through diversity initiatives.

If living extremely frugally were extremely effective, then maybe we'd want to pursue it more generally despite the above arguments. But the marginal value of giving everything over $10k/person versus the existing EA norm of giving 10-50% isn't that much when you take into account that the former hinders EA outreach by being too demanding. Instead, we should focus on the effectiveness aspect, not the demandingness aspect.

Nevertheless, I think it is important for the EA movement to have heroes that go the distance like this! If you think you may potentially become one of them, then don't let this post discourage you. Even if I believe this aspect of EA culture should be considered supererogatory (or whatever the consequentialist analog is), I nevertheless am proud to be part of a movement that takes sacrifice at this level so seriously.

Comment by ericherboso on The history of the term 'effective altruism' · 2017-08-12T01:09:14.709Z · score: 1 (3 votes) · EA · GW

notacleverthrow-away on Reddit points out that there's an even earlier usage of the term on the SL4 wiki by Anand from way back in January 2003! Here's the EffectiveAltruism page on the SL4 wiki.

Comment by ericherboso on Save the Date for EA Global Boston and San Francisco · 2017-03-22T07:47:51.712Z · score: 2 (1 votes) · EA · GW

It may be worthwhile to change the banner image at the top of this forum to an image that informs people of upcoming EA Global dates. That way the information stays visible even when lots of other topics begin pushing this post down on the homepage.

Comment by ericherboso on EA Global 2017 Update · 2017-02-23T17:27:30.563Z · score: 11 (6 votes) · EA · GW

Please try to announce specific EAG dates soon.

My original plan was to prioritize EAG over any other conferences happening at the same time. But early bird pricing and limited ticket availability on other conferences has forced me to purchase tickets to three separate conferences in June, July, and August. I am hoping that these will not conflict with EAG, but, if they do, now I will have to skip EAG rather than these other conferences.

I'm sure I'm not the only one in this position. EAG is likely losing out on attendees because it is taking so long to finalize dates.

Comment by ericherboso on Why I left EA · 2017-02-20T06:49:10.341Z · score: 29 (28 votes) · EA · GW

Thank you, Lila, for your openness in explaining your reasons for leaving EA. It's good to hear legitimate reasons why someone might leave the community; it's certainly better than the outsider anti-EA arguments that too often misrepresent EA. I hope that other insiders who leave the movement will also be kind enough to share their reasoning, as you have here.

While I recognize that Lila does not want to participate in a debate, I nevertheless would like to contribute an alternate perspective for the benefit of other readers.

Like Lila, I am a moral anti-realist. Yet while she has left the movement largely for this reason, I still identify strongly with the EA movement.

This is because I do not feel, as Lila does, that utilitarianism is required to prop up so many of EA's ideas. For example, non-consequentialist moral realists can still use expected value to try to maximize the good they do without thinking that the maximization itself is the ultimate source of that good. Presumably if you think lying is bad, then refraining from lying twice may be better than refraining from lying just once.

I agree with Lila that many EAs act too glib about deaths from violence being no worse than deaths from non-violence. But to the extent that this is true, we can just weight these differently. For example, Lila rightly points out that "violence causes psychological trauma and other harms, which must be accounted for in a utilitarian framework". EAs should definitely take into account these extra considerations about violence.

But the main difference between myself and Lila here is that when she sees EAs not taking things like this into consideration, she takes that as an argument against EA; against utilitarianism; against expected value. Whereas I take it as an improper expected value estimate that doesn't take into account all of the facts. For me, this is not an argument against EA, nor even an argument against expected value -- it's an argument for why we need to be careful about taking into account as many considerations as possible when constructing expected value estimates.

As a moral anti-realist, I have to figure out how to act not by discovering rules of morality, but by deciding on what should be valued. If I wanted, I suppose I could just choose to go with whatever felt intuitively correct, but evolution is messy, and I trust a system of logic and consistency more than any intuitions that evolution has forced upon me. While I still use my intuitions because they make me feel good, when my intuitions clash with expected value estimates, I feel much more comfortable going with the EV estimates. I do not agree with everything individual EAs say, but I largely agree with the basic ideas behind EA arguments.
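To make the intuitions-versus-estimates point concrete, here is a toy expected value comparison. The interventions and all the numbers are hypothetical, purely for illustration of why an EV estimate can favor an option that feels intuitively riskier:

```python
def expected_value(outcomes):
    """Sum of probability-weighted values over (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# Intervention A: a certain, modest amount of good.
ev_a = expected_value([(1.0, 10)])

# Intervention B: a small chance of a large amount of good.
ev_b = expected_value([(0.1, 200), (0.9, 0)])

print(ev_a)  # 10.0
print(ev_b)  # 20.0 -- higher EV, despite feeling riskier
```

The point isn't that the math is sophisticated; it's that making the estimate explicit forces you to include considerations (like the trauma weighting mentioned above) that a gut intuition can silently omit.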

There are all sorts of moral anti-realists. Almost by definition, it's difficult to predict what any given moral anti-realist would value. I endorse moral anti-realism, and I just want to emphasize that EAs can become moral anti-realist without leaving the EA movement.

Comment by ericherboso on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-13T19:30:50.996Z · score: 7 (7 votes) · EA · GW

I agree: it is indeed reasonable for people to have read our estimates the way they did. But when I said that we don't want others to "get the wrong idea", I'm not claiming that the readers were at fault. I'm claiming that the ACE communications staff was at fault.

Internally, the ACE research team was fairly clear about what we thought about leafleting in 2014. But the communications staff (and, in particular, I) failed to adequately get across these concerns at the time.

Later, in 2015 and 2016, I feel that whenever an issue like leafleting came up publicly, ACE was good about clearly expressing our reservations. But we neglected to update the older 2014 page with the same kind of language that we now use when talking about these things. We are now doing what we can to remedy this, first by including a disclaimer at the top of the older leafleting pages, and second by planning a full update of the leafleting intervention page in the near future.

Per your concern about cost-effectiveness estimates, I do want to say that our research team will be making such calculations public on our Guesstimate page as time permits. But for the time being, we had to take down our internal impact calculator because the way that we used it internally did not match the ways others (like Slate Star Codex) were using it. We were trying to err on the side of openness by keeping it public for as long as we did, but in retrospect there just wasn't a good way for others to use the tool in the way we used it internally. Thankfully, the Guesstimate platform includes upper and lower bounds directly in the presented data, so we feel it will be much more appropriate for us to share with the public.

You said "I think the error was in the estimate rather than in expectation management" because you felt the estimate itself wasn't good; but I hope this makes it more clear that we feel that the way we were internally using upper and lower bounds was good; it's just that the way we were talking about these calculations was not.

Internally, when we look at and compare animal charities, we continue to use cost effectiveness estimates as detailed on our evaluation criteria page. We intend to publicly display these kinds of calculations on Guesstimate in the future.

As you've said, the lesson should not be for people to trust things others say less in general. I completely agree with this sentiment. Instead, when it comes to us, the lessons we're taking are: (1) communications staff needs to better explain our current stance on existing pages, (2) comm staff should better understand that readers may draw conclusions solely from older pages, without reading our more current thinking on more recently published pages, and (3) research staff should be more discriminating on what types of internal tools are appropriate for public use. There may also be further lessons that can be learned from this as ACE staff continues to discuss these issues internally. But, for now, this is what we're currently thinking.

Comment by ericherboso on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-13T01:14:52.965Z · score: 5 (5 votes) · EA · GW

Well said, Erika. I'm happy with most of these changes, though I'm sad that we have had to remove the impact calculator in order to ensure others don't get the wrong idea about how seriously such estimates should be taken. Thankfully, Allison plans on implementing a replacement for it at some point using the Guesstimate platform.

For those interested in seeing the exact changes ACE has made to the site, see the disclaimer at the top of the leafleting intervention page and the updates to our mistakes page.

Comment by ericherboso on EAs write about where they give · 2016-12-23T17:37:45.205Z · score: 1 (1 votes) · EA · GW

Animal Charity Evaluators' version of this kind of post for 2016 is here.

Comment by ericherboso on Donor lotteries: demonstration and FAQ · 2016-12-07T19:11:06.547Z · score: 5 (5 votes) · EA · GW

I'd like to contribute $1k. Would you like to coordinate together so we can meet the $5k threshold?

Edit: After further consideration, I decided to instead donate $500 to the donor lottery while increasing my direct donations elsewhere.

Comment by ericherboso on .impact updates 3 of 3: Impact Missions, peer-to-peer fundraisers, matching donations · 2016-11-16T21:28:05.621Z · score: 0 (0 votes) · EA · GW

Please include a question about race. At the Effective Animal Advocacy Symposium this past weekend at Princeton, the 2015 EA Survey was specifically called out for neglecting to ask a question about the race of the respondents.

Comment by ericherboso on Reorganizing EA NTNU into agile self-organizing teams · 2016-11-16T00:15:06.923Z · score: 3 (3 votes) · EA · GW

There's lots of great stuff you guys are doing, but I'd like to comment on one thing in particular: your t-shirts. They look awesome.

I know some EAs think they are low value, but, as an introvert, having a great EA t-shirt helps to initiate conversations with acquaintances when they ask about it. Plus, I imagine it would help build camaraderie between members of any local EA group.

Very cool. (c:

Comment by ericherboso on The 2015 Survey of Effective Altruists: Results and Analysis · 2016-11-12T22:22:00.213Z · score: 1 (1 votes) · EA · GW

At the Effective Animal Advocacy Symposium, Garrett Broad pointed out in his talk that the 2015 Survey of Effective Altruists did not ask about race, which is worrying given how overwhelmingly white the movement is. To my knowledge this makes at least two public critiques of the movement on this specific topic.

He points out that the best way to deal with race issues is not to ignore the issue, but to bring it front and center. Could we please be sure to include a question about race on the 2016 version of this survey?

EDIT: Here's an image. I'll upload a video of his talk once ACE puts the videos of the conference online.

EDIT: The video is here. It's titled "Advocacy for Education" and Garrett Broad's section of the talk begins at 33:20.

Comment by ericherboso on What does Trump mean for EA? · 2016-11-11T05:38:32.826Z · score: 14 (14 votes) · EA · GW

A few of us have experience working in politics and could conceivably accomplish some good by being an influencer in Trump's White House. Others of us have the ability to pitch Thiel on stuff. Since Thiel has sway in the Trump transition, this means we could conceivably get an EA or two into positions of influence in the Trump administration.

I'm not sure that it would be a good idea to actually do this, but I'm mentioning it because it doesn't seem outside the realm of possibility to actually do it, and it could plausibly be highly effective. Here are some of the questions we'd need to answer:

  1. If we had an EA inside the Trump administration, would it do more good than if they stayed in their current position instead? This depends partly on what they're currently doing and partly on how likely we estimate they could actually make a difference in policy. My intuition is that if we expect Trump's policies to be very bad, then even a small influence could translate into a large amount of good.
  2. Who would be best suited for this, if we decided we wanted to try it? I'm not sure of what would count as experienced enough to do something like this. There are a few people at Effective Altruism Policy Analytics, and I believe there are a couple of people that have experience with lobbying in DC.
  3. Who would make the pitch to Thiel?
  4. What would the pitch consist of? We'd need to know exactly what parts of EA Thiel cares most about, and then we'd need to stress those aspects.
  5. How likely would Thiel be swayed by such a pitch? If he endorsed Trump because he wanted influence in the administration, then I believe Thiel would be fully on board with this idea. But if he endorsed Trump because he actually believes Trump's positions are good, then I can see where he would hate this idea.
  6. Would Thiel be able to get the EA in a position high enough to actually influence policy? How high up would the position have to be in order for it to be influential? How influential are mid-level staffers?

I don't know the answer to these questions. I don't know if this is even a workable idea. I certainly would hate to convince an EA to drop their current work for this if it doesn't turn out to be an influential position. But it seems possible that this could be a high-value opportunity, so I'm bringing it to everyone here.

Comment by ericherboso on Looking for Wikipedia article writers (topics include many of interest to effective altruists) · 2016-04-17T14:38:55.745Z · score: 1 (1 votes) · EA · GW

For anyone working on pages for EA organizations, keep in mind that (1) you probably shouldn't be an employee of that organization and (2) considerable attention should be given to a criticism section. The ACE page was removed in part because the article was "too positive", and people like me were prohibited from adding substantive critical content to it because of my affiliation with the organization (per Wikipedia's conflict of interest policy).

I would not recommend relisting ACE in particular without a criticism section that cites criticism from several different sources. See the AfD page for ACE and compare to the AfD for 80k Hours for more details. (Note that the 80k Hours article survived deletion by being much less positive.)

Comment by ericherboso on Effective Altruism Merchandise Ideas · 2015-10-21T16:22:45.209Z · score: 3 (3 votes) · EA · GW

In general, I like the idea of having something to wear that promotes discussion. It's especially useful for introverts like myself who have trouble bringing up EA in social contexts, but have no problem responding to questions about my shirt and using that as an introduction to EA.

The shirt given out at EA Global was good, as it just has the term "Effective Altruism" which tends to prompt questions. The shirt GiveWell sells is also excellent, as they are a very well named organization. But "Doing Good Effectively by Using Reason" seems clunky to me. I believe it is too long and it feels more pompous than something shorter like "GiveWell".

Contrary to what you've written, I think something like "Optimizing QALYs" might actually be good. It's short, easy to read, doesn't sound pompous, and will definitely prompt a question about what it means in a social situation. That is the kind of shirt that I'd actually wear and find useful.

Other shirts I'd find useful would be for various well-named organizations, like "Animal Charity Evaluators", "Giving What We Can", "Charity Science", etc. These don't even need slogans; they can use their name/logo alone, and I think I'd find the shirt useful.

Comment by ericherboso on The EU is legally obligated to double foreign aid spending · 2015-09-15T20:11:23.638Z · score: 2 (2 votes) · EA · GW

Even if it is not legally enforceable, doesn't the 0.7% of GNP figure act as a sort of Schelling point here? If so, it could be used in the same way we currently use the "give 10% of your income" meme: as an anchoring number that shows the ballpark we're interested in, rather than sounding like we invented a number out of whole cloth.