Posts

Effective Advertising and Animal Charity Evaluators 2018-06-13T19:43:07.214Z · score: 19 (18 votes)
Animal Charity Evaluators Introduces the Recommended Charity Quiz 2018-03-15T13:46:41.098Z · score: 6 (5 votes)
[LINK] AMA by Animal Charity Evaluators on Reddit 2017-11-30T16:45:53.521Z · score: 6 (5 votes)
A Defense of Normality 2015-09-07T21:00:29.231Z · score: 24 (30 votes)

Comments

Comment by ericherboso on A list of EA-related podcasts · 2019-12-03T08:05:14.075Z · score: 14 (7 votes) · EA · GW

Luke Muehlhauser's Conversations from the Pale Blue Dot had an episode interviewing Toby Ord back in January 2011. This is from before the term "effective altruism" was being used to describe the movement. I think it may be the first podcast episode to really discuss what would eventually be called EA, with the second oldest podcast episode being Massimo Pigliucci's interview with Holden Karnofsky on Rationally Speaking in July 2011.

(There was plenty of discussion online about these issues in years prior to this, but as far as I can tell, discussion didn't appear in podcast form until 2011.)

Comment by ericherboso on The Importance of EA Dedication and Why it Should Be Encouraged · 2018-08-23T18:28:20.733Z · score: 0 (0 votes) · EA · GW

I believe there is a threshold difference between passionate and self-disciplined EAs. As excited EAs become more dedicated, they tend to hit a wall where their frugality starts to affect them personally much more than it previously did. This wall takes effort to overcome, if it is overcome at all.

Meanwhile, when an obligatory EA becomes more dedicated, that wall doesn't exist (or at least it has less force). So it's easier for self-disciplined EAs to get to more extreme levels than for passionate EAs.

Comment by ericherboso on Harvard EA's 2018–19 Vision · 2018-08-05T21:53:35.971Z · score: 3 (3 votes) · EA · GW

Please feel free to steal the html used for footnotes in EA forum posts like this one.

  • In-page anchor links: <a id="ref1" href="#fn1">&sup1;</a>
  • Linked footnote: <p id="fn1">&sup1; <small>Footnote text.</small></p>
  • Footnote link back to article text: <a href="#ref1">↩</a>

Comment by ericherboso on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-07-31T11:16:00.777Z · score: 3 (3 votes) · EA · GW

This is now posted on the Animal Welfare Fund Payout Report page.

Comment by ericherboso on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-07-31T11:13:02.486Z · score: 15 (13 votes) · EA · GW

While I personally have trust that Nick Beckstead has been acting in good faith, I also completely understand why donors might choose to stop donating because of this extreme lack of regular communication.

It's important for EAs to realize that even when you have good intentions and are making good choices about what to do, if you aren't effectively communicating your thinking to stakeholders, then you aren't doing all that you should be doing. Communications are vitally important, and I hope that comments like this one really help to drive this point home to not just EA Funds distributors, but also others in the EA community.

Comment by ericherboso on EA Hotel with free accommodation and board for two years · 2018-06-21T04:06:23.763Z · score: 5 (11 votes) · EA · GW

Not all EAs are on board with AI risk, but it would be rude for this EA hotel to commit to funding general AI research on the side. Whether all EAs are on board with effective animal advocacy isn't the key point when deciding whether the hotel's provided meals are vegan.

An EA who doesn't care about veganism will be mildly put off if the hotel doesn't serve meat. But an EA who believes that veganism is important would be very strongly put off if the hotel served meat. The relative difference in how disturbed the latter person would be is presumably at least 5 times as strong as the minor inconvenience that the former person would feel. This means that even if only 20% of EAs are vegan, the expected value from keeping meals vegan would beat out the convenience factor of including meat for nonvegans.
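To make the comparison concrete, here is a toy expected-value calculation using the hypothetical numbers above (a 20% vegan share and a 5x disutility ratio; the utility units are arbitrary):

```python
# Toy expected-value comparison of two catering policies, using the
# hypothetical numbers from the comment above (utility units are arbitrary).
vegan_fraction = 0.20          # assumed share of EAs who are vegan
mild_cost = 1.0                # disutility to a non-vegan of all-vegan meals
strong_cost = 5.0 * mild_cost  # disutility to a vegan of meat being served

# Expected disutility per guest under each policy
all_vegan_policy = (1 - vegan_fraction) * mild_cost   # 0.8
serve_meat_policy = vegan_fraction * strong_cost      # 1.0

# Even with only 20% vegans, the all-vegan policy has lower expected disutility.
assert all_vegan_policy < serve_meat_policy
```

Under these assumptions the all-vegan policy wins on expected value; the conclusion flips only if the disutility ratio or the vegan share is much lower than assumed.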

Comment by ericherboso on Effective Advertising and Animal Charity Evaluators · 2018-06-19T23:20:45.255Z · score: 1 (1 votes) · EA · GW

You raise a number of points; I’ll try to respond to each of them.

For people who are already donating to animal organisations which aren't shelters then it isn't necessarily better to give to "effective" organisations as put forward by ACE because there aren't sufficient comparisons that can be made between organisations they are already supporting.

We do not believe this is true. We explicitly rank our top charities as being better targets for effective giving than our standout charities, and we explicitly rank our standout charities as better targets than organizations not on our Recommended Charity list.

This doesn’t mean that more effective EAA charities don’t exist. We’re currently expanding our focus to several organizations around the world that we hadn’t previously looked at. (There's still time to submit charities for review in 2018.) There are also some charities that we were not able to evaluate last year for one reason or another. These charities may or may not perform better than our current Top Charities. We encourage you to learn more about how we evaluate charities.

As an example, I continue to wonder why someone would necessarily believe it is better to give to GFI over an organisation doing pluralistic work in the animal movement? One is well supported by various foundations and is far from underconsidered or neglected, whilst others that work on more meta level questions of plurality and inclusivity tend to be marginalised, particularly through not reflecting a favoured "mainstream" ideology.

GFI rates well on all of our criteria. If you want to compare them to another group doing pluralistic work, then you’d need to directly compare our reviews of each organization. Alternatively, you are free to perform your own analysis to compare relative potential effectiveness; if performed well, such analyses could then be used in future reviews by ACE.

Keep in mind that we explicitly believe a pluralistic approach is best overall. It's just that individual charities working on pluralistic approaches may have wildly different levels of effectiveness, and, given limited resources, we should prioritize whatever results in the most good.

Another issue is that ACE doesn't account for moral theory in relation to rights or utilitarianism thus largely presenting a fairly unfortunate picture in the animal movement in terms of utilitarian = effective and rights = ineffective.

We are quite transparent about the philosophical foundations of our work. We explicitly maintain that the most effective approach is probably a pluralistic one, and we hope that a diverse group of animal charities will continue pursuing a wide range of interventions to help all populations of animals. However, we will continue to recommend that marginal resources support the most effective tactics.

This is not an issue of rights vs utility. Whether you believe in rights or in utility, presumably you would want to do twice as much good with limited resources if you get the chance.

(A quick aside on deontology vs consequentialism as it relates to cause prioritization: Let's say you're a deontologist who believes murder is wrong. You're given a coupon that you can redeem at one of two locations. If you redeem at the first, you prevent a murder. If you redeem at the second, you prevent two murders. Can you honestly say that, even as a deontologist, you wouldn't prefer to redeem at the second location?)

The suffering of all animals is important, whether those animals are companion animals, animals in a lab, animals used in entertainment, or farmed animals. But when you have limited resources, you should prioritize helping those animals for which you can effectively reduce suffering. This is true whether you're talking about a rights organization or a utilitarian organization (to use your terminology).

I support the idea of evaluation by ACE but i'm sceptical that the claims that ACE tend to make sufficiently reflect the work that has taken place, or that there is enough transparency in terms of the underlying values and beliefs that ACE tend to represent. I continue to believe that some form of external meta-evaluation would be useful for ACE.

If there are specific claims that you believe do not reflect the work that we do, you are always welcome to give feedback. We also strive to be as transparent as possible in everything that we do. With regard to outside evaluation, we have explicitly asked for external reviewers and have a public list of external reviewers on our site.

I hope that these responses help to alleviate some of your concerns.

Comment by ericherboso on Effective Advertising and Animal Charity Evaluators · 2018-06-19T22:26:19.776Z · score: 1 (1 votes) · EA · GW

sidenote: I’d be interested to what extent ACE now uses Bayesian reasoning in their estimates, e.g. by adjusting impact by how likely small sample studies are false positives.

Our current methodology uses an alternative approach of treating cost-effectiveness estimates as only one input into our decisions. We then take care to "notice when we are confused" by remaining aware that if a cost-effectiveness estimate is much higher than we would expect based on the other things we know about an intervention or charity, that may be due to an error in our estimate rather than to truly exceptional cost effectiveness.

We admit that Bayesian techniques would more accurately adjust for uncertainty, but this would require additional work in developing appropriate priors for each reference class, and this process may not generate worthwhile differences in our evaluations, given our data set. See this section of our Cost-Effectiveness Estimates page for details on our thinking about this.

Comment by ericherboso on Effective Advertising and Animal Charity Evaluators · 2018-06-19T21:59:52.264Z · score: 1 (1 votes) · EA · GW

We had on the order of hundreds of new donors during our 2017 matching campaign, making up 56% of the pre-matched amount raised. A very large portion of these donors are new to effective giving, as most come from the AR space.

We track donor engagement with EAA directly through retention and surveys, and we have limited indirect tracking of engagement with EA more generally. (Concerns about privacy (and GDPR) prevent us from tracking more deeply, such as through social media engagement.)

We also actively advocate EAA and EA ideas to these donors via email and other messaging.

Comment by ericherboso on Effective Advertising and Animal Charity Evaluators · 2018-06-19T20:02:46.793Z · score: 2 (2 votes) · EA · GW

This donor is a major general animal welfare donor. Had they not given the ~$600k to the Recommended Charity Fund, they likely would have given it to other non-EAA animal charities, or they may have simply left the money in their foundation for future donations.

While they do support some of our Top Charities and Standout Charities, we do not think it likely that the counterfactual ~$600k would have been donated to any of those Recommended Charities. Also, the ~$600k is in addition to their normal donations to our Recommended Charities.

Comment by ericherboso on Animal Equality showed that advocating for diet change works. But is it cost-effective? · 2018-06-08T00:43:46.740Z · score: 4 (4 votes) · EA · GW

Human DALYs deal with positive productive years added to a human life. Pig years saved deal with reducing suffering via fewer animals being born. I'm not sure that these are analogous enough to directly compare them in this way.

For example, if you follow negative average preference utilitarianism, the additional frustrated preferences averted through pig-years-saved would presumably be more valuable than an equivalent number of human-disability-adjusted-life-years, which would only slightly decrease the average frustrated preferences.

Different meta-ethical theories will deal with the difference between DALYs and pig-years-saved differently. This may affect how you view the comparison between them.

(With that said, I find these results sobering. Especially the part where video outperforms VR possibly due to a negative multiplier on VR.)

Comment by ericherboso on Announcing the 2017 donor lottery · 2017-12-21T20:19:30.305Z · score: 1 (1 votes) · EA · GW

I can't help but notice that one of the lottery entrants is listed as anonymous. According to the rules, entrants may remain anonymous even if they win, so long as they express a strong objection to their name being public before the draw date. (No entrants to the 2016 donor lottery were anonymous.)

I realize that which charitable cause the winner chooses to fund doesn't change the expected value of any entrant's contribution to the lottery. As Carl Shulman points out, the lottery's pot size and draw probability, as well as entrants' expected payout, are all unaffected even if the eventual winner does nothing effective with their donation.
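A quick sketch of that point, with hypothetical numbers: an entrant's expected grant equals their own contribution, and nothing the eventual winner does with the pot enters the calculation.

```python
# Toy donor-lottery math (pot and contribution sizes are hypothetical).
pot = 100_000            # total pot, in dollars
my_contribution = 5_000  # my entry

win_probability = my_contribution / pot   # 0.05
expected_grant = win_probability * pot    # exactly my contribution

# The winner's eventual allocation never appears in this calculation,
# so each entrant's expected value is unaffected by what the winner funds.
assert expected_grant == my_contribution
```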

Nevertheless, donor lotteries like this would seem to rely strongly on trust. Setting aside expected value calculations, there seems to be a strong cultural norm in my country against allowing lottery winners to remain anonymous. In the United States, only seven states allow this without an exemption being made—of course, that only applies to standard lotteries, not donor lotteries. But the point remains: there exists a common understanding in the US and Canada that lottery winners should not be allowed to remain anonymous without good reason.

This is not the case in Europe, where it is far more common for lottery winners to remain anonymous.

When the rules for anonymity were being drafted, was any thought given to this issue? Or was it decided by default because the rules were drafted by people in a country where this is simply the cultural norm?

(I'm not necessarily against allowing anonymous winners; it just initially feels weird to me because of the cultural norm of the society in which I was raised, and I'm interested in knowing how much thought went into this decision.)

Comment by ericherboso on Donor lottery details · 2017-12-21T02:24:21.511Z · score: 4 (4 votes) · EA · GW

I'd be interested in learning your general thought process, though probably you should only answer these after you've allocated the entire lottery amount, and only if you feel that it makes sense to answer publicly.

  1. How much time would you say that you invested in determining where to give?
  2. How many advisors did you turn to in order to help think through these decisions? In retrospect, do you think that you took advice from too many different people, not enough, or just the right amount?
  3. Was The Chapter among the first potential causes you thought of?
  4. How many different organizations did you seriously consider? Of these, how many reached the stage where you interviewed them?

The Chapter sounds like an excellent giving opportunity for a gift of this size, since it's directly paying for a position that they would need to maintain their current level of effectiveness. I'm glad to know that my portion of the donor lottery funds is being used in such a positive manner.

Comment by ericherboso on Should EAs think twice before donating to GFI? · 2017-08-31T21:26:11.781Z · score: 3 (5 votes) · EA · GW

I work for ACE, but below are my immediate personal thoughts. This is not an official ACE response.

There is also a further option, that we consider whether EAs could prioritise meta-evaluation projects for ACE and other EA related groups. If we desire to optimise evidence based (rather than more ideologically weighted) opportunities for donors, it could be argued that we ought to limit donations until these criteria are met…

Just to be clear, you are proposing that EAs stop donating to ACE and ACE’s top charities and instead use the money to fund an external review of ACE. This is a dramatic proposition.

ACE believes transparency is extremely important. It would not be difficult for an external reviewer to go through ACE’s materials privately. We welcome such criticism, and when we find that we’ve made a mistake, we publicly announce those mistakes.

If you’re serious about performing an evaluation of ACE, you should be aware of our most recent internal evaluation as well as GiveWell’s stance on external evaluation.

With that said, I don’t believe that the effort/expense of going through an external review is warranted. Below I will explain why.

Like some others I was a little surprised…

In your opening line, you linked to Harrison Nathan’s essay “The Actual Number is Almost Surely Higher”. I and other staff members at ACE strongly disagree with the criticism he has made in this and other essays. Last year, we responded to his claims, pointing out why we felt they were inaccurate. Later, he gave an interview with SHARK, where we yet again responded to his criticism. When he continued to give the same critiques publicly, we gave an in-depth response that goes into full detail of why his continued claims are false.

If you share any of the criticisms Nathan made in his essays, I highly recommend reading our latest response.

…it would seem reasonable that EAs might choose not to fund GFI or the other top ACE charities, primarily because these are not neglected groups.

When ACE recommends a charity, the concept of neglectedness is already baked into that recommendation. One of the criteria ACE uses when evaluating charities includes checking to make sure that there is room for more funding and concrete plans for growth. This factor takes into account funding sources from outside of ACE.

The OPP’s grant to GFI was taken into account when making GFI a top charity. Bollard’s statement that he thought OPP would take care of GFI’s room for more funding in the medium term is from April 2017, after our latest recommendations were made. I’m not on ACE’s research team, so I don’t know the exact details behind this. But I can assure you that as ACE is updating our yearly recommendations in December 2017, this is exactly the kind of thing that will be taken into account, if they haven't already done so.

…it may well be the case that EAs ought to invest in developing more inclusive frameworks for intervention, and concentrate more resources on movement theorising. It is my belief that undertaking work to further explore these issues through a system of meta-evaluation could in turn create a stronger foundation for improved outcomes.

I agree that exploring more is particularly impactful when it comes to effective animal advocacy. But I disagree with your proposal on how to do this.

I’m most excited about additional research into potential intervention types, such as the work being done by the ACE Research Fund and ACE’s new Experimental Research Division. I think it makes a lot of sense for us to focus on more research, and my personal donations are geared more toward this area than the direct advocacy work that the top charities perform.

Your alternative proposal is to fund groups like Food Empowerment Project, Encompass, and Better Eating International specifically because “they tend to fall outside the welfare / abolition paradigm favored by EAA, ACE and Open Phil”, and thus presumably are relatively neglected. I strongly disagree with this line of thinking, even though I personally like these specific organizations. (I’ve personally donated to Encompass this year.)

80k Hours points out that being evidence-based doesn’t have nearly as large an impact as choosing the right cause area. When it comes to the welfare/abolition paradigm, avoiding welfare organizations is costly.

This isn’t to say that abolitionism isn’t a worthy goal; I personally would love to see a world where speciesism is eradicated and no animals are so callously harmed for food. But to get from here to there requires a welfare mindset; abolitionist techniques lack tractability.

One of the reasons why ACE likes being transparent is that we recognize that our philosophy might not correspond exactly to those of everyone else. By making our reasoning transparent, this makes it easier for others to insert their own philosophical underpinnings and assumptions to choose a more appropriate charity for them. This is one reason why we list so many standout charities; we believe that there are donors out there who have specific needs/desires that would make it more appropriate for them to fund a standout charity than any of our top charities. We are currently in the process of making it even easier to do this by creating a questionnaire that allows users to answer a few philosophical questions, allowing us to customize a recommendation specifically tailored to them.

Comment by ericherboso on Setting our salary based on the world’s average GDP per capita · 2017-08-28T15:45:41.536Z · score: 7 (13 votes) · EA · GW

While I certainly don't want to argue against other EAs taking up this example and choosing to live more frugally in order to achieve more overall good, I nevertheless want to remind the EA community that marketing EA to the public requires that we spend our idiosyncrasy credits wisely.

We only have so many weirdness points to spend. When we spend them on particularly extreme things like intentionally living on such a small amount, it makes it more difficult to get EA newcomers into the other aspects of EA that are more important, like strategic cause selection.

I do not want to dissuade anyone from taking the path of giving away everything above $10k/person, so long as they truly are in a position to do this. But doing so requires a social safety net that, as Evan points out elsewhere in this thread, is generally only available to those in good health and fully able-bodied. I will add that this kind of thing is also generally available only when one is from a certain socio-economic background, and that this kind of messaging may be somewhat antithetical to the goal of inclusion that some of us in the movement are attempting with diversity initiatives.

If living extremely frugally were extremely effective, then maybe we'd want to pursue it more generally despite the above arguments. But the marginal value of giving everything over $10k/person versus the existing EA norm of giving 10-50% isn't that much when you take into account that the former hinders EA outreach by being too demanding. Instead, we should focus on the effectiveness aspect, not the demandingness aspect.

Nevertheless, I think it is important for the EA movement to have heroes that go the distance like this! If you think you may potentially become one of them, then don't let this post discourage you. Even if I believe this aspect of EA culture should be considered supererogatory (or whatever the consequentialist analog is), I nevertheless am proud to be part of a movement that takes sacrifice at this level so seriously.

Comment by ericherboso on The history of the term 'effective altruism' · 2017-08-12T01:09:14.709Z · score: 2 (2 votes) · EA · GW

notacleverthrow-away on Reddit points out that there's an even earlier usage of the term on the SL4 wiki by Anand from way back in January 2003! Here's the page on EffectiveAltruism on sl4.org.

Comment by ericherboso on Save the Date for EA Global Boston and San Francisco · 2017-03-22T07:47:51.712Z · score: 2 (1 votes) · EA · GW

It may be worthwhile to change the banner image at the top of this forum to an image that informs people of upcoming EA Global dates. That way the information stays visible even when lots of other topics begin pushing this post down on the homepage.

Comment by ericherboso on EA Global 2017 Update · 2017-02-23T17:27:30.563Z · score: 11 (6 votes) · EA · GW

Please try to announce specific EAG dates soon.

My original plan was to prioritize EAG over any other conferences happening at the same time. But early bird pricing and limited ticket availability on other conferences has forced me to purchase tickets to three separate conferences in June, July, and August. I am hoping that these will not conflict with EAG, but, if they do, now I will have to skip EAG rather than these other conferences.

I'm sure I'm not the only one in this position. EAG is likely losing out on attendees because it is taking so long to finalize dates.

Comment by ericherboso on Why I left EA · 2017-02-20T06:49:10.341Z · score: 26 (26 votes) · EA · GW

Thank you, Lila, for your openness in explaining your reasons for leaving EA. It's good to hear legitimate reasons why someone might leave the community, and it's certainly better than the outsider anti-EA arguments that too often misrepresent EA. I hope that other insiders who leave the movement will also be kind enough to share their reasoning, as you have here.

While I recognize that Lila does not want to participate in a debate, I nevertheless would like to contribute an alternate perspective for the benefit of other readers.

Like Lila, I am a moral anti-realist. Yet while she has left the movement largely for this reason, I still identify strongly with the EA movement.

This is because I do not think utilitarianism is required to prop up as many of EA's ideas as Lila does. For example, non-consequentialist moral realists can still use expected value to try to maximize good done without thinking that the maximization itself is the ultimate source of that good. Presumably if you think lying is bad, then refraining from lying twice may be better than refraining from lying just once.

I agree with Lila that many EAs act too glib about deaths from violence being no worse than deaths from non-violence. But to the extent that this is true, we can just weight these differently. For example, Lila rightly points out that "violence causes psychological trauma and other harms, which must be accounted for in a utilitarian framework". EAs should definitely take into account these extra considerations about violence.

But the main difference between myself and Lila here is that when she sees EAs not taking things like this into consideration, she takes that as an argument against EA; against utilitarianism; against expected value. Whereas I take it as an improper expected value estimate that doesn't take into account all of the facts. For me, this is not an argument against EA, nor even an argument against expected value -- it's an argument for why we need to be careful about taking into account as many considerations as possible when constructing expected value estimates.

As a moral anti-realist, I have to figure out how to act not by discovering rules of morality, but by deciding on what should be valued. If I wanted, I suppose I could just choose to go with whatever felt intuitively correct, but evolution is messy, and I trust a system of logic and consistency more than any intuitions that evolution has forced upon me. While I still use my intuitions because they make me feel good, when my intuitions clash with expected value estimates, I feel much more comfortable going with the EV estimates. I do not agree with everything individual EAs say, but I largely agree with the basic ideas behind EA arguments.

There are all sorts of moral anti-realists. Almost by definition, it's difficult to predict what any given moral anti-realist would value. I endorse moral anti-realism, and I just want to emphasize that EAs can become moral anti-realist without leaving the EA movement.

Comment by ericherboso on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-13T19:30:50.996Z · score: 7 (7 votes) · EA · GW

I agree: it is indeed reasonable for people to have read our estimates the way they did. But when I said that we don't want others to "get the wrong idea", I'm not claiming that the readers were at fault. I'm claiming that the ACE communications staff was at fault.

Internally, the ACE research team was fairly clear about what we thought about leafleting in 2014. But the communications staff (and, in particular, I) failed to adequately get across these concerns at the time.

Later, in 2015 and 2016, I feel that whenever an issue like leafleting came up publicly, ACE was good about clearly expressing our reservations. But we neglected to update the older 2014 page with the same kind of language that we now use when talking about these things. We are now doing what we can to remedy this, first by including a disclaimer at the top of the older leafleting pages, and second by planning a full update of the leafleting intervention page in the near future.

Per your concern about cost-effectiveness estimates, I do want to say that our research team will be making such calculations public on our Guesstimate page as time permits. But for the time being, we had to take down our internal impact calculator because the way that we used it internally did not match the ways others (like Slate Star Codex) were using it. We were trying to err on the side of openness by keeping it public for as long as we did, but in retrospect there just wasn't a good way for others to use the tool in the way we used it internally. Thankfully, the Guesstimate platform includes upper and lower bounds directly in the presented data, so we feel it will be much more appropriate for us to share with the public.

You said "I think the error was in the estimate rather than in expectation management" because you felt the estimate itself wasn't good; but I hope this makes it more clear that we feel that the way we were internally using upper and lower bounds was good; it's just that the way we were talking about these calculations was not.

Internally, when we look at and compare animal charities, we continue to use cost effectiveness estimates as detailed on our evaluation criteria page. We intend to publicly display these kinds of calculations on Guesstimate in the future.

As you've said, the lesson should not be for people to trust things others say less in general. I completely agree with this sentiment. Instead, when it comes to us, the lessons we're taking are: (1) communications staff needs to better explain our current stance on existing pages, (2) comm staff should better understand that readers may draw conclusions solely from older pages, without reading our more current thinking on more recently published pages, and (3) research staff should be more discriminating on what types of internal tools are appropriate for public use. There may also be further lessons that can be learned from this as ACE staff continues to discuss these issues internally. But, for now, this is what we're currently thinking.

Comment by ericherboso on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-13T01:14:52.965Z · score: 5 (5 votes) · EA · GW

Well said, Erika. I'm happy with most of these changes, though I'm sad that we have had to remove the impact calculator in order to ensure others don't get the wrong idea about how seriously such estimates should be taken. Thankfully, Allison plans on implementing a replacement for it at some point using the Guesstimate platform.

For those interested in seeing the exact changes ACE has made to the site, see the disclaimer at the top of the leafleting intervention page and the updates to our mistakes page.

Comment by ericherboso on EAs write about where they give · 2016-12-23T17:37:45.205Z · score: 1 (1 votes) · EA · GW

Animal Charity Evaluators' equivalent post for 2016 is here.

Comment by ericherboso on Donor lotteries: demonstration and FAQ · 2016-12-07T19:11:06.547Z · score: 5 (5 votes) · EA · GW

I'd like to contribute $1k. Would you like to coordinate together so we can meet the $5k threshold?

Edit: After further consideration, I decided to instead donate $500 to the donor lottery while increasing my direct donations elsewhere.

Comment by ericherboso on .impact updates 3 of 3: Impact Missions, peer-to-peer fundraisers, matching donations · 2016-11-16T21:28:05.621Z · score: 0 (0 votes) · EA · GW

Please include a question about race. At the Effective Animal Advocacy Symposium this past weekend at Princeton, the 2015 EA Survey was specifically called out for neglecting to ask a question about the race of the respondents.

Comment by ericherboso on Reorganizing EA NTNU into agile self-organizing teams · 2016-11-16T00:15:06.923Z · score: 3 (3 votes) · EA · GW

You guys are doing lots of great stuff, but I'd like to comment on one thing in particular: your t-shirts. They look awesome.

I know some EAs think they are low value, but, as an introvert, having a great EA t-shirt helps to initiate conversations with acquaintances when they ask about it. Plus, I imagine it would help build camaraderie between members of any local EA group.

Very cool. (c:

Comment by ericherboso on The 2015 Survey of Effective Altruists: Results and Analysis · 2016-11-12T22:22:00.213Z · score: 1 (1 votes) · EA · GW

At the Effective Animal Advocacy Symposium, Garrett Broad pointed out in his talk that the 2015 Survey of Effective Altruists did not ask about race, which is worrying given how overwhelmingly white the movement is. To my knowledge this makes at least two public critiques of the movement on this specific topic.

He points out that the best way to deal with race issues is not to ignore the issue, but to bring it front and center. Could we please be sure to include a question about race on the 2016 version of this survey?

EDIT: Here's an image. I'll upload a video of his talk once ACE puts the videos of the conference online.

EDIT: The video is here. It's titled "Advocacy for Education" and Garrett Broad's section of the talk begins at 33:20.

Comment by ericherboso on What does Trump mean for EA? · 2016-11-11T05:38:32.826Z · score: 14 (14 votes) · EA · GW

A few of us have experience working in politics and could conceivably accomplish some good by being an influencer in Trump's White House. Others of us have the ability to pitch Thiel on stuff. Since Thiel has sway in the Trump transition, this means we could conceivably get an EA or two into positions of influence in the Trump administration.

I'm not sure that it would be a good idea to actually do this, but I'm mentioning it because it doesn't seem outside the realm of possibility to actually do it, and it could plausibly be highly effective. Here are some of the questions we'd need to answer:

  1. If we had an EA inside the Trump administration, would it do more good than if they stayed in their current position instead? This depends partly on what they're currently doing and partly on how likely it is that they could actually make a difference in policy. My intuition is that if we expect Trump's policies to be very bad, then even a small influence could translate into a large amount of good.
  2. Who would be best suited for this, if we decided we wanted to try it? I'm not sure what would count as experienced enough to do something like this. There are a few people at Effective Altruism Policy Analytics, and I believe there are a couple of people who have experience with lobbying in DC.
  3. Who would make the pitch to Thiel?
  4. What would the pitch consist of? We'd need to know exactly what parts of EA Thiel cares most about, and then we'd need to stress those aspects.
  5. How likely is it that Thiel would be swayed by such a pitch? If he endorsed Trump because he wanted influence in the administration, then I believe Thiel would be fully on board with this idea. But if he endorsed Trump because he actually believes Trump's positions are good, then I can see where he would hate this idea.
  6. Would Thiel be able to get the EA in a position high enough to actually influence policy? How high up would the position have to be in order for it to be influential? How influential are mid-level staffers?

I don't know the answer to these questions. I don't know if this is even a workable idea. I certainly would hate to convince an EA to drop their current work for this if it doesn't turn out to be an influential position. But it seems possible that this could be a high-value opportunity, so I'm bringing it to everyone here.

Comment by ericherboso on Looking for Wikipedia article writers (topics include many of interest to effective altruists) · 2016-04-17T14:38:55.745Z · score: 1 (1 votes) · EA · GW

For anyone working on pages for EA organizations, keep in mind that (1) you probably shouldn't be an employee of that organization and (2) considerable attention should be given to a criticism section. The ACE page was removed in part because the article was "too positive", and people like me were prohibited from adding substantive critical content to it because of my affiliation with the organization (per their conflict of interest policy).

I would not recommend relisting ACE in particular without a criticism section that cites criticism from several different sources. See the AfD page for ACE and compare to the AfD for 80k Hours for more details. (Note that the 80k Hours article survived deletion by being much less positive.)

Comment by ericherboso on Effective Altruism Merchandise Ideas · 2015-10-21T16:22:45.209Z · score: 3 (3 votes) · EA · GW

In general, I like the idea of having something to wear that promotes discussion. It's especially useful for introverts like myself who have trouble bringing up EA in social contexts, but have no problem responding to questions about my shirt and using that as an introduction to EA.

The shirt given out at EA Global was good, as it just has the term "Effective Altruism", which tends to prompt questions. The shirt GiveWell sells is also excellent, as GiveWell is a very well-named organization. But "Doing Good Effectively by Using Reason" seems clunky to me. I believe it is too long, and it feels more pompous than something shorter like "GiveWell".

Contrary to what you've written, I think something like "Optimizing QALYs" might actually be good. It's short, easy to read, doesn't sound pompous, and will definitely prompt the question of what it means in a social situation. That's the kind of shirt that I'd actually wear and find useful.

Other shirts I'd find useful would be for various well-named organizations, like "Animal Charity Evaluators", "Giving What We Can", "Charity Science", etc. These don't even need slogans; they can use their name/logo alone, and I think I'd find the shirt useful.

Comment by ericherboso on The EU is legally obligated to double foreign aid spending · 2015-09-15T20:11:23.638Z · score: 2 (2 votes) · EA · GW

Even if it is not legally enforceable, doesn't the 0.7% of GNP figure act as a sort of Schelling point here? If so, it could be used in the same way we currently use the "give 10% of your income" meme: as an anchoring number that shows the ballpark we're interested in, rather than sounding like a number we made up from whole cloth.

Comment by ericherboso on A Defense of Normality · 2015-09-15T16:22:54.401Z · score: 2 (2 votes) · EA · GW

For those interested, there is additional commentary on this issue on the main EA Facebook group and the EA Hangout group.

Comment by ericherboso on A Defense of Normality · 2015-09-15T16:05:26.848Z · score: 0 (0 votes) · EA · GW

I don't believe that this conventional wisdom is wrong. Clearly both past movements and our EA movement have been fueled by such superhuman efforts.

But these movements would have died out if they had only allowed superhuman actors. This is a bit of a strawman, but I'm trying to illustrate a point: I know most of us EAs are not superhuman. I know the less efficient of us are considered part of the movement and are not excised by the superhumans. But when I read through old Facebook threads on EA, I again and again see a public norm established as needing to be superhuman.

I'm not saying we should abolish superhuman acts, nor that they should be more quiet about it; instead, I'm claiming that when the public face of EA only shows such people, it does EA a disservice. Yes, the superhumans should still do superhuman stuff and dominate the headlines about EA. But when people come to places like this forum or the main EA Facebook group, they need to see that other effective altruists are just like them in as many respects as possible, so that there is as little inferential distance as we can manage between them-in-the-now and them-as-a-future-EA. The more inferential distance, the less likely they are to join EA.

Comment by ericherboso on A Defense of Normality · 2015-09-15T15:52:43.637Z · score: 0 (0 votes) · EA · GW

The problem is that you are claiming that, say, if I increase my donations from 10% to 50%, I will actually turn off no less than four people (on average) who would have donated 10% each! Does that seem obvious to you? It doesn't seem obvious to me, either in this specific example or in other examples with other numbers.

I would like to make clear that I am not making this claim. Your numbers here are correct; I agree that if you increase your donations from 10% to 50%, it does not seem likely that that would turn off no less than four people who would have donated 10% each.

However, I still think my intended claim stands. It is my belief that the people who do less are not as vocal as the people who do more. I do not think the people who do more should instead do less; rather, I think that the people who do less should become more vocal.

This isn't so much a problem with percentage of income donations, which is why I (perhaps incorrectly) said that that paragraph should be offensive to no one. But it is a problem when it comes to inefficient behaviors, like people who have hobbies that actually cost money, or people who don't maximize every moment of their day.

There is an unstated premise here that I should have made explicit. I'm talking only about those individuals who are already doing the maximum that they are going to do. If someone could plausibly be talked into upping their percentage from 10% to 50%, then they probably should. But if they are already donating their maximum, my argument is that they should be more vocal about their contribution level within the community. (Again, I don't think contribution level vocality is an issue; but I do think that normal sleeping/eating/playing pattern vocality is an issue in the EA community.)

You also have to throw in the countervailing effects of people being less encouraged to donate more. If me donating only 10% reduces the likelihood of another 10%er moving up to 50%, then I've just done twice as much harm. Don't forget that since the start of Effective Altruism, the idea of donating large amounts of money has given it plenty of attention and the interest of key individuals.

This is a very good point. But, again, I'm not claiming that those who perform what I'm calling supererogatory actions should be more quiet; I'm merely claiming that the less efficient of us should be more vocal. We are still going to have people in the community who perform superhuman feats (you know who you are), and they are still going to get attention/press and be "looked up to". My claim is that, alongside this, we should also have room for the less efficient of us (which we all agree with), and that the less efficient should be a vocal portion of the EA community, to make the barrier to entry for new EAs feel that much lower (which is the part of my claim that we disagree on).

I am not claiming that we need to donate less or sleep more or spend more money on video games. What I'm instead saying is that, for those of us who are going to spend that money on video games regardless, and those who will sleep 9 hours regardless, and those who just aren't going to donate over 10%, we should not be embarrassed by these things and keep them quiet. If we really aren't going to be more effective in terms of time, money, attention, energy, or whatever, then we can at least create more utility for the cause by being vocal and thereby making it easier for new recruits to come into the fold.

I think that setting a lower-effort norm can have dangerous long term consequences for internal culture. There's something that makes effective altruism different from evangelical religious groups and small political parties and community volunteer groups, and that is the fact that EAs are consistently willing to go above and beyond in having a footprint.

This critique is a strong one, and I don't have a proper reply to it other than that I'm thinking about what you've said. If you're right, then this consideration would overwhelm all of the other arguments I've made in this thread. My suspicion is that you are wrong, but I don't have data to support this beyond my intuition.

And if we want to spread the message to new people, instead of passively relying on being interesting and cool people, it's much more effective to actually go out there and actively build the movement.

We agree on this point. Again, the unstated premise I had was that these people would not be doing more, so they could at least help by being more vocal. But obviously if they instead actually recruited others, that would be far better.

Comment by ericherboso on You Can Now Make Any Event a Fundraiser for Effective Charities · 2015-09-13T18:08:11.205Z · score: 7 (7 votes) · EA · GW

More people need to do this. From what I've seen from others' campaigns, this really is an effective use of time, and has the added benefit of getting a conversation about effective altruism going with friends and family.

As an introvert, it is usually difficult for me to bring up the topic of EA with family, so this is an excellent excuse to not only get donations for the event itself, but also to talk more deeply about EA at family gatherings.

With that said, I haven't actually done this myself yet. Most of the reason for me posting this comment is to motivate me to actually move forward with doing this, and I figure publicly posting this intention will make me more likely to follow through on it.

Comment by ericherboso on EA risks falling into a "meta trap". But we can avoid it. · 2015-08-25T18:48:33.726Z · score: 8 (10 votes) · EA · GW

Keep in mind that there are two senses of the word "meta" that I see used often; Peter is speaking specifically about "working not on [a] cause directly, but instead working on getting more people to work on that cause."

The other sense of "meta", where you're working not on the cause, but instead on figuring out the best interventions for that cause, is not what Peter is talking about here.

While the second sense of "meta" also might be a trap of sorts, since you could conceivably spend all your time/money doing meta-studies and never actually helping individuals, I don't think we're anywhere near that point for any EA causes. Even the most well-researched interventions deserve continued evaluation (such as evaluating the recent Cochrane review on deworming) and, in some cases, the research still requires a lot of work.

Reducing animal suffering is a prime example of this. Helping direct animal charities is important, but I believe it is far more important to continue working on research instead. Consider that in the field of animal welfare most intervention types have yet to be evaluated, and even the most highly regarded interventions come with caveats like “in the absence of strong reasons to believe the effects are negative, we expect the effects to be positive on balance” on corporate outreach, or the even more extreme “no difference found in the total change in consumption of animal products between the two groups”, after which the leafleting intervention is nevertheless recommended. (This is not to say that these interventions are poor; to the contrary, they're the best that have so far been found by ACE. I just think more money should go toward research to either find better interventions or better understand the current top interventions.)

So while Peter might be correct when it comes to meta-work in the sense of recruiting, I don't believe it would be correct in the sense of research.

Comment by ericherboso on Common Misconceptions about Effective Altruism · 2015-03-23T14:51:50.533Z · score: 0 (0 votes) · EA · GW

I think Bernadette is correct. This should be fixed.

Also, immediately afterward it says:

About half were interested in metacharity work, prioritization research, and rationality education…

But the figure in the results and analysis pdf says:

If we redefine metacharity to also include rationality and cause prioritization, it takes the top slot (with 616 people advocating for at least one of the three).

This is not "about half"; it's 616/813 ≈ 75%. But maybe I'm misinterpreting where these statistics are coming from?

(edit: It's "about half" if you use 616/1146, but the 1146 figure includes more than just EAs. Maybe this was the error?)
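To make the two possible readings of "about half" explicit, here is the arithmetic as a quick sketch (the counts 616, 813, and 1146 are the figures quoted above; the variable names are mine):

```python
# Comparing the two candidate denominators behind "about half".
advocates = 616         # respondents favoring metacharity, rationality, or prioritization
ea_respondents = 813    # respondents identifying as EAs
all_respondents = 1146  # all survey respondents, including non-EAs

print(round(advocates / ea_respondents, 2))   # 0.76, i.e. roughly 75%
print(round(advocates / all_respondents, 2))  # 0.54, i.e. "about half"
```

If the survey authors divided by all respondents rather than by EAs only, that would explain the "about half" wording.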

Comment by ericherboso on February Open Thread · 2015-02-25T04:38:02.725Z · score: 2 (2 votes) · EA · GW

As a matter of policy, I believe downvotes should indicate that the discussion point is not worth considering -- not that we disagree with the idea. For example: we should downvote spam, nonsense posts, or inappropriately immature posts.

I'm not sure who downvoted you, but I can say that I doubt helping first-world people is likely to be more cost-effective than helping people in developing countries. There are just too many people already helping first-world people. All the low-hanging fruit has already been plucked.

Your link mentions possibilities like nuclear weapon containment or the far-future benefits accruing from funding artists today. The former seems like it would help everyone, not just first-world people. The latter seems... rather difficult to get evidence for.

Sure, we can create just-so stories that provide plausible ways that art/idea funding could effectively help the future. But we have no clear way of testing whether those just-so stories are accurate -- nor even any way to really judge what kind of confidence level we should have for the effectiveness of art funding.

I like art. It's one of my "things". My house has several canvases, dozens of paint brushes, and upwards of 600 books on art. I have a significant other who is an art educator, and promoting art literacy is a big deal in our house. Despite this, I sincerely doubt that funding art is anywhere near as effective at creating utility as conventional EA interventions. Sure, great art can impact generations and has potential far-future effects, but you can't just fund the greats -- how would you know which to fund in the first place? I just don't see how it can compare to conventional EA ideas.

With that said, I do agree that we should consider first-world interventions as a possibility for EA. I just can't think of any first-world interventions that could plausibly do a better job than developing-world interventions.

Comment by ericherboso on Telofy’s Introduction to Effective Altruism · 2015-01-22T01:16:45.868Z · score: 7 (7 votes) · EA · GW

The sidebar currently links "New to Effective Altruism?" to the EA network page, which isn't the best introduction to EA. This, however, is an excellent introduction. I believe that a link to this page should be added to the sidebar.