How do you decide between upvoting and strong upvoting? 2019-08-25T18:19:11.107Z · score: 11 (5 votes)
Explaining the Open Philanthropy Project's $17.5m bet on Sherlock Biosciences’ Innovations in Viral Diagnostics 2019-06-11T17:23:37.349Z · score: 25 (9 votes)
The case for taking AI seriously as a threat to humanity 2018-12-23T01:00:08.314Z · score: 18 (9 votes)
Pandemic Risk: How Large are the Expected Losses? Fan, Jamison, & Summers (2018) 2018-11-21T15:58:31.856Z · score: 22 (10 votes)


Comment by anonymous_ea on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-05T23:58:39.024Z · score: 19 (9 votes) · EA · GW

I learned from stories 1 and 2 - thanks for the information!

Story 3 feels like it suffers from a lack of familiarity with EA and argues against a straw version of it. E.g. you write (emphasis added):

As the community grew it spread into new areas – Animal Charity Evaluators was founded in 2012 looking at animal welfare – the community also connected to the rationalist community that was worried about AI and to academics at FHI thinking about the long term future. Throughout all of this expected value calculations remained the gold star for making decisions on how to do good. The idea was to shut up and multiply. Even as effective altruism decision makers spread into areas of greater and greater uncertainty they (as far as I can tell) have mostly continued to use the same decision making tools (expected value calculations), without questioning if these were the best tools.

By 2011 GiveWell had already published Why we can’t take expected value estimates literally (even when they’re unbiased), arguing against, well, taking expected value calculations literally, critiquing GWWC's work on that basis, and discussing how their solution avoided Pascal's Mugging. There was a healthy discussion in the comments and the cross-post on LessWrong got 100 upvotes and 250 comments.

Comment by anonymous_ea on KevinO's Shortform · 2019-12-06T19:09:58.021Z · score: 3 (3 votes) · EA · GW

I just voted for the GFI, AMF, and GD videos because of your comment!

Comment by anonymous_ea on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-05T17:38:45.605Z · score: 5 (3 votes) · EA · GW

Even if that's not what edoard meant, I would be interested in hearing the answer to 'what are things you would say if you didn't need to be risk averse?'!

Comment by anonymous_ea on [Link] A new charity evaluator (NYTimes) · 2019-11-27T20:14:20.562Z · score: 4 (3 votes) · EA · GW

I hope ImpactMatters does well!

Comment by anonymous_ea on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-21T18:32:47.698Z · score: 27 (15 votes) · EA · GW

Meta: A big thank you to Buck for doing this and putting so much effort into it! This was very interesting and will hopefully encourage more public dissemination of knowledge and opinions.

Comment by anonymous_ea on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T01:48:37.339Z · score: 3 (3 votes) · EA · GW

I agree with Issa about the costs of not giving reasons. My guess is that over the long run, giving reasons why you believe what you believe will be a better strategy to avoid convincing people of false things. Saying you believed X and now believe ~X seems like it's likely to convince people of ~X even more strongly.

Comment by anonymous_ea on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-16T21:59:58.824Z · score: 11 (6 votes) · EA · GW

What other crazy ideas do you have about EA outreach?

Comment by anonymous_ea on Summary of Core Feedback Collected by CEA in Spring/Summer 2019 · 2019-11-12T18:58:13.248Z · score: 17 (8 votes) · EA · GW
I think there may be a misunderstanding – the title of this post is “Feedback Collected by CEA”, not “for” CEA.

This is fair, but I want to give some examples of why I thought this document was about feedback about CEA, with the hope of helping with communication around this in the future. Even after your clarification, the document still gives a strong impression to me of the feedback being about CEA, rather than about the community in general. Below are some quotes that make it sound that way to me, with emphasis added:

Summary of Core Feedback Collected by CEA in Spring/Summer 2019

The title doesn't mention what the feedback is about. I think most people would assume that it refers to feedback about CEA, rather than the community overall. That's what I assumed.

CEA collects feedback from community members in a variety of ways (see “CEA’s Feedback Process” below). In the spring and summer of 2019, we reached out to about a dozen people who work in senior positions in EA-aligned organizations to solicit their feedback. We were particularly interested to get their take on execution, communication, and branding issues in EA. Despite this focus, the interviews were open-ended and tended to cover the areas each person felt was important.
This document is a summary of their feedback. The feedback is presented “as is,” without any endorsement by CEA.

It's not clearly stated what the feedback is about ("CEA collects feedback" and "solicit their feedback" appear without elaboration). The closest it comes to specifying the feedback's scope is the mention that CEA was particularly interested in feedback on execution, communication, and branding issues in EA. This is still fairly vague, and "branding" to me implies that the feedback is about CEA. It does say "...issues in EA", but I didn't attach much importance to that.

This post is the first in a series of upcoming posts where we aim to share summaries of the feedback we have received.

In general, I assume that feedback to an organization is about the organization itself.

CEA has, historically, been much better at collecting feedback than at publishing the results of what we collect.

While it's again unclear what "feedback" refers to, in general I would expect this to mean feedback about CEA.

As some examples of other sources of feedback CEA has collected this year:
We have received about 2,000 questions, comments and suggestions via Intercom (a chat widget on many of CEA’s websites) so far this year
We hosted a group leaders retreat (27 attendees), a community builders retreat (33 attendees), and had calls with organizers from 20 EA groups asking about what’s currently going on in their groups and how CEA can be helpful
Calls with 18 of our most prolific EA Forum users, to ask how the Forum can be made better.
A “medium-term events” survey, where we asked everyone who had attended an Individual Outreach retreat how the retreat impacted them 6-12 months later. (53 responses)
EA Global has an advisory board of ~25 people who are asked for opinions about content, conference size, format, etc., and we receive 200-400 responses to the EA Global survey from attendees each time.

All of these are examples of feedback about CEA or its events and activities. There are no examples of feedback about the community.

I think the confusion comes from the lack of clear elaboration in the title and/or beginning of the document of what the scope of the feedback was. Clarifying this in the future should eliminate this problem.

Comment by anonymous_ea on Summary of Core Feedback Collected by CEA in Spring/Summer 2019 · 2019-11-09T21:13:14.894Z · score: 2 (2 votes) · EA · GW

Note: The comment you and Ben replied to seems to have disappeared

Comment by anonymous_ea on Forum update: New features (November 2019) · 2019-11-09T21:10:56.607Z · score: 3 (4 votes) · EA · GW

I'm really excited about subscribing and bookmarking! Pingbacks also seem useful

Comment by anonymous_ea on Effective Altruism and International Trade · 2019-11-07T20:56:27.860Z · score: 4 (3 votes) · EA · GW
EA headlining money and health as a cause priority while dropping education. + spending no money on education is straight out saying a lot about the priorities of EA.
EA gives zero value to education, and that is fundamentally wrong.

I don't think the last sentence follows from the ones before it. EA is fundamentally about doing the most good possible, not about doing good in every area that is valuable. EA will (hopefully) always be about focusing on the relatively few areas where we can do the most good. Not funding almost everything in the world doesn't mean that EA thinks almost everything in the world has zero value. It is true that education for its own sake is not a priority for EAs, but that doesn't mean EAs think education isn't important. In fact, EA is very disproportionately composed of highly educated people - presumably at least some of these people value education highly.

Comment by anonymous_ea on Opinion: Estimating Invertebrate Sentience · 2019-11-07T20:44:00.030Z · score: 32 (14 votes) · EA · GW

I've been impressed by the work being produced by Rethink Priorities over the past several months. I appreciate the thought and nuance that went into this. Great job again!

Comment by anonymous_ea on Summary of Core Feedback Collected by CEA in Spring/Summer 2019 · 2019-11-07T20:32:51.485Z · score: 18 (9 votes) · EA · GW

I want to echo this. I would love to see CEA talk more about what they see as their mistakes and achievements, but this felt like a confusing mixture of feedback about some aspects of CEA (mostly EA Global, EA Forum, and the Community Health team) and some general feedback about the EA community that CEA only has partial control over. While CEA occupies an important position in EA, there are many factors beyond CEA that contribute to whether EA community members are smart and thoughtful or whether they're not welcoming enough.

Comment by anonymous_ea on Formalizing the cause prioritization framework · 2019-11-06T17:37:31.826Z · score: 1 (1 votes) · EA · GW

Update: The pictures load for me now

Comment by anonymous_ea on Formalizing the cause prioritization framework · 2019-11-05T20:59:00.910Z · score: 3 (2 votes) · EA · GW

None of the images display for me either. This is what it looks like for me:

Let's see how this works graphically. First, we start with tractability as a function of dollars (crowdedness), as in Figure 1. With diminishing marginal returns, "% solved/$" is decreasing in resources.

Next, we multiply tractability by importance to obtain MU/$ as a function of resources, in Figure 2. Assuming that Importance = "utility gained/% solved" is a constant[2], all this does is change the units on the y-axis, since we're multiplying a function by a constant.

Now we can clearly see the amount of good done for an additional dollar, for every level of resources invested. To decide whether we should invest more in a cause, we calculate the current level of resources invested, then evaluate the MU/$ function at that level of resources. We do this for all causes, and allocate resources to the highest MU/$ causes, ultimately equalizing MU/$ across all causes as diminishing returns take effect. (Note the similarity to the utility maximization problem from intermediate microeconomics, where you choose consumption of goods to maximize utility, given their prices and subject to a budget constraint.)

Comment by anonymous_ea on EA Forum Prize: Winners for September 2019 · 2019-11-04T23:47:39.936Z · score: 13 (7 votes) · EA · GW

It might be good to have a small number of runner-up posts without cash prizes. That would certainly help motivate me to post more.

Comment by anonymous_ea on What are your top papers of the 2010s? · 2019-10-23T18:25:28.184Z · score: 2 (2 votes) · EA · GW

Can you expand on how this influenced you?

Comment by anonymous_ea on Ineffective Altruism: Are there ideologies which generally cause there adherents to have worse impacts? · 2019-10-18T19:31:50.047Z · score: 1 (1 votes) · EA · GW

While I think that was a valuable post, the definition of ideology in it is so broad that even things like science and the study of climate change would be ideologies (as kbog points out in the comments). I'm not sure what system or way of thinking wouldn't qualify as an ideology based on the definition used.

Comment by anonymous_ea on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-11T01:26:07.557Z · score: 7 (2 votes) · EA · GW

Datapoint for Hauke: I'm also very interested in this topic and in Hauke's thoughts on it, but found that the formatting made it difficult for me to read it fully

Comment by anonymous_ea on What actions would obviously decrease x-risk? · 2019-10-08T23:05:56.829Z · score: 16 (8 votes) · EA · GW

A general comment about this thread rather than a reply to Khorton in particular: The original post didn't suggest that this should be a brainstorming thread, and I didn't interpret it like that. I interpreted it as a question looking for answers that the posters believe, rather than only hypothesis generation/brainstorming.

Comment by anonymous_ea on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-08T03:16:57.646Z · score: 21 (10 votes) · EA · GW

I'm sorry to see the strong downvotes, especially when you've put in more effort on explaining your thinking and genuinely engaging with critiques than perhaps all other EA Fund granters put together. I want you to know that I found your explanations very helpful and thought-provoking, and really like how you've engaged with criticisms both in this thread and the last one.

Comment by anonymous_ea on EA Meta Fund and Long-Term Future Fund are looking for applications again until October 11th · 2019-09-20T04:57:06.668Z · score: 10 (6 votes) · EA · GW
We will likely not be able to make the following types of grants:
Self-development that is not directly related to community benefit
In order to make grants the public benefit needs to be greater than the private benefit to any individual. So we cannot make grants that focus on helping a single individual in a way that isn’t directly connected to public benefit.

Is this in response to some of the criticisms of the April 2019 Long Term Future grants in this thread or elsewhere?

Comment by anonymous_ea on Who would you like to see do an AMA on this forum? · 2019-08-27T17:46:48.467Z · score: 10 (3 votes) · EA · GW

I'm a single person using a single anonymous account :) Other anonymous accounts use other names.

Comment by anonymous_ea on Who would you like to see do an AMA on this forum? · 2019-08-26T16:00:29.572Z · score: 11 (5 votes) · EA · GW

Are people with disabilities or (especially) people who are intersex or non-binary under-represented in EA? My intuition is that there may be a greater proportion of them in EA than in the general population, but I haven't checked this. Or did you mean that their opinions, experiences, and perspectives might not be very visible?

Comment by anonymous_ea on Who would you like to see do an AMA on this forum? · 2019-08-26T05:56:01.773Z · score: 15 (5 votes) · EA · GW

A list of people I can think of right now whose AMA I would at least consider asking a question on (non-exhaustive):

General categories: Employees at EA orgs, people with deep domain expertise in specific areas, EAs who've spent a long time researching a particular topic, EAs who want to do an AMA, non-EAs who want to do an AMA in good faith

Specific people: Toby Ord, Nick Beckstead, Nick Bostrom, Rob Wiblin, Ben Todd, Holden Karnofsky, Elie Hassenfeld, Kelsey Piper, Dylan Matthews, Nate Soares, Oliver Habryka, Julia Wise, Jeff Kaufman, Buck Shlegeris, Claire Zabel, Khorton, Larks, Jason Matheny, Eric Drexler, Rachel Glennerster, Michael Kremer, Peter Singer, Michelle Hutchinson, Holly Elmore, Kit Harris, kbog, Phil Trammell, Peter Hurford, Ozzie Gooen, Hilary Greaves, Julia Galef, Anna Salamon, Carl Shulman, Hauke Hillebrandt, Brian Tomasik, Luke Muehlhauser, Helen Toner, Scott Alexander, Simon Beard, Kaj Sotala, Tom Sittler, etc.

Comment by anonymous_ea on Ask Me Anything! · 2019-08-19T19:09:30.550Z · score: 4 (7 votes) · EA · GW

I don't understand why this question is downvoted with so many votes. It seems like a reasonable, if underspecified, question to me.

Edit: When I commented, this comment was at -2 with perhaps 15 votes.

Comment by anonymous_ea on Ask Me Anything! · 2019-08-19T19:05:26.842Z · score: 28 (13 votes) · EA · GW

I'd be super interested in hearing you elaborate more on most of the points! Especially the first two.

Comment by anonymous_ea on Ask Me Anything! · 2019-08-16T17:44:24.776Z · score: 7 (9 votes) · EA · GW

Also, what can normal EAs do about it?

Comment by anonymous_ea on Impact Report for Effective Altruism Coaching · 2019-08-15T18:07:36.750Z · score: 2 (2 votes) · EA · GW

I'm surprised that business expenses are 40% of revenue. I thought it would be a lot lower than that. Are you comfortable sharing what the biggest expenses are?

Comment by anonymous_ea on Ask Me Anything! · 2019-08-15T18:03:42.280Z · score: 10 (8 votes) · EA · GW

How do you decide your own cause prioritization? Relatedly, how do you decide where to donate to?

Comment by anonymous_ea on Ask Me Anything! · 2019-08-15T18:03:06.756Z · score: 21 (11 votes) · EA · GW

What do you think are the things or ideas that most casual EAs don't know much about or appreciate enough, but are (deservedly or undeservedly) very influential in EA hubs or organizations like CEA, 80K, GPI, etc? Some candidates I have in mind for this are things like cluelessness, longtermism, the possibility of short AI timelines, etc.

Comment by anonymous_ea on Ask Me Anything! · 2019-08-15T18:00:28.477Z · score: 23 (12 votes) · EA · GW

If you had the option of making a small change to EA by pressing a button, would you do it? If so, what would it be? What about a big change?

Comment by anonymous_ea on Ask Me Anything! · 2019-08-15T17:59:21.384Z · score: 13 (9 votes) · EA · GW

What do you see as the best long-term path for EA? Should we try to stay small and weird, or try to get buy-in from the masses? How important is academic influence for the long-term success of EA?

Comment by anonymous_ea on Ask Me Anything! · 2019-08-15T17:57:56.080Z · score: 22 (17 votes) · EA · GW

Is there a question you want to answer that hasn't been asked yet? What's your answer to it?

Comment by anonymous_ea on Is running Folding@home / Rosetta@home beneficial? · 2019-08-01T17:00:10.099Z · score: 7 (4 votes) · EA · GW

I agree that the impact of this decision is likely to be very small, but trying to analyze a complicated phenomenon can be personally beneficial for improving your skills at analyzing the impact of other phenomena. In general, it seems good for EAs to practice analyzing the impact of various interventions, as long as they keep in mind that the impact of the intervention and the direct value of the analysis might be small.

Comment by anonymous_ea on The EA Forum is a News Feed · 2019-08-01T16:56:13.129Z · score: 1 (1 votes) · EA · GW

As a data point, I would commit to tagging old posts for at least 1 hour if other people were also doing it or expressed interest in it happening.

Comment by anonymous_ea on Is running Folding@home / Rosetta@home beneficial? · 2019-07-31T15:24:40.068Z · score: 6 (5 votes) · EA · GW

Since this post has gotten very little traction, I wanted to let you (orenmn) know that at least I found it valuable and interesting!

Comment by anonymous_ea on Feedback available for EA Forum drafts · 2019-07-31T15:11:36.677Z · score: 1 (1 votes) · EA · GW

Thank you! I'll send you a link if/when I get around to working on the draft :)

Comment by anonymous_ea on Feedback available for EA Forum drafts · 2019-07-30T16:50:31.823Z · score: 1 (1 votes) · EA · GW

Thanks for offering this service! I'd like to share a draft/idea with you (the second example in my comment here) but I don't want to use my personal email since I want to keep my account anonymous. Is there a way I could get feedback from you without creating an anonymous email?

Comment by anonymous_ea on Four practices where EAs ought to course-correct · 2019-07-30T16:45:36.742Z · score: 3 (2 votes) · EA · GW

There's an incorrect link in this sentence:

This post suggested the rather alarming idea that EA's growth is petering out in a sort of logistic curve.

The link goes to Noah Smith's blog post advocating the two paper rule.

Comment by anonymous_ea on EA Forum Prize: Winners for June 2019 · 2019-07-26T16:22:30.677Z · score: 4 (3 votes) · EA · GW

Is there any data on how prize winners generally feel about winning? Does the prize help motivate them to either write the material or post it here?

Comment by anonymous_ea on EA Forum Prize: Winners for June 2019 · 2019-07-26T16:21:36.895Z · score: 15 (7 votes) · EA · GW

I like the idea of having separate categories for professional work and amateur work (or some other categorization). I'd still like to encourage professional work to be posted here, but encouraging non-professional work is also important.

Comment by anonymous_ea on In what ways and in what areas might it make sense for EA to adopt more a more bottoms-up approach? · 2019-07-25T16:26:51.192Z · score: 6 (4 votes) · EA · GW

Your link to Anna Salamon's comment goes to the Wikipedia page for sealioning :)

Comment by anonymous_ea on What posts you are planning on writing? · 2019-07-25T15:19:47.828Z · score: 5 (4 votes) · EA · GW

I just skimmed some of the recent posts on your website and liked them! What makes you think that they're not good enough to be posted here? They definitely seem less comprehensive than some of your (very comprehensive) posts here, but still more than good enough to post here.

Comment by anonymous_ea on What posts you are planning on writing? · 2019-07-25T15:05:31.098Z · score: 1 (1 votes) · EA · GW

Thank you! Do you happen to have any advice or feedback on how I'm planning to write it? I'm tempted to make it fairly short and open it up for other people to comment on with their own experiences, but I'm worried that a short, feelings-focused post won't get a lot of engagement. Trying to make it more comprehensive by e.g. compiling some of the ways the EA community ends up signaling that it really wants highly talented people (and almost never signals the opposite) might make it more engaging, but would also decrease the likelihood that I'll publish this anytime soon.

I could also pose it slightly differently as a question post on how people feel about their place in the community.

Comment by anonymous_ea on What posts you are planning on writing? · 2019-07-25T14:58:04.655Z · score: 6 (4 votes) · EA · GW

Interesting. I hadn't heard of sealioning before. You're right about the thing I'm pointing to being somewhat different. I think EAs want to encourage good criticisms of EA and want to be the kind of people and movement where criticisms are received positively. I think this often leads to EAs being overly generous with criticism posts on an object level, although I don't know whether this is positive or negative on aggregate.

Comment by anonymous_ea on What posts you are planning on writing? · 2019-07-24T21:40:05.486Z · score: 14 (7 votes) · EA · GW

I have two drafts saved with only a few links or a couple of paragraphs written:

1. How do we respond to criticism of EA on the forum?

Several commentators on the forum have recently casually expressed theories of how effective altruists respond to criticism of EA on the forum. Some have expressed skepticism of the idea that EAs can respond positively to criticism of EA. I aim to look at several notable comments and posts on the forum over at least the past several months to see how criticism is practically received on the forum.

My tentative theory, without having properly researched this, is that EAs are generally too eager to read and upvote any nicely written criticism by an intelligent person that sounds non-threatening enough. Criticism of this sort, while often praised, is often not deeply engaged with. On the rare occasions when criticism seems threatening enough to EA, there's deeper engagement with the actual arguments, rather than responses mostly trying to signal-boost the criticism. There's also one instance of a threatening criticism on a particularly political topic that attracted significantly lower-quality comments, in my opinion.

The posts I've casually collected so far are:

Benjamin Hoffman's Drowning Children are Rare

Jeff Kaufman's There's Lots More To Do

beth's Three Biases That Made Me Believe in AI Risk

Fods12's Effective Altruism is an Ideology, not (just) a Question

EAs for Inclusion's Making discussions in EA groups inclusive

Jessica Taylor's The AI Timelines Scam (maybe?)

Jessica Taylor's The Act of Charity

Benjamin Hoffman's Effective Altruism is Self-recommending

Alexey Guzey's William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better"

Chris Smith's The Optimizer's Curse & Wrong-Way Reductions

Milan Griffes' Cash prizes for the best arguments against psychedelics being an EA cause area (maybe?)

2. Will I be accepted in EA if I'm not prodigiously successful professionally?

The EA community contains a tremendous number of extremely talented and accomplished people. I worry that unless I also achieve a lot of professional success, other EAs won't particularly respect me, like me, or particularly want to interact with me. While some of this is definitely related to my own issues about social acceptance, I think there's a decent chance that many other people also feel this way. My aim is to explore my feelings and what about EA makes me feel this way, and to encourage others to express how they feel about their place in the community as well. At a meta level, I hope to at least explore how a different, more feelings-focused article might fit in this forum. I don't want to give any specific solutions, imply that this is a problem of any particular magnitude, or even imply that this is necessarily a problem on net for EA.

Comment by anonymous_ea on There's Lots More To Do · 2019-07-18T19:21:09.707Z · score: 8 (6 votes) · EA · GW

I don't feel inclined to get into this, but FWIW I have read a reasonable amount of Ben's writings on both EA and non-EA topics, and I do not find it obvious that his main, subconscious motivation is epistemic health rather than a need to reject EA.

Comment by anonymous_ea on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-15T06:07:46.113Z · score: 4 (3 votes) · EA · GW

What did you find compelling about the comment that you found to be the best argument?

Comment by anonymous_ea on Advice for an Undergrad · 2019-07-03T16:59:22.576Z · score: 3 (2 votes) · EA · GW
[I] figure two years is enough time to major in pretty much anything outside of the sciences or engineering. My current plan is to double major in Philosophy and Math (with an applied bent)

Have you taken any Math classes before? Starting and finishing a Math major in 2 years sounds unrealistic to me.