Apology 2019-03-22T23:05:41.142Z
2018 list of half-baked volunteer research ideas 2018-09-19T07:58:34.213Z
Why I prioritize moral circle expansion over artificial intelligence alignment 2018-02-20T18:29:12.819Z


Comment by Jacy_Reese on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-27T02:07:12.575Z · EA · GW

Same perspective here! Thank you for sharing.

Comment by Jacy_Reese on The expected value of extinction risk reduction is positive · 2018-12-30T20:36:53.614Z · EA · GW

Oh, sorry, I was thinking of the arguments in my post, not (only) those in your post. I should have been more precise in my wording.

Comment by Jacy_Reese on The expected value of extinction risk reduction is positive · 2018-12-27T00:22:33.784Z · EA · GW

Thank you for the reply, Jan, especially noting those additional arguments. I worry that your article neglects them in favor of less important/controversial questions on this topic. I see many EAs taking the "very unlikely that [human descendants] would see value exactly where we see disvalue" argument (I'd call this the 'will argument,' that the future might be dominated by human-descendant will and there is much more will to create happiness than suffering, especially in terms of the likelihood of hedonium over dolorium) and using that to justify a very heavy focus on reducing extinction risk, without exploration of those many other arguments. I worry that much of the Oxford/SF-based EA community has committed hard to reducing extinction risk without exploring those other arguments.

It'd be great if at some point you could write up discussion of those other arguments, since I think that's where the thrust of the disagreement is between people who think the far future is highly positive, close to zero, and highly negative. Though unfortunately, it always ends up coming down to highly intuitive judgment calls on these macro-socio-technological questions. As I mentioned in that post, my guess is that long-term empirical study like the research in The Age of Em or done at Sentience Institute is our best way of improving those highly intuitive judgment calls and finally reaching agreement on the topic.

Comment by Jacy_Reese on The expected value of extinction risk reduction is positive · 2018-12-16T23:19:27.145Z · EA · GW

Thanks for posting on this important topic. You might be interested in this EA Forum post where I outlined many arguments against your conclusion, the expected value of extinction risk reduction being (highly) positive.

I do think your "very unlikely that [human descendants] would see value exactly where we see disvalue" argument is a viable one, but I think it's just one of many considerations, and my current impression of the evidence is that it's outweighed.

Also FYI the link in your article to "moral circle expansion" is dead. We work on that approach at Sentience Institute if you're interested.

Comment by Jacy_Reese on Why I'm focusing on invertebrate sentience · 2018-12-09T15:43:51.806Z · EA · GW

I remain skeptical of how much this type of research will influence EA-minded decisions, e.g. how many people would switch donations from farmed animal welfare campaigns to humane insecticide campaigns if they increased their estimate of insect sentience by 50%? But I still think the EA community should be allocating substantially more resources to it than they are now, and you seem to be approaching it in a smart way, so I hope you get funding!

I'm especially excited about the impact of this research on general concern for invertebrate sentience (e.g. establishing norms that at least some smart humans are actively working on insect welfare policy) and on helping humans better consider artificial sentience when important tech policy decisions are made (e.g. on AI ethics).

Comment by Jacy_Reese on 2018 list of half-baked volunteer research ideas · 2018-09-19T08:00:28.803Z · EA · GW

[1] Cochrane mass media health articles (and similar):

  • Targeted mass media interventions promoting healthy behaviours to reduce risk of non-communicable diseases in adult, ethnic minorities
  • Mass media interventions for smoking cessation in adults
  • Mass media interventions for preventing smoking in young people.
  • Mass media interventions for promoting HIV testing
  • Smoking cessation media campaigns and their effectiveness among socioeconomically advantaged and disadvantaged populations
  • Population tobacco control interventions and their effects on social inequalities in smoking: systematic review
  • Are physical activity interventions equally effective in adolescents of low and high socioeconomic status (SES): results from the European Teenage project
  • The effectiveness of nutrition interventions on dietary outcomes by relative social disadvantage: a systematic review
  • Use of folic acid supplements, particularly by low-income and young women: a series of systematic reviews to inform public health policy in the UK
  • Use of mass media campaigns to change health behaviour
  • The role of the media in promoting and reducing tobacco use
  • Getting to the Truth: Evaluating National Tobacco Countermarketing Campaigns
  • Effect of televised, tobacco company-funded smoking prevention advertising on youth smoking-related beliefs, intentions, and behavior
  • Do mass media campaigns improve physical activity? a systematic review and meta-analysis

Comment by Jacy_Reese on Which piece got you more involved in EA? · 2018-09-15T09:37:23.723Z · EA · GW

I can't think of anything that isn't available in a better form now, but it might be interesting to read for historical perspective, such as what it looks like to have key EA ideas half-formed. This post on career advice is a classic. Or this post on promoting Buddhism as diluted utilitarianism, which is similar to the reasoning a lot of utilitarians had for building/promoting EA.

Comment by Jacy_Reese on Which piece got you more involved in EA? · 2018-09-07T15:33:46.339Z · EA · GW

The content on that website was most important in my first involvement, though it isn't active anymore. I feel like forum content (similar to what could be on the EA Forum!) was important because it's casually written and welcoming. Everyone was working together on the same problems and ideas, so I felt eager to join.

Comment by Jacy_Reese on Leverage Research: reviewing the basic facts · 2018-08-04T07:22:29.832Z · EA · GW

Just to add a bit of info: I helped with THINK when I was a college student. It wasn't the most effective strategy (largely, it was founded before we knew people would coalesce so strongly into the EA identity, and we didn't predict that), but Leverage's involvement with it was professional and thoughtful. I didn't get any vibes of cultishness from my time with THINK, though I did find Connection Theory a bit weird and not very useful when I learned about it.

Comment by Jacy_Reese on Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply · 2018-04-18T13:30:30.477Z · EA · GW

I get it pretty frequently from newcomers (maybe in the top 20 questions for animal-focused EA?), but everyone seems convinced by a brief explanation of how there's still a small chance of big purchasing changes even if every small consumption change doesn't always lead to a purchasing change.

Comment by Jacy_Reese on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-27T17:36:12.463Z · EA · GW

Exactly. Let me know if this doesn't resolve things, zdgroff.

Comment by Jacy_Reese on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-27T17:35:03.070Z · EA · GW

Yes, terraforming is a big way in which close-to-WAS scenarios could arise. I do think it's smaller in expectation than digital environments that develop on their own and thus are close-to-WAS.

I don't think terraforming would be done very differently from today's wildlife, e.g. I don't expect it to be done without predation and disease.

Ultimately I still think the digital, not-close-to-WAS scenarios seem much larger in expectation.

Comment by Jacy_Reese on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-27T17:32:07.211Z · EA · GW

I'd qualify this by adding that the philosophical-type reflection seems to lead in expectation to more moral value (positive or negative, e.g. hedonium or dolorium) than other forces, despite overall having less influence than those other forces.

Comment by Jacy_Reese on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-27T17:29:35.668Z · EA · GW

Thanks for commenting, Lukas. I think Lukas, Brian Tomasik, and others affiliated with FRI have thought more about this, and I basically defer to their views here, especially because I haven't heard any reasonable people disagree with this particular point. Namely, I agree with Lukas that there seems to be an inevitable tradeoff here.

Comment by Jacy_Reese on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-23T16:20:42.917Z · EA · GW

I just took it as an assumption in this post that we're focusing on the far future, since I think basically all the theoretical arguments for/against that have been made elsewhere. Here's a good article on it. I personally mostly focus on the far future, though not overwhelmingly so. I'm at something like 80% far future, 20% near-term considerations for my cause prioritization decisions.

This may take a few decades, but social change might take even longer.

To clarify, the post isn't talking about ending factory farming. And I don't think anyone in the EA community thinks we should try to end factory farming without technology as an important component. Though I think there are good reasons for EAs to focus on the social change component, e.g. there is less for-profit interest in that component (most of the tech money is from for-profit companies, so it's less neglected in this sense).

Comment by Jacy_Reese on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-22T19:02:31.546Z · EA · GW

Hm, yeah, I don't think I fully understand you here either, and this seems somewhat different than what we discussed via email.

My concern is with (2) in your list. "[T]hey do not wish to be convinced to expand their moral circle" is extremely ambiguous to me. Presumably you mean they -- without MCE advocacy being done -- wouldn't put wide-MC* values, or values that lead to wide-MC, into an aligned AI. But I think it's being conflated with "they actively oppose it" or "they would answer 'no' if asked, 'Do you think your values are wrong when it comes to which moral beings deserve moral consideration?'"

I think they don't actively oppose it, they would mostly answer "no" to that question, and it's very uncertain whether they will put the wide-MC-leading values into an aligned AI. I don't think CEV or similar reflection processes reliably lead to wide moral circles. I think they can still be heavily influenced by their initial set-up (e.g. what the values of humanity are when reflection begins).

This leads me to think that you only need (2) to be true in a very weak sense for MCE to matter. I think it's quite plausible that this is the case.

*Wide-MC meaning an extremely wide moral circle, e.g. includes insects, small/weird digital minds.

Comment by Jacy_Reese on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-22T14:53:57.524Z · EA · GW

I personally don't think WAS is as similar to the most plausible far future dystopias as factory farming is, so I've been prioritizing it less even over just the past couple of years. I don't expect far future dystopias to involve as much naturogenic (nature-caused) suffering, though of course it's possible (e.g. if humans create large numbers of sentient beings in a simulation, but then let the simulation run on its own for a while, the simulation could come to be viewed as naturogenic-ish and those attitudes could become more relevant).

I think if one wants something very neglected, digital sentience advocacy is basically across-the-board better than WAS advocacy.

That being said, I'm highly uncertain here and these reasons aren't overwhelming (e.g. WAS advocacy pushes on more than just the "care about naturogenic suffering" lever), so I think WAS advocacy is still, in Gregory's words, an important part of the 'far future portfolio.' And often one can work on it while working on other things, e.g. I think Animal Charity Evaluators' WAS content (e.g. a guest blog post by Oscar Horta) has helped them be more well-rounded as an organization, and didn't directly trade off with their farmed animal content.

Comment by Jacy_Reese on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T23:47:25.384Z · EA · GW

Those considerations make sense. I don't have much more to add for/against than what I said in the post.

On the comparison between different MCE strategies, I'm pretty uncertain which are best. The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by most of society. I'm not relatively very worried about, for example, far future dystopias where dog-and-cat-like-beings (e.g. small, entertaining AIs kept around for companionship) are suffering in vast numbers. And environmentalism is typically advocating for non-sentient beings, which I think is quite different than MCE for sentient beings.

I think the better competitors to farmed animal advocacy are advocating broadly for antispeciesism/fundamental rights (e.g. Nonhuman Rights Project) and advocating specifically for digital sentience (e.g. a larger, more sophisticated version of People for the Ethical Treatment of Reinforcement Learners). There are good arguments against these, however, such as that it would be quite difficult for an eager EA to get much traction with a new digital sentience nonprofit. (We considered founding Sentience Institute with a focus on digital sentience. This was a big reason we didn't.) Whereas given the current excitement in the farmed animal space (e.g. the coming release of "clean meat," real meat grown without animal slaughter), the farmed animal space seems like a fantastic place for gaining traction.

I'm currently not very excited about "Start a petting zoo at Deepmind" (or similar direct outreach strategies) because it seems like it would produce a ton of backlash because it seems too adversarial and aggressive. There are additional considerations for/against (e.g. I worry that it'd be difficult to push a niche demographic like AI researchers very far away from the rest of society, at least the rest of their social circles; I also have the same traction concern I have with advocating for digital sentience), but this one just seems quite damning.

The upshot is that, even if there are some particularly high yield interventions in animal welfare from the far future perspective, this should be fairly far removed from typical EAA activity directed towards having the greatest near-term impact on animals. If this post heralds a pivot of Sentience Institute to directions pretty orthogonal to the principal component of effective animal advocacy, this would be welcome indeed.

I agree this is a valid argument, but given the other arguments (e.g. those above), I still think it's usually right for EAAs to focus on farmed animal advocacy, including Sentience Institute at least for the next year or two.

(FYI for readers, Gregory and I also discussed these things before the post was published when he gave feedback on the draft. So our comments might seem a little rehearsed.)

Comment by Jacy_Reese on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T22:09:37.908Z · EA · GW

Thanks! That's very kind of you.

I'm pretty uncertain about the best levers, and I think research can help a lot with that. Tentatively, I do think that MCE ends up aligning fairly well with conventional EAA (perhaps it should be unsurprising that the most important levers to push on for near-term values are also most important for long-term values, though it depends on how narrowly you're drawing the lines).

A few exceptions to that:

  • Digital sentience probably matters the most in the long run. There are good reasons to be skeptical we should be advocating for this now (e.g. it's quite outside of the mainstream so it might be hard to actually get attention and change minds; it'd probably be hard to get funding for this sort of advocacy (indeed that's one big reason SI started with farmed animal advocacy)), but I'm pretty compelled by the general claim, "If you think X value is what matters most in the long-term, your default approach should be working on X directly." Advocating for digital sentience is of course neglected territory, but Sentience Institute, the Nonhuman Rights Project, and Animal Ethics have all worked on it. People for the Ethical Treatment of Reinforcement Learners has been the only dedicated organization AFAIK, and I'm not sure what their status is or if they've ever paid full-time or part-time staff.

  • I think views on value lock-in matter a lot because of how they affect food tech (e.g. supporting The Good Food Institute). I place significant weight on this and a few other things (see this section of an SI page) that make me think GFI is actually a pretty good bet, despite my concern that technology progresses monotonically.

  • Because what might matter most is society's general concern for weird/small minds, we should be more sympathetic to indirect antispeciesism work like that done by Animal Ethics and the fundamental rights work of the Nonhuman Rights Project. From a near-term perspective, I don't think these look very good because I don't think we'll see fundamental rights be a big reducer of factory farm suffering.

  • This is a less-refined view of mine, but I'm less focused than I used to be on wild animal suffering. It just seems to cost a lot of weirdness points, and naturogenic suffering doesn't seem nearly as important as anthropogenic suffering in the far future. Factory farm suffering seems a lot more similar to far future dystopias than does wild animal suffering, despite WAS dominating utility calculations for the next, say, 50 years.

I could talk more about this if you'd like, especially if you're facing specific decisions like where exactly to donate in 2018 or what sort of job you're looking for with your skillset.

Comment by Jacy_Reese on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T16:38:47.981Z · EA · GW

I'm sympathetic to both of those points personally.

1) I considered that, and in addition to time constraints, I know others haven't written on this because there's a big concern of talking about it making it more likely to happen. I err more towards sharing it despite this concern, but I'm pretty uncertain. Even the detail of this post was more than several people wanted me to include.

But mostly, I'm just limited on time.

2) That's reasonable. I think all of these boundaries are fairly arbitrary; we just need to try to use the same standards across cause areas, e.g. considering only work with this as its explicit focus. Theoretically, since Neglectedness is basically just a heuristic to estimate how much low-hanging fruit there is, we're aiming at "The space of work that might take such low-hanging fruit away." In this sense, Neglectedness could vary widely. E.g. there's limited room for advocating (e.g. passing out leaflets, giving lectures) directly to AI researchers, but this isn't affected much by advocacy towards the general population.

I do think moral philosophy that leads to expanding moral circles (e.g. writing papers supportive of utilitarianism), moral-circle-focused social activism (e.g. anti-racism, not as much something like campaigning for increased arts funding that seems fairly orthogonal to MCE), and EA outreach (in the sense that the A of EA means a wide moral circle) are MCE in the broadest somewhat-useful definition.

Caspar's blog post is a pretty good read on the nuances of defining/utilizing Neglectedness.

Comment by Jacy_Reese on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T13:54:31.961Z · EA · GW

That makes sense. If I were convinced hedonium/dolorium dominated to a very large degree, and that hedonium was as good as dolorium is bad, I would probably think the far future was at least moderately +EV.

Comment by Jacy_Reese on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T13:50:57.015Z · EA · GW


Comment by Jacy_Reese on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T04:13:56.166Z · EA · GW

Yeah, I think that's basically right. I think moral circle expansion (MCE) is closer to your list items than extinction risk reduction (ERR) is because MCE mostly competes in the values space, while ERR mostly competes in the technology space.

However, MCE is competing in a narrower space than just values. It's in the MC space, which is just the space of advocacy on what our moral circle should look like. So I think it's fairly distinct from the list items in that sense, though you could still say they're in the same space because all advocacy competes for news coverage, ad buys, recruiting advocacy-oriented people, etc. (Technology projects could also compete for these things, though there are separations, e.g. journalists with a social beat versus journalists with a tech beat.)

I think the comparably narrow space of ERR is ER, which also includes people who don't want extinction risk reduced (or even want it increased), such as some hardcore environmentalists, antinatalists, and negative utilitarians.

I think these are legitimate cooperation/coordination perspectives, and it's not really clear to me how they add up. But in general, I think this matters mostly in situations where you actually can coordinate. For example, in the US general election when Democrats and Republicans come together and agree not to give to their respective campaigns (in exchange for their counterpart also not doing so). Or if there were anti-MCE EAs with whom MCE EAs could coordinate (which I think is basically what you're saying with "we'd be better off if they both decided to spend the money on anti-malaria bednets").

Comment by Jacy_Reese on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T00:55:20.253Z · EA · GW

Thanks for the comment! A few of my thoughts on this:

Presumably we want some people working on both of these problems, some people have skills more suited to one than the other, and some people are just going to be more passionate about one than the other.

If one is convinced non-extinction civilization is net positive, this seems true and important. Sorry if I framed the post too much as one or the other for the whole community.

Much of the work related to AIA so far has been about raising awareness about the problem (eg the book Superintelligence), and this is more a social solution than a technical one.

Maybe. My impression from people working on AIA is that they see it as mostly technical, and indeed they think much of the social work has been net negative. Perhaps not Superintelligence, but at least the work that's been done to get media coverage and widespread attention without the technical attention to detail of Bostrom's book.

I think the more important social work (from a pro-AIA perspective) is convincing AI decision-makers to use the technical results of AIA research, but my impression is that AIA proponents still think getting those technical results is probably the more important project.

There's also social work in coordinating the AIA community.

First, I expect clean meat will lead to the moral circle expanding more to animals. I really don't see any vegan social movement succeeding in ending factory farming anywhere near as much as I expect clean meat to.

Sure, though one big issue with technology is that it seems like we can do far less to steer its direction than we can do with social change. Clean meat tech research probably just helps us get clean meat sooner instead of making the tech progress happen when it wouldn't otherwise. The direction of the far future (e.g. whether clean meat is ever adopted, whether the moral circle expands to artificial sentience) probably matters a lot more than the speed at which it arrives.

Of course, this gets very complicated very quickly, as we consider things like value lock-in. Sentience Institute has a bit of basic sketching on the topic on this page.

Second, I'd imagine that a mature science of consciousness would increase MCE significantly. Many people don't think animals are conscious, and almost no one thinks anything besides animals can be conscious

I disagree that "many people don't think animals are conscious." I almost exclusively hear that view from the rationalist/LessWrong community. A recent survey suggested that 87.3% of US adults agree with the statement, "Farmed animals have roughly the same ability to feel pain and discomfort as humans," and presumably even more think they have at least some ability.

Advanced neurotechnologies could change that - they could allow us to potentially test hypotheses about consciousness.

I'm fairly skeptical of this personally, partly because I don't think there's a fact of the matter when it comes to whether a being is conscious. I think Brian Tomasik has written eloquently on this. (I know this is an unfortunate view for an animal advocate like me, but it seems to have the best evidence favoring it.)

Comment by Jacy_Reese on How to get a new cause into EA · 2018-01-13T15:14:15.853Z · EA · GW

I'd go farther here and say all three (global poverty, animal rights, and far future) are best thought of as target populations rather than cause areas. Moreover, the space not covered by these three is basically just wealthy modern humans, which seems to be much less of a treasure trove because WMHs have far more resources than the other three populations. (Potentially there's also medium-term future beings as a distinct population, depending on where we draw the lines.)

I think EA would probably be discovering more things if we were focused on looking not for new cause areas but for new specific intervention areas, comparable to individual health support for the global poor (e.g. antimalarial nets, deworming pills), individual financial help for the global poor (e.g. unconditional cash transfers), individual advocacy of plant-based eating (e.g. leafleting, online ads), institutional farmed animal welfare reforms (e.g. cage-free eating), technical AI safety research, and general extinction risk policy work.

If we think of the EA cause area landscape in "intervention area" terms, there seems to be a lot more change happening.

Comment by Jacy_Reese on Survey of leaders in the EA community on a range of important topics, like what skills they need and what causes are most effective · 2017-11-04T15:52:18.891Z · EA · GW

Thanks for the response. My main general thought here is just that we shouldn't depend on so much from the reader. Most people, even most thoughtful EAs, won't read in full and come up with all the qualifications on their own, so it's important for article writers to include those themselves, and to include those upfront and center in their articles.

If you wanted to spend a lot of time on "what causes do EA leadership favor," one project I see as potentially really valuable is a list of arguments/evidence and getting EA leaders to vote on their weights -- sort of a combination of 80k's quantitative cause assessment and this survey. I think this is a more ideal peer-belief aggregation because it reduces the effects of dependence. For example, if Rob and Jacy both prioritize the far future entirely because of Bostrom's calculation of how many beings could exist in it, then we'd come up with that single argument having a high weight, rather than two people highly favoring the far future. We might try this approach at Sentience Institute at some point, though right now we're more focused on just coming up with the lists of arguments/evidence in the field of moral circle expansion, so instead we did something more like your 2017 survey of researchers in this field. (Specifically, we would have researchers rate the pieces of evidence we've listed.)

That's probably not the best approach, but I'd like a survey approach that somehow tries to minimize the dependence effect. A simpler version would be to just ask for people's opinions but then have them rate how much they're basing their views on the views of their peers, or just ask for their view and confidence while pretending they've never heard peer views, but this sort of approach seems more vulnerable to bias than the evidence-rating method.

Anyway, have fun at EAG London! Curious if anything that happens there really surprises you.

Comment by Jacy_Reese on Survey of leaders in the EA community on a range of important topics, like what skills they need and what causes are most effective · 2017-11-04T01:29:41.770Z · EA · GW

[Disclaimer: Rob, 80k's Director of Research, and I briefly chatted about this on Facebook, but I want to make a comment here because that post is gone and more people will see it here. Also, as a potential conflict-of-interest, I took the survey and work at an organization that's between the animal and far future cause areas.]

This is overall really interesting, and I'm glad the survey was done. But I'm not sure how representative of EA community leaders it really is. I'd take the cause selection section in particular with a big grain of salt, and I wish it were more heavily qualified and discussed in different language. Of the organizations surveyed and number surveyed per organization, my personal count is that 14 were meta, 12.5 were far future, 3 were poverty, and 1.5 were animal. My guess is that a similar distribution holds for the 5 unaffiliated respondents. So it should be no surprise to readers that meta and far future work were most prioritized.* **

I think we shouldn't call this a general survey of EA leadership (e.g. the title of the post) when it's so disproportionate. I think the inclusion of more meta organization makes sense, but there are poverty groups like the Against Malaria Foundation and Schistosomiasis Control Initiative, as well as animal groups like The Good Food Institute and The Humane League, that seem to meet the same bar for EA-ness as the far future groups included like CSER and MIRI.

Focusing heavily on far future organizations might be partly due to selecting only organizations founded after the EA community coalesced. While that seems like a reasonable metric (among several possibilities), it also seems biased towards far future work, because that's a newer field, and it's a metric that conveniently syncs up with 80k's cause prioritization views. Also, the ACE-recommended charity GFI was founded explicitly on the principle of effective altruism after EA coalesced. Their team says that quite frequently, and as far as I know, the leadership all identifies as EA. Perhaps you're using a metric more like social ties to other EA leaders, but that's exactly the sort of bias I'm worried about here.

Also, the EA community as a whole doesn't seem to hold this cause prioritization view. Leadership can of course deviate from the broad community, but this is just another reason to be cautious in weighing these results.

I think your note about this selection is fair

  • "the group surveyed included many of the most clever, informed and long-involved people in the movement,"

and I appreciate that you looked a little at cause prioritization for relatively-unbiased subsets

  • "Views were similar among people whose main research work is to prioritise different causes – none of whom rated Global Development as the most effective,"
  • "on the other hand, many people not working in long-term focussed organisations nonetheless rated it as most effective"

but it's still important to note that you (Rob and 80k) personally favor these two areas strongly, which seems to create a big potential bias, and that we should be very cautious of groupthink in our community where updating based on the views of EA leaders is highly prized and recommended. I know the latter is a harder concern to get around with a survey, but I think it should have been noted in the report, ideally in the Key Figures section. And as I mentioned at the beginning, I don't think this should be discussed as a general survey of EA leaders, at least not when it comes to cause prioritization.

This post certainly made me more worried personally that my prioritization of the far future could be more due to groupthink than I previously thought.

Here's the categorization I'm using for organizations. It might be off, but it's at least pretty close. ff = far future

  • 80,000 Hours (3) meta
  • AI Impacts (1) ff
  • Animal Charity Evaluators (1) animal
  • Center for Applied Rationality (2) ff
  • Centre for Effective Altruism (3) meta
  • Centre for the Study of Existential Risk (1) ff
  • Charity Science: Health (1) poverty
  • DeepMind (1) ff
  • Foundational Research Institute (2) ff
  • Future of Humanity Institute (3) ff
  • GiveWell (2) poverty
  • Global Priorities Institute (1) meta
  • Leverage Research (1) meta
  • Machine Intelligence Research Institute (2) ff
  • Open Philanthropy Project (5) meta
  • Rethink Charity (1) meta
  • Sentience Institute (1) animal/ff
  • Unaffiliated (5)

*The 80k post notes that not everyone filled out all the survey answers, e.g. GiveWell only had one person fill out the cause selection section.

**Assuming the reader has already seen other evidence, e.g. that CFAR only recently adopted a far future mission, or that people like Rob went from other cause areas towards a focus on the far future.

Comment by Jacy_Reese on Should EAs think twice before donating to GFI? · 2017-09-06T14:53:45.669Z · EA · GW

As of December 2016, my impression was that ACE wasn't shifting and hadn't shifted towards a hits-based or more risk-averse approach. I don't know if this is because they were already more in that direction than Rob thinks, or because they didn't move to the hits-based position Rob thinks they currently have.

[I worked for ACE, on the board and then as a researcher, until December 2016. This is just my personal opinion.]