Posts

Artificial Intelligence, Morality, and Sentience (AIMS) Survey: 2021 2022-07-01T07:47:31.984Z
Jacy's Shortform 2022-06-30T14:05:59.129Z
The Future Might Not Be So Great 2022-06-30T13:01:21.617Z
Apology 2019-03-22T23:05:41.142Z
2018 list of half-baked volunteer research ideas 2018-09-19T07:58:34.213Z
Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment 2018-02-20T18:29:12.819Z
Introducing Sentience Institute 2017-06-02T14:43:45.784Z
Why Animals Matter for Effective Altruism 2016-08-22T16:50:40.800Z
Some considerations for different ways to reduce x-risk 2016-02-04T03:21:09.823Z
EA Interview Series, January 2016: Perumal Gandhi, Cofounder of Muufri 2016-01-12T17:02:13.987Z
EA Interview Series: Michelle Hutchinson, December 2015 2015-12-22T15:46:58.036Z
Why EA events should be (at least) vegetarian 2015-11-13T17:37:25.326Z

Comments

Comment by Jacy on The Future Might Not Be So Great · 2022-07-07T13:21:02.783Z · EA · GW

Thanks for going into the methodological details here.

I think we view "double-counting" differently, or I may not be sufficiently clear in how I handle it. If we take a particular war as a piece of evidence, which we think fits into both "Historical Harms" and "Disvalue Through Intent," and it is overall -8 evidence on the EV of the far future, but it seems 75% explained through "Historical Harms" and 25% explained through "Disvalue Through Intent," then I would put -6 weight on the former and -2 weight on the latter. I agree this isn't very precise, and I'd love future work to go into more analytical detail (though as I say in the post, I expect more knowledge per effort from empirical research).
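
Concretely, here is a minimal sketch of the allocation I have in mind, using the made-up numbers above (the helper function is purely illustrative):

```python
# Minimal sketch of how I try to avoid double-counting: split a piece of
# evidence's total weight across the factors it supports, in proportion to how
# much of the evidence each factor explains. Numbers are the toy ones above.

def split_evidence_weight(total_weight, attribution):
    """attribution maps factor -> share of the evidence it explains (shares sum to 1)."""
    assert abs(sum(attribution.values()) - 1.0) < 1e-9
    return {factor: total_weight * share for factor, share in attribution.items()}

war_evidence = split_evidence_weight(
    total_weight=-8,
    attribution={"Historical Harms": 0.75, "Disvalue Through Intent": 0.25},
)
print(war_evidence)  # {'Historical Harms': -6.0, 'Disvalue Through Intent': -2.0}
```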

I also think we view "reasons for negative weight" differently. To me, the existence of analogues to intrusion does not make intrusion a non-reason. It just means we should also weigh those analogues. Perhaps they are equally likely and equal in absolute value if they obtain, in which case they would cancel, but usually there is some asymmetry. Similarly, duplication and nesting are factors that are more negative than positive to me, for instance because we may discount and neglect the interests of these minds given that they are more different and more separated from the mainstream (e.g., the nested minds are probably not out in society campaigning for their own interests because they would need to do so through the nest mind—I think you allude to this, but I wouldn't dismiss it merely because we'll learn how experiences work, given that we have very good neuroscientific and behavioral evidence of animal consciousness in 2022 but still exploit animals).

Your points on interaction effects and nonlinear variation are well-taken and good things to account for in future analyses. In a back-of-the-envelope estimate, I think we should just assign values numerically and remember to feel free to widely vary those numbers, but of course there are hard-to-account-for biases in such assignment, and I think the work of GJP, QURI, etc. can lead to better estimation methods.

Comment by Jacy on The Future Might Not Be So Great · 2022-07-07T13:10:58.455Z · EA · GW

This is helpful data. Two important axes of variation here are:

- Time, where this has fortunately become more frequently discussed in recent years
- Involvement, where I speak a lot with artificial intelligence and machine learning researchers who work on AI safety but not global priorities research; often their motivation was just reading something like Life 3.0. I think these people tend to have thought through crucial considerations less than, say, people on this forum.

Comment by Jacy on Person-affecting intuitions can often be money pumped · 2022-07-07T12:55:02.050Z · EA · GW

Trade 3 is removing a happy person, which is usually bad in a person-affecting view, possibly bad enough that the trade is not worth taking for $0.99, and thus the view cannot be Dutch booked.

Comment by Jacy on The Future Might Not Be So Great · 2022-07-02T20:04:41.349Z · EA · GW

Hi Khorton, I wouldn't describe it as stepping back into the community, and I don't plan on doing that, regardless of this issue, unless you consider occasional posts and presentations or socializing with my EA friends as such. This post on the EV of the future was just particularly suited for the EA Forum (e.g., previous posts on it), and it's been 3 years since I published that public apology and have done everything asked of me by the concerned parties (around 4 years since I was made aware of the concerns, and I know of no concerns about my behavior since then).

I'm not planning to comment more here. This is in my opinion a terrible place to have these conversations, as Dony pointed out as well.

Comment by Jacy on RyanCarey's Shortform · 2022-07-02T17:34:43.561Z · EA · GW

[Edit: I've now made some small additions to the post to better ensure readers do not get the impressions that you're worried about. The substantive content of the post remains the same, and I have not read any disagreements with it, though please let me know if there are any.]

Thanks for clarifying. I see the connection between both sets of comments, but the draft comments still seem more like 'it might be confusing whether this is about your experience in EA or an even-coverage history', while the new comments seem more like 'it might give the impression that Felicifia utilitarians and LessWrong rationalists had a bigger role, that GWWC and 80k didn't have student groups, that EA wasn't selected as a name for CEA in 2011, and that you had as much influence in building EA as Will or Toby.' These seem meaningfully different, and while I adjusted for the former, I didn't adjust for the latter.

(Again, I will add some qualification as soon as I can, e.g., noting that there were other student groups, which I'm happy to note but just didn't because that is well-documented and not where I was personally most involved.)

Comment by Jacy on The Future Might Not Be So Great · 2022-07-02T13:43:59.702Z · EA · GW

Thanks. I agree with essentially all of this, and I left a comment with details: https://forum.effectivealtruism.org/posts/ZbdNFuEP2zWN5w2Yx/ryancarey-s-shortform?commentId=oxodp9BzigZ5qgEHg

I would reiterate that this was only on my website for a few weeks, and I removed it as soon as I got the negative feedback. [Edit: As I say in my detailed comment, I viewed the term "co-founder" in terms of the broad base of people who built EA as a social movement. Others read it as a narrower term, such as the 1-3 co-founders of a typical company or nonprofit. Now I just avoid the term because I think it's too vague and confusing.]

Comment by Jacy on RyanCarey's Shortform · 2022-07-02T13:41:40.764Z · EA · GW

[Edit: I've now made some small additions to the post to better ensure readers do not get the impressions that you're worried about. The substantive content of the post remains the same, and I have not read any disagreements with it, though please let me know if there are any.]

I think I agree with essentially all of this, though I would have preferred if you gave this feedback when you were reading the draft because I would have worded my comments to ensure they don't give the impression you're worried about. I strongly agree with your guess that EA would probably have come to exist without Will and Toby, and I would extend that to a guess for any small group. Of course such guesses are very speculative.

I would also emphasize my agreement with the claim that the Oxford community played a larger role than Felicifia or THINK, but I think EA's origins were broader and more diverse than most people think. My guess for Will and Toby's % of the hours put into "founding" it would be much lower than your 20%.

On the co-founder term, I think of founders as much broader than the founders of, say, a company. EA has been the result of many people's efforts, many of whom I think are ignored or diminished in some tellings of EA history. That being said, I want to emphasize that I think this was only on my website for a few weeks at most, and I removed it shortly after I first received negative feedback on it. I believe I also casually used the term elsewhere, and it was sometimes used by people in my bio description when introducing me as a speaker. Again, I haven't used it since 2019.

I emphasize Felicifia in my comments because that is where I have the most first-hand experience to contribute, its history hasn't been as publicized as others, and I worry that many (most?) people hearing these histories think the history of EA was more centralized in Oxford than it was, in my opinion.

I'm glad you shared this information, and I will try to improve and clarify the post asap.

Comment by Jacy on The Future Might Not Be So Great · 2022-07-01T17:49:10.119Z · EA · GW

Good catch!

Comment by Jacy on The Future Might Not Be So Great · 2022-07-01T16:01:22.680Z · EA · GW

Hi John, just to clarify some inaccuracies in your two comments:

- I’ve never harassed anyone, and I’ve never stated or implied that I have.  I have apologized for making some people uncomfortable with “coming on too strong” in my online romantic advances. As I've said before in that Apology, I never intended to cause any discomfort, and I’m sorry that I did so. There have, to my knowledge, been no concerns about my behavior since I was made aware of these concerns in mid-2018.

- I didn’t lie on my website. I had (in a few places) described myself as a “co-founder” of EA [Edit: Just for clarity, I think this was only on my website for a few weeks? I think I mentioned it and was called it a few times over the years too, such as when being introduced for a lecture. I co-founded the first dedicated student group network,  helped set up and moderate the first social media discussion groups, and was one of the first volunteers at ACE as  a college student. I always favored a broader-base view of how EA emerged than what many perceived at the time (e.g., more like the founders of a social movement than of a company). Nobody had pushed back against "co-founder" until 2019, and I stopped using the term as soon as there was any pushback.], as I think many who worked to build EA from 2008-2012 could be reasonably described. I’ve stopped using the term because of all the confusion, which I describe a bit in “Some Early History of Effective Altruism.”

- Regarding SI, we were already moving on from CEA’s fiscal sponsorship and donation platform once we got our 501c3 certification in February 2019, so “stopped” and “severed ties” seem misleading.

- CEA did not make me write an apology. We agreed that both the apology document and my not attending CEA events were the right response to these concerns. I had already written several apologies that were sent privately to various parties without any involvement from CEA.

- There was no discussion of my future posting on the EA Forum, nor to my knowledge any concerns about my behavior on this or other forums.

Otherwise, I have said my piece in the two articles you link, and I don’t plan to leave any more comments in this thread. I appreciate everyone’s thoughtful consideration.

Comment by Jacy on The Future Might Not Be So Great · 2022-07-01T04:59:27.061Z · EA · GW

Thanks! Fixed, I think.

Comment by Jacy on The Future Might Not Be So Great · 2022-07-01T00:10:29.618Z · EA · GW

It's great to know where your specific weights differ! I agree that each of the arguments you put forth is important. Some specifics:

  • I agree that differences in the future (especially the weird possibilities like digital minds and acausal trade) are a big reason to discount historical evidence. Also, by these lights, some historical evidence (e.g., relations across huge gulfs of understanding and ability, like from humans to insects) seems a lot more important than other evidence (e.g., the fact that animal muscle and fat happens to be an evolutionarily advantageous food source).
  • I'm not sure if I'd agree that historical harms have occurred largely through divergence; there are many historical counterfactuals that could have prevented many harms: the nonexistence of humans, an expansion of the moral circle, better cooperation, discovery of a moral reality, etc. In many cases, a positive leap in any of these would have prevented the atrocity. What makes divergence more important? I would make the case based on something like "maximum value impact from one standard deviation change" or "number of cases where harm seemed likely but this factor prevented it." You could write an EA Forum post going into more detail on that. I would be especially excited for you to go through specific historical events and do some reading to estimate the role of (small changes in) each of these forces.
  • As I mention in the post, reasons to put negative weight on DMPS include the vulnerability of digital minds to intrusion, copying, etc., the likelihood of their instrumental usefulness in various interstellar projects, and the possibility of many nested minds who may be ignored or neglected.
  • I agree moral trade is an important mechanism of reasoned cooperation.

I'm really glad you put your own numbers in the spreadsheet! That's super useful. The ease of flipping the estimates from negative to positive and positive to negative is one reason I only draw the conclusion "not highly positive" or "close to zero" rather than going with the mean estimate from myself and others (which would probably be best described as moderately negative; e.g., the average at an EA meetup where I presented this work was around -10).

I think your analysis is on the right track to getting us better answers to these crucial questions :)

Comment by Jacy on The Future Might Not Be So Great · 2022-06-30T15:20:21.409Z · EA · GW

Whoops! Thanks!

Comment by Jacy on Fanatical EAs should support very weird projects · 2022-06-30T14:35:22.558Z · EA · GW

This is super interesting. Thanks for writing it. Do you think you're conflating several analytically distinct phenomena when you say (i) "Fanaticism is the idea that we should base our decisions on all of the possible outcomes of our actions no matter how unlikely they are ... EA fanatics take a roughly maximize expected utility approach" and (ii) "Fanaticism is unreasonable"?

For (i), I mainly have in mind two approaches "fanatics" could be defined by: (ia) "do a quick back-of-the-envelope calculation of expected utility and form beliefs based solely on its output," and (ib) "do what you actually think maximizes expected utility, no matter whether that's based on a spreadsheet, heuristic, intuition, etc." I think (ia) isn't something basically anyone would defend, while (ib) is something I and many others would (and it's how I think "fanaticism" tends to be used). And for (ib), we need to account for heuristics like (f) quick BOTE calculations tend to overestimate the expected utility of low probabilities of high impact, and (g) extremely large and extremely small numbers should be sandboxed (e.g., capped in the influence they can have on the conclusion). This is a (large) downside of these "very weird projects," and I think it makes the "should support" case a lot weaker.
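
To illustrate heuristic (g), here is a minimal sketch of what I mean by sandboxing; the cap value and the toy numbers are made up purely for illustration:

```python
# Sketch of heuristic (g): cap how much any single low-probability, high-impact
# term can move a back-of-the-envelope expected utility estimate. The cap value
# and the example outcomes are arbitrary illustrations.

def capped_expected_utility(outcomes, cap=100.0):
    """outcomes: list of (probability, utility) pairs; each term's contribution
    is clamped to [-cap, +cap] before summing."""
    total = 0.0
    for probability, utility in outcomes:
        contribution = probability * utility
        total += max(-cap, min(cap, contribution))
    return total

weird_project = [(1e-12, 1e20), (0.9, -1.0)]    # naive EV ~ 100,000,000
ordinary_project = [(0.5, 50.0), (0.5, -5.0)]   # naive EV = 22.5

print(capped_expected_utility(weird_project))    # 99.1: the huge term is sandboxed
print(capped_expected_utility(ordinary_project)) # 22.5: unaffected by the cap
```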

For (ii), I mainly have in mind three claims about fanaticism: (iia) "Fanaticism is unintuitive," (iib) "Fanaticism is absurd (a la reductio ad absurdum)," and (iic) "Fanaticism breaks some utility axioms." These each have different evidence. For example, (iia) might not really matter if we don't think our intuitions—which have been trained through evolution and life experience—are reliable for such unusual questions as maximizing long-run aggregate utility.

Did you have some of these in mind? Or maybe other operationalizations?

Comment by Jacy on Jacy's Shortform · 2022-06-30T14:05:59.319Z · EA · GW

Brief Thoughts on the Prioritization of Quality Risks

This is a brief shortform post to accompany "The Future Might Not Be So Great." These are just some scattered thoughts on the prioritization of quality risks not quite relevant enough to go in the post itself. Thanks to those who gave feedback on the draft of that post, particularly on this section.

People ask me to predict the future, when all I want to do is prevent it. Better yet, build it. Predicting the future is much too easy, anyway. You look at the people around you, the street you stand on, the visible air you breathe, and predict more of the same. To hell with more. I want better. ⸻ Ray Bradbury (1979)

I present a more detailed argument for the prioritization of quality risks (particularly moral circle expansion) over extinction risk reduction (particularly through certain sorts of AI research) in Anthis (2018), but here I will briefly note some thoughts on importance, tractability, and neglectedness. Two related EA Forum posts are “Cause Prioritization for Downside-Focused Value Systems” (Gloor 2018) and “Reducing Long-Term Risks from Malevolent Actors” (Althaus and Baumann 2020). Additionally, at this early stage of the longtermist movement, the top priorities for population and quality risk may largely intersect. Both issues suggest foundational research on topics such as the nature of AI control and likely trajectories of the long-term future, community-building of thoughtful do-gooders, and field-building of institutional infrastructure to use for steering the long-term future.

Importance

One important application of the EV of human expansion is to the “importance” of population and quality risks. Importance can be operationalized as the good done if the entire cause succeeded in solving its corresponding problem, such as the good done by eliminating or substantially reducing extinction risk, which is effectively zero if the EV of human expansion is zero and effectively negative if the EV of human expansion is negative.

The importance of quality risk reduction is clearer, in the sense that the difference in quality between possible futures is clearer than the difference between extinction and non-extinction, and larger, in the sense that while population risk entails only the zero-to-positive difference between human extinction and non-extinction (or between zero population and some positive number of individuals), quality risk entails the difference between the best quality humans could engender and the worst, across all possible population sizes. This is arguably a weakness of the framework because we could categorize the quality risk cause area as smaller in importance (say, an increase of 1 trillion utils, i.e., units of goodness), and it would tend to become more tractable as we narrow the category.

Tractability

The tractability difference between population and quality risk seems the least clear of the three criteria. My general approach is thinking through the most likely “theories of change” or paths to impact and assessing them step-by-step. For example, one commonly discussed extinction risk reduction path to impact is “agent foundations,” building mathematical frameworks and formally proving claims about the behavior of intelligent agents, which would then allow us to build advanced AI systems more likely to do what we tell them to do, and then using these frameworks to build AGI or persuading the builders of AGI to use them. Quality-risk-focused AI safety strategies may be more focused on the outer alignment problem, ensuring that an AI’s objective is aligned with the right values, rather than just the inner alignment problem, ensuring that all actions of the AI are aligned with the objective.[1] Also, we can influence quality by steering the “direction” or “speed” of the long-term future, approaches with potentially very different impact, hinging on factors such as the distribution of likely futures across value and likelihood (e.g., Anthis 2018c; Anthis and Paez 2021).

One argument that I often hear on the tractability of trajectory changes is that changes need to “stick” or “persist” over long periods. It is true that there needs to be a persistent change in the expected value (i.e., the random variable or time series regime of value in the future), but I frequently hear the stronger claim that there needs to be a persistent change in the realization of that value, which I do not think is required. For example, if we successfully broker a peace deal between great powers, neither the peace deal itself nor any other particular change in the world has to persist in order for this to have high long-term impact. The series of values itself can have arbitrarily large variance, such as it being very likely that the peace deal is broken within a decade.

For a change to be intractable, it needs to not just lack persistence but to rubber band (i.e., create opposite-sign effects) back to its counterfactual. For example, if brokering a peace deal causes an equal and opposite reaction of anti-peace efforts, then that trajectory change is intractable. Moreover, we should consider not only rubber banding but also dominoing (i.e., creating same-sign effects), perhaps because this peace deal inspires other great powers to follow suit even if this particular deal is broken. There is much of this potential energy in the world waiting to be unlocked by thoughtful actors.

The tractability of trajectory change has been the subject of research at Sentience Institute, including our historical case studies and Harris's (2019) "How Tractable Is Changing the Course of History?"

Neglectedness

The neglectedness difference between population and quality risk seems the most clear. There are far more EAs and longtermists working explicitly on population risks than on quality risks (i.e., risks to the moral value of individuals in the long-term future). Two nuances for this claim are first that it may not be true for other relevant comparisons: For example, many people in the world are trying to change social institutions, such as different sides of the political spectrum trying to pull public opinion towards their end of the spectrum. This group seems much larger than people focused explicitly on extinction risks, and there are many other relevant reference classes. Second, it is not entirely clear whether extinction risk reduction and quality risk reduction face higher or lower returns to being less neglected (i.e., more crowded). It may be that so few people are focused on quality risks that marginal returns are actually lower than they would be if there were more people working on them (i.e., increasing returns).


  1. In my opinion, there are many different values involved in developing and deploying an AI system, so the distinction between inner and outer alignment is rarely precise in practice. Much of identifying and aligning with “good” or “correct” values can be described as outer alignment. In general, I think of AI value alignment as a long series of mechanisms from the causal factors that create human values (which themselves can be thought of as objective functions) to a tangled web of objectives in each human brain (e.g., values, desires, preferences) to a tangled web of social objectives aggregated across humans (e.g., voting, debates, parliaments, marketplaces) to a tangled web of objectives communicated from humans to machines (e.g., material values in game-playing AI, training data, training labels, architectures) to a tangled web of emergent objectives in the machines (e.g., parametric architectures in the neural net, (smoothed) sets of possible actions in domain, (smoothed) sets of possible actions out of domain) and finally to the machine actions (i.e., what it actually does in the world). We can reasonably refer to the alignment of any of these objects with any of the other objects in this long, tangled continuum of values. Two examples of outer alignment work that I have in mind here are Askell et al. (2021) “A General Language Assistant as a Laboratory for Alignment” and Hobbhahn et al. (2022) “Reflection Mechanisms as an Alignment Target: A Survey.” ↩︎

Comment by Jacy on quinn's Shortform · 2022-06-25T16:28:54.166Z · EA · GW

Jamie Harris at Sentience Institute authored a report on "Social Movement Lessons From the US Anti-Abortion Movement" that may be of interest.

Comment by Jacy on Steering AI to care for animals, and soon · 2022-06-16T11:28:42.272Z · EA · GW

That's right that we don't have any ongoing projects exclusively on the impact of AI on nonhuman biological animals, though much of our research includes that, especially the outer alignment idea of ensuring an AGI or superintelligence accounts for the interests of all sentient beings, including wild and domestic nonhuman biological animals. We also have several empirical projects where we collect data on both moral concern for animals and for AI, such as on perspective-taking, predictors of moral concern, and our recently conducted US nationally representative survey on Artificial Intelligence, Morality, and Sentience (AIMS).

For various reasons discussed in those nonhumans and the long-term future posts and in essays like "Advantages of Artificial Intelligences, Uploads, and Digital Minds" (Sotala 2012), biological nonhuman animals seem less likely to exist in very large numbers in the long-term future than animal-like digital minds. That doesn't mean we shouldn't work on the impact of AI on those biological nonhuman animals, but it has made us prioritize laying groundwork on the nature of moral concern and the possibility space of future sentience. I can say that we have a lot of researcher applicants propose agendas focused more directly on AI and biological nonhuman animals, and we're in principle very open to it. There are far more promising research projects in this space than we can fund at the moment. However, I don't think Sentience Institute's comparative advantage is working directly on research projects like CETI or Interspecies Internet that wade through the detail of animal ethology or neuroscience using machine learning, though I'd love to see a blog-depth analysis of the short-term and long-term potential impacts of such projects, especially if there are more targeted interventions (e.g., translating farmed animal vocalizations) that could be high-leverage for EA.

Comment by Jacy on Steering AI to care for animals, and soon · 2022-06-14T11:12:29.068Z · EA · GW

Good points! This is exactly the sort of work we do at Sentience Institute on moral circle expansion (mostly for farmed animals from 2016 to 2020, but since late 2020, most of our work has been directly on AI—and of course the intersections), and it has been my priority since 2014. Also, Peter Singer and Yip Fai Tse are working on "AI Ethics: The Case for Including Animals"; there are a number of EA Forum posts on nonhumans and the long-term future; and the harms of AI and "smart farming" for farmed animals are a common topic, such as this recent article that I was quoted in. My sense from talking to many people in this area is that there is substantial room for more funding; we've gotten some generous support from EA megafunders and individuals, but we also consistently get dozens of highly qualified applicants whom we have to reject every hiring round, including people with good ideas for new projects.

Comment by Jacy on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-27T02:07:12.575Z · EA · GW

Same perspective here! Thank you for sharing.

Comment by Jacy on The expected value of extinction risk reduction is positive · 2018-12-30T20:36:53.614Z · EA · GW

Oh, sorry, I was thinking of the arguments in my post, not (only) those in your post. I should have been more precise in my wording.

Comment by Jacy on The expected value of extinction risk reduction is positive · 2018-12-27T00:22:33.784Z · EA · GW

Thank you for the reply, Jan, especially noting those additional arguments. I worry that your article neglects them in favor of less important/controversial questions on this topic. I see many EAs taking the "very unlikely that [human descendants] would see value exactly where we see disvalue" argument (I'd call this the 'will argument,' that the future might be dominated by human-descendant will and there is much more will to create happiness than suffering, especially in terms of the likelihood of hedonium over dolorium) and using that to justify a very heavy focus on reducing extinction risk, without exploration of those many other arguments. I worry that much of the Oxford/SF-based EA community has committed hard to reducing extinction risk without exploring those other arguments.

It'd be great if at some point you could write up discussion of those other arguments, since I think that's where the thrust of the disagreement is between people who think the far future is highly positive, close to zero, and highly negative. Though unfortunately, it always ends up coming down to highly intuitive judgment calls on these macro-socio-technological questions. As I mentioned in that post, my guess is that long-term empirical study like the research in The Age of Em or done at Sentience Institute is our best way of improving those highly intuitive judgment calls and finally reaching agreement on the topic.

Comment by Jacy on The expected value of extinction risk reduction is positive · 2018-12-16T23:19:27.145Z · EA · GW

Thanks for posting on this important topic. You might be interested in this EA Forum post where I outlined many arguments against your conclusion that the expected value of extinction risk reduction is (highly) positive.

I do think your "very unlikely that [human descendants] would see value exactly where we see disvalue" argument is a viable one, but I think it's just one of many considerations, and my current impression of the evidence is that it's outweighed.

Also FYI the link in your article to "moral circle expansion" is dead. We work on that approach at Sentience Institute if you're interested.

Comment by Jacy on Why I'm focusing on invertebrate sentience · 2018-12-09T15:43:51.806Z · EA · GW

I remain skeptical of how much this type of research will influence EA-minded decisions, e.g. how many people would switch donations from farmed animal welfare campaigns to humane insecticide campaigns if they increased their estimate of insect sentience by 50%? But I still think the EA community should be allocating substantially more resources to it than they are now, and you seem to be approaching it in a smart way, so I hope you get funding!

I'm especially excited about the impact of this research on general concern for invertebrate sentience (e.g. establishing norms that at least some smart humans are actively working on insect welfare policy) and on helping humans better consider artificial sentience when important tech policy decisions are made (e.g. on AI ethics).

Comment by Jacy on 2018 list of half-baked volunteer research ideas · 2018-09-19T08:00:28.803Z · EA · GW

[1] Cochrane mass media health articles (and similar):

  • Targeted mass media interventions promoting healthy behaviours to reduce risk of non-communicable diseases in adult, ethnic minorities
  • Mass media interventions for smoking cessation in adults
  • Mass media interventions for preventing smoking in young people.
  • Mass media interventions for promoting HIV testing
  • Smoking cessation media campaigns and their effectiveness among socioeconomically advantaged and disadvantaged populations
  • Population tobacco control interventions and their effects on social inequalities in smoking: systematic review
  • Are physical activity interventions equally effective in adolescents of low and high socioeconomic status (SES): results from the European Teenage project
  • The effectiveness of nutrition interventions on dietary outcomes by relative social disadvantage: a systematic review
  • Use of folic acid supplements, particularly by low-income and young women: a series of systematic reviews to inform public health policy in the UK
  • Use of mass media campaigns to change health behaviour
  • The role of the media in promoting and reducing tobacco use
  • Getting to the Truth: Evaluating National Tobacco Countermarketing Campaigns
  • Effect of televised, tobacco company-funded smoking prevention advertising on youth smoking-related beliefs, intentions, and behavior
  • Do mass media campaigns improve physical activity? a systematic review and meta-analysis

Comment by Jacy on Which piece got you more involved in EA? · 2018-09-15T09:37:23.723Z · EA · GW

I can't think of anything that isn't available in a better form now, but it might be interesting to read for historical perspective, such as what it looks like to have key EA ideas half-formed. This post on career advice is a classic. Or this post on promoting Buddhism as diluted utilitarianism, which is similar to the reasoning a lot of utilitarians had for building/promoting EA.

Comment by Jacy on Which piece got you more involved in EA? · 2018-09-07T15:33:46.339Z · EA · GW

The content on Felicifia.org was most important in my first involvement, though that website isn't active anymore. I feel like forum content (similar to what could be on the EA Forum!) was important because it's casually written and welcoming. Everyone was working together on the same problems and ideas, so I felt eager to join.

Comment by Jacy on Leverage Research: reviewing the basic facts · 2018-08-04T07:22:29.832Z · EA · GW

Just to add a bit of info: I helped with THINK when I was a college student. It wasn't the most effective strategy (largely, it was founded before we knew people would coalesce so strongly into the EA identity, and we didn't predict that), but Leverage's involvement with it was professional and thoughtful. I didn't get any vibes of cultishness from my time with THINK, though I did find Connection Theory a bit weird and not very useful when I learned about it.

Comment by Jacy on Excerpt from 'Doing Good Better': How Vegetarianism Decreases Animal Product Supply · 2018-04-18T13:30:30.477Z · EA · GW

I get it pretty frequently from newcomers (maybe in the top 20 questions for animal-focused EA?), but everyone seems convinced by a brief explanation of how there's still a small chance of big purchasing changes even though not every small consumption change leads to a purchasing change.

Comment by Jacy on Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment · 2018-02-27T17:36:12.463Z · EA · GW

Exactly. Let me know if this doesn't resolve things, zdgroff.

Comment by Jacy on Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment · 2018-02-27T17:35:03.070Z · EA · GW

Yes, terraforming is a big way in which close-to-WAS scenarios could arise. I do think it's smaller in expectation than digital environments that develop on their own and thus are close-to-WAS.

I don't think terraforming would be done very differently from today's wildlife, e.g. done without predation and diseases.

Ultimately I still think the digital, not-close-to-WAS scenarios seem much larger in expectation.

Comment by Jacy on Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment · 2018-02-27T17:32:07.211Z · EA · GW

I'd qualify this by adding that the philosophical-type reflection seems to lead in expectation to more moral value (positive or negative, e.g. hedonium or dolorium) than other forces, despite overall having less influence than those other forces.

Comment by Jacy on Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment · 2018-02-27T17:29:35.668Z · EA · GW

Thanks for commenting, Lukas. I think Lukas, Brian Tomasik, and others affiliated with FRI have thought more about this, and I basically defer to their views here, especially because I haven't heard any reasonable people disagree with this particular point. Namely, I agree with Lukas that there seems to be an inevitable tradeoff here.

Comment by Jacy on Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment · 2018-02-23T16:20:42.917Z · EA · GW

I just took it as an assumption in this post that we're focusing on the far future, since I think basically all the theoretical arguments for/against that have been made elsewhere. Here's a good article on it. I personally mostly focus on the far future, though not overwhelmingly so. I'm at something like 80% far future, 20% near-term considerations for my cause prioritization decisions.

"This may take a few decades, but social change might take even longer."

To clarify, the post isn't talking about ending factory farming. And I don't think anyone in the EA community thinks we should try to end factory farming without technology as an important component. Though I think there are good reasons for EAs to focus on the social change component, e.g. there is less for-profit interest in that component (most of the tech money is from for-profit companies, so it's less neglected in this sense).

Comment by Jacy on Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment · 2018-02-22T19:02:31.546Z · EA · GW

Hm, yeah, I don't think I fully understand you here either, and this seems somewhat different than what we discussed via email.

My concern is with (2) in your list. "[T]hey do not wish to be convinced to expand their moral circle" is extremely ambiguous to me. Presumably you mean that they -- without MCE advocacy being done -- wouldn't put wide-MC* values, or values that lead to wide-MC, into an aligned AI. But I think it's being conflated with "they actively oppose it" or "they would answer 'no' if asked, 'Do you think your values are wrong when it comes to which moral beings deserve moral consideration?'"

I think they don't actively oppose it, they would mostly answer "no" to that question, and it's very uncertain whether they will put the wide-MC-leading values into an aligned AI. I don't think CEV or similar reflection processes reliably lead to wide moral circles. I think they can still be heavily influenced by their initial set-up (e.g. what the values of humanity are when reflection begins).

This leads me to think that you only need (2) to be true in a very weak sense for MCE to matter. I think it's quite plausible that this is the case.

*Wide-MC meaning an extremely wide moral circle, e.g. includes insects, small/weird digital minds.

Comment by Jacy on Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment · 2018-02-22T14:53:57.524Z · EA · GW

I personally don't think WAS is as similar to the most plausible far future dystopias, so I've been prioritizing it less even over just the past couple of years. I don't expect far future dystopias to involve as much naturogenic (nature-caused) suffering, though of course it's possible (e.g. if humans create large numbers of sentient beings in a simulation, but then let the simulation run on its own for a while, then the simulation could come to be viewed as naturogenic-ish and those attitudes could become more relevant).

I think if one wants something very neglected, digital sentience advocacy is basically across-the-board better than WAS advocacy.

That being said, I'm highly uncertain here and these reasons aren't overwhelming (e.g. WAS advocacy pushes on more than just the "care about naturogenic suffering" lever), so I think WAS advocacy is still, in Gregory's words, an important part of the 'far future portfolio.' And often one can work on it while working on other things, e.g. I think Animal Charity Evaluators' WAS content (e.g. [guest blog post by Oscar Horta](https://animalcharityevaluators.org/blog/why-the-situation-of-animals-in-the-wild-should-concern-us/)) has helped them be more well-rounded as an organization, and didn't directly trade off with their farmed animal content.

Comment by Jacy on Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment · 2018-02-21T23:47:25.384Z · EA · GW

Those considerations make sense. I don't have much more to add for/against than what I said in the post.

On the comparison between different MCE strategies, I'm pretty uncertain which are best. The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by most of society. I'm not relatively very worried about, for example, far future dystopias where dog-and-cat-like-beings (e.g. small, entertaining AIs kept around for companionship) are suffering in vast numbers. And environmentalism is typically advocating for non-sentient beings, which I think is quite different than MCE for sentient beings.

I think the better competitors to farmed animal advocacy are advocating broadly for antispeciesism/fundamental rights (e.g. Nonhuman Rights Project) and advocating specifically for digital sentience (e.g. a larger, more sophisticated version of People for the Ethical Treatment of Reinforcement Learners). There are good arguments against these, however, such as that it would be quite difficult for an eager EA to get much traction with a new digital sentience nonprofit. (We considered founding Sentience Institute with a focus on digital sentience. This was a big reason we didn't.) Whereas given the current excitement in the farmed animal space (e.g. the coming release of "clean meat," real meat grown without animal slaughter), the farmed animal space seems like a fantastic place for gaining traction.

I'm currently not very excited about "Start a petting zoo at Deepmind" (or similar direct outreach strategies) because it seems like it would produce a ton of backlash because it seems too adversarial and aggressive. There are additional considerations for/against (e.g. I worry that it'd be difficult to push a niche demographic like AI researchers very far away from the rest of society, at least the rest of their social circles; I also have the same traction concern I have with advocating for digital sentience), but this one just seems quite damning.

"The upshot is that, even if there are some particularly high yield interventions in animal welfare from the far future perspective, this should be fairly far removed from typical EAA activity directed towards having the greatest near-term impact on animals. If this post heralds a pivot of Sentience Institute to directions pretty orthogonal to the principal component of effective animal advocacy, this would be welcome indeed."

I agree this is a valid argument, but given the other arguments (e.g. those above), I still think it's usually right for EAAs to focus on farmed animal advocacy, including Sentience Institute at least for the next year or two.

(FYI for readers, Gregory and I also discussed these things before the post was published when he gave feedback on the draft. So our comments might seem a little rehearsed.)

Comment by Jacy on Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment · 2018-02-21T22:09:37.908Z · EA · GW

Thanks! That's very kind of you.

I'm pretty uncertain about the best levers, and I think research can help a lot with that. Tentatively, I do think that MCE ends up aligning fairly well with conventional EAA (perhaps it should be unsurprising that the most important levers to push on for near-term values are also most important for long-term values, though it depends on how narrowly you're drawing the lines).

A few exceptions to that:

  • Digital sentience probably matters the most in the long run. There are good reasons to be skeptical we should be advocating for this now (e.g. it's quite outside of the mainstream so it might be hard to actually get attention and change minds; it'd probably be hard to get funding for this sort of advocacy (indeed that's one big reason SI started with farmed animal advocacy)), but I'm pretty compelled by the general claim, "If you think X value is what matters most in the long-term, your default approach should be working on X directly." Advocating for digital sentience is of course neglected territory, but Sentience Institute, the Nonhuman Rights Project, and Animal Ethics have all worked on it. People for the Ethical Treatment of Reinforcement Learners has been the only dedicated organization AFAIK, and I'm not sure what their status is or if they've ever paid full-time or part-time staff.

  • I think views on value lock-in matter a lot because of how they affect food tech (e.g. supporting The Good Food Institute). I place significant weight on this and a few other things (see this section of an SI page) that make me think GFI is actually a pretty good bet, despite my concern that technology progresses monotonically.

  • Because what might matter most is society's general concern for weird/small minds, we should be more sympathetic to indirect antispeciesism work like that done by Animal Ethics and the fundamental rights work of the Nonhuman Rights Project. From a near-term perspective, I don't think these look very good because I don't think we'll see fundamental rights be a big reducer of factory farm suffering.

  • This is a less-refined view of mine, but I'm less focused than I used to be on wild animal suffering. It just seems to cost a lot of weirdness points, and naturogenic suffering doesn't seem nearly as important as anthropogenic suffering in the far future. Factory farm suffering seems a lot more similar to far future dystopias than does wild animal suffering, despite WAS dominating utility calculations for the next, say, 50 years.

I could talk more about this if you'd like, especially if you're facing specific decisions like where exactly to donate in 2018 or what sort of job you're looking for with your skillset.

Comment by Jacy on Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment · 2018-02-21T16:38:47.981Z · EA · GW

I'm sympathetic to both of those points personally.

1) I considered that, and in addition to time constraints, I know others haven't written on this because there's a big concern of talking about it making it more likely to happen. I err more towards sharing it despite this concern, but I'm pretty uncertain. Even the detail of this post was more than several people wanted me to include.

But mostly, I'm just limited on time.

2) That's reasonable. I think all of these boundaries are fairly arbitrary; we just need to try to use the same standards across cause areas, e.g. considering only work with this as its explicit focus. Theoretically, since Neglectedness is basically just a heuristic to estimate how much low-hanging fruit there is, we're aiming at "The space of work that might take such low-hanging fruit away." In this sense, Neglectedness could vary widely. E.g. there's limited room for advocating (e.g. passing out leaflets, giving lectures) directly to AI researchers, but this isn't affected much by advocacy towards the general population.

I do think moral philosophy that leads to expanding moral circles (e.g. writing papers supportive of utilitarianism), moral-circle-focused social activism (e.g. anti-racism, not as much something like campaigning for increased arts funding that seems fairly orthogonal to MCE), and EA outreach (in the sense that the A of EA means a wide moral circle) are MCE in the broadest somewhat-useful definition.

Caspar's blog post is a pretty good read on the nuances of defining/utilizing Neglectedness.

Comment by Jacy on Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment · 2018-02-21T13:54:31.961Z · EA · GW

That makes sense. If I were convinced hedonium/dolorium dominated to a very large degree, and that hedonium was as good as dolorium is bad, I would probably think the far future was at least moderately +EV.

Comment by Jacy on Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment · 2018-02-21T13:50:57.015Z · EA · GW

Agreed.

Comment by Jacy on Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment · 2018-02-21T04:13:56.166Z · EA · GW

Yeah, I think that's basically right. I think moral circle expansion (MCE) is closer to your list items than extinction risk reduction (ERR) is because MCE mostly competes in the values space, while ERR mostly competes in the technology space.

However, MCE is competing in a narrower space than just values. It's in the MC space, which is just the space of advocacy on what our moral circle should look like. So I think it's fairly distinct from the list items in that sense, though you could still say they're in the same space because all advocacy competes for news coverage, ad buys, recruiting advocacy-oriented people, etc. (Technology projects could also compete for these things, though there are separations, e.g. journalists with a social beat versus journalists with a tech beat.)

I think the comparably narrow space of ERR is ER, which also includes people who don't want extinction risk reduced (or even want it increased), such as some hardcore environmentalists, antinatalists, and negative utilitarians.

I think these are legitimate cooperation/coordination perspectives, and it's not really clear to me how they add up. But in general, I think this matters mostly in situations where you actually can coordinate, for example when Democratic and Republican donors in a US general election agree not to give to their respective campaigns (in exchange for their counterparts also not doing so), or if there were anti-MCE EAs with whom MCE EAs could coordinate (which I think is basically what you're saying with "we'd be better off if they both decided to spend the money on anti-malaria bednets").

Comment by Jacy on Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment · 2018-02-21T00:55:20.253Z · EA · GW

Thanks for the comment! A few of my thoughts on this:

"Presumably we want some people working on both of these problems, some people have skills more suited to one than the other, and some people are just going to be more passionate about one than the other."

If one is convinced non-extinction civilization is net positive, this seems true and important. Sorry if I framed the post too much as one or the other for the whole community.

"Much of the work related to AIA so far has been about raising awareness about the problem (eg the book Superintelligence), and this is more a social solution than a technical one."

Maybe. My impression from people working on AIA is that they see it as mostly technical, and indeed they think much of the social work has been net negative. Perhaps not Superintelligence, but at least the work that's been done to get media coverage and widespread attention without the technical attention to detail of Bostrom's book.

I think the more important social work (from a pro-AIA perspective) is about convincing AI decision-makers to use the technical results of AIA research, but my impression is that AIA proponents still think getting those technical results is probably the more important project.

There's also social work in coordinating the AIA community.

"First, I expect clean meat will lead to the moral circle expanding more to animals. I really don't see any vegan social movement succeeding in ending factory farming anywhere near as much as I expect clean meat to."

Sure, though one big issue with technology is that it seems like we can do far less to steer its direction than we can do with social change. Clean meat tech research probably just helps us get clean meat sooner instead of making the tech progress happen when it wouldn't otherwise. The direction of the far future (e.g. whether clean meat is ever adopted, whether the moral circle expands to artificial sentience) probably matters a lot more than the speed at which it arrives.

Of course, this gets very complicated very quickly, as we consider things like value lock-in. Sentience Institute has a bit of basic sketching on the topic on this page.

"Second, I'd imagine that a mature science of consciousness would increase MCE significantly. Many people don't think animals are conscious, and almost no one thinks anything besides animals can be conscious"

I disagree that "many people don't think animals are conscious." I almost exclusively hear that view from the rationalist/LessWrong community. A recent survey suggested that 87.3% of US adults agree with the statement, "Farmed animals have roughly the same ability to feel pain and discomfort as humans," and presumably even more think they have at least some ability.

"Advanced neurotechnologies could change that - they could allow us to potentially test hypotheses about consciousness."

I'm fairly skeptical of this personally, partly because I don't think there's a fact of the matter when it comes to whether a being is conscious. I think Brian Tomasik has written eloquently on this. (I know this is an unfortunate view for an animal advocate like me, but it seems to have the best evidence favoring it.)

Comment by Jacy on How to get a new cause into EA · 2018-01-13T15:14:15.853Z · EA · GW

I'd go farther here and say all three (global poverty, animal rights, and far future) are best thought of as target populations rather than cause areas. Moreover, the space not covered by these three is basically just wealthy modern humans, which seems to be much less of a treasure trove than the other three because WMHs have the most resources, far more than the other three populations. (Potentially there are also medium-term future beings as a distinct population, depending on where we draw the lines.)

I think EA would probably be discovering more things if we were focused on looking not for new cause areas but for new specific intervention areas, comparable to individual health support for the global poor (e.g. antimalarial nets, deworming pills), individual financial help for the global poor (e.g. unconditional cash transfers), individual advocacy of plant-based eating (e.g. leafleting, online ads), institutional farmed animal welfare reforms (e.g. cage-free eating), technical AI safety research, and general extinction risk policy work.

If we think of the EA cause area landscape in "intervention area" terms, there seems to be a lot more change happening.

Comment by Jacy on Survey of leaders in the EA community on a range of important topics, like what skills they need and what causes are most effective · 2017-11-04T15:52:18.891Z · EA · GW

Thanks for the response. My main general thought here is just that we shouldn't depend on so much from the reader. Most people, even most thoughtful EAs, won't read in full and come up with all the qualifications on their own, so it's important for article writers to include those themselves, and to include those upfront and center in their articles.

If you wanted to spend a lot of time on "what causes do EA leadership favor," one project I see as potentially really valuable is a list of arguments/evidence and getting EA leaders to vote on their weights. Sort of a combination of 80k's quantitative cause assessment and this survey. I think this is a more ideal peer-belief-aggregation because it reduces the effects of dependence. Like if Rob and Jacy both prioritize the far future entirely because of Bostrom's calculation of how many beings could exist in it, then we'd come up with that single argument having a high weight, rather than two people highly favoring the far future. We might try this approach at Sentience Institute at some point, though right now we're more focused on just coming up with the lists of arguments/evidence in the field of moral circle expansion, so instead we did something more like your 2017 survey of researchers in this field. (Specifically, we would have researchers rate the pieces of evidence listed on this page: https://www.sentienceinstitute.org/foundational-questions-summaries)

That's probably not the best approach, but I'd like a survey approach that somehow tries to minimize the dependence effect. A simpler version would be to just ask for people's opinions but then have them rate how much they're basing their views on the views of their peers, or just ask for their view and confidence while pretending like they've never heard peer views, but this sort of approach seems more vulnerable to bias than the evidence-rating method.
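
A minimal sketch of the kind of evidence-rating aggregation I have in mind (the leaders, arguments, and ratings here are all made up for illustration):

```python
# Sketch of aggregating leaders' weights on underlying arguments rather than
# their bottom-line cause rankings, so a single shared argument (e.g., Bostrom's
# calculation) counts once rather than once per leader. All values are made up.

from statistics import mean

# ratings[leader][argument] = weight (0-10) that leader assigns to the argument.
ratings = {
    "Leader A": {"astronomical stakes": 9, "tractability of alignment": 4},
    "Leader B": {"astronomical stakes": 9, "tractability of alignment": 7},
}

arguments = list(next(iter(ratings.values())))
argument_weights = {arg: mean(r[arg] for r in ratings.values()) for arg in arguments}
print(argument_weights)
# {'astronomical stakes': 9.0, 'tractability of alignment': 5.5}
# Two leaders leaning on the same argument show up as one heavily weighted
# argument, not as two independent endorsements of the same cause.
```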

Anyway, have fun at EAG London! Curious if anything that happens there really surprises you.

Comment by Jacy on Survey of leaders in the EA community on a range of important topics, like what skills they need and what causes are most effective · 2017-11-04T01:29:41.770Z · EA · GW

[Disclaimer: Rob, 80k's Director of Research, and I briefly chatted about this on Facebook, but I want to make a comment here because that post is gone and more people will see it here. Also, as a potential conflict-of-interest, I took the survey and work at an organization that's between the animal and far future cause areas.]

This is overall really interesting, and I'm glad the survey was done. But I'm not sure how representative of EA community leaders it really is. I'd take the cause selection section in particular with a big grain of salt, and I wish it were more heavily qualified and discussed in different language. Tallying the organizations surveyed and the number of respondents per organization, my personal count is 14 meta, 12.5 far future, 3 poverty, and 1.5 animal. My guess is that a similar distribution holds for the 5 unaffiliated respondents. So it should be no surprise to readers that meta and far future work were most prioritized.* **

I think we shouldn't call this a general survey of EA leadership (e.g. the title of the post) when it's so disproportionate. The inclusion of more meta organizations makes sense, but there are poverty groups like the Against Malaria Foundation and Schistosomiasis Control Initiative, as well as animal groups like The Good Food Institute and The Humane League, that seem to meet the same bar for EA-ness as far future groups that were included, like CSER and MIRI.

The heavy focus on far future organizations might be partly due to selecting only organizations founded after the EA community coalesced. While that seems like a reasonable metric (among several possibilities), it also seems biased towards far future work, because that's a newer field, and it happens to be the metric that conveniently syncs up with 80k's cause prioritization views. Also, the ACE-recommended charity GFI was founded explicitly on the principle of effective altruism after EA coalesced. Their team says that quite frequently, and as far as I know, the leadership all identifies as EA. Perhaps you're using a metric more like social ties to other EA leaders, but that's exactly the sort of bias I'm worried about here.

Also, the EA community as a whole doesn't seem to hold this cause prioritization view (http://effective-altruism.com/ea/1e5/ea_survey_2017_series_cause_area_preferences/). Leadership can of course deviate from the broad community, but this is just another reason to be cautious in weighing these results.

I think your note about this selection is fair

  • "the group surveyed included many of the most clever, informed and long-involved people in the movement,"

and I appreciate that you looked a little at cause prioritization for relatively-unbiased subsets

  • "Views were similar among people whose main research work is to prioritise different causes – none of whom rated Global Development as the most effective,"
  • "on the other hand, many people not working in long-term focussed organisations nonetheless rated it as most effective"

but it's still important to note that you (Rob and 80k) personally favor these two areas strongly, which seems to create a big potential bias, and that we should be very cautious of groupthink in our community, where updating based on the views of EA leaders is highly prized and recommended. I know the latter is a harder concern to get around with a survey, but I think it should have been noted in the report, ideally in the Key Figures section. And as I mentioned at the beginning, I don't think this should be discussed as a general survey of EA leaders, at least not when it comes to cause prioritization.

This post certainly made me more worried personally that my prioritization of the far future could be more due to groupthink than I previously thought.


Here's the categorization I'm using for organizations (ff = far future). It might be off, but it's at least pretty close.

  • 80,000 Hours (3): meta
  • AI Impacts (1): ff
  • Animal Charity Evaluators (1): animal
  • Center for Applied Rationality (2): ff
  • Centre for Effective Altruism (3): meta
  • Centre for the Study of Existential Risk (1): ff
  • Charity Science: Health (1): poverty
  • DeepMind (1): ff
  • Foundational Research Institute (2): ff
  • Future of Humanity Institute (3): ff
  • GiveWell (2): poverty
  • Global Priorities Institute (1): meta
  • Leverage Research (1): meta
  • Machine Intelligence Research Institute (2): ff
  • Open Philanthropy Project (5): meta
  • Rethink Charity (1): meta
  • Sentience Institute (1): animal/ff
  • Unaffiliated (5)
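
For anyone who wants to check the arithmetic, here's a minimal sketch (Python, purely illustrative) that re-tallies the counts above; it assumes Sentience Institute is split evenly between animal and far future and leaves the unaffiliated respondents out of the category totals, which is how I arrived at the 14 / 12.5 / 3 / 1.5 split:

```python
from collections import Counter

# Respondent counts per organization, with my rough category labels.
orgs = [
    ("80,000 Hours", 3, "meta"),
    ("AI Impacts", 1, "ff"),
    ("Animal Charity Evaluators", 1, "animal"),
    ("Center for Applied Rationality", 2, "ff"),
    ("Centre for Effective Altruism", 3, "meta"),
    ("Centre for the Study of Existential Risk", 1, "ff"),
    ("Charity Science: Health", 1, "poverty"),
    ("DeepMind", 1, "ff"),
    ("Foundational Research Institute", 2, "ff"),
    ("Future of Humanity Institute", 3, "ff"),
    ("GiveWell", 2, "poverty"),
    ("Global Priorities Institute", 1, "meta"),
    ("Leverage Research", 1, "meta"),
    ("Machine Intelligence Research Institute", 2, "ff"),
    ("Open Philanthropy Project", 5, "meta"),
    ("Rethink Charity", 1, "meta"),
    ("Sentience Institute", 1, "animal/ff"),
]

totals = Counter()
for name, n, cat in orgs:
    if cat == "animal/ff":      # split the dual-focus org evenly
        totals["animal"] += n / 2
        totals["ff"] += n / 2
    else:
        totals[cat] += n

print(totals)  # Counter({'meta': 14, 'ff': 12.5, 'poverty': 3, 'animal': 1.5})
```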

*The 80k post notes that not everyone filled out all the survey answers, e.g. GiveWell only had one person fill out the cause selection section.

**Assuming the reader has already seen other evidence, e.g. that CFAR only recently adopted a far future mission, or that people like Rob went from other cause areas towards a focus on the far future.

Comment by Jacy on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T14:34:35.649Z · EA · GW

Another (possibly bad, but worth putting out there) solution is to list the names of people who downvoted. That of course has downsides, but it would add more accountability, especially given my suspicion that a few people are doing a lot of the downvoting against certain people/ideas.

Another is to have downvotes 'cost' karma, e.g. if you have 500 total karma, that allows you to make 50 downvotes.
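
As a rough illustration only (the 10% ratio is just the one implied by the 500-karma example, not a considered proposal for a specific number), the rule could look something like:

```python
def downvote_budget(total_karma: int, ratio: float = 0.1) -> int:
    """Allow a number of downvotes proportional to total karma."""
    return max(0, int(total_karma * ratio))

assert downvote_budget(500) == 50
```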

Comment by Jacy on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T14:18:19.970Z · EA · GW

Yeah, I'm totally onboard with all of that, including the uncertainty.

My view on downvoting is less that we need to remove it, and more that the status quo is terrible and we should be trying really hard to fix it.

Comment by Jacy on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-28T14:11:28.124Z · EA · GW

Yeah, I don't think downvotes are usually the best way of addressing bad arguments in the sense that someone is making a logical error, mistaken about an assumption, missing some evidence, etc. As in this thread, I think that leads to dogpiling, groupthink, and hostility in a way that outweighs the benefit downvoting provides by flagging bad arguments when thoughtful people don't have time to flag them via a thoughtful comment.

I think downvotes are mostly just good for bad comments in the sense that someone is purposefully lying, relying on personal attacks instead of evidence, or otherwise not abiding by basic norms of civil discourse. In these cases, I don't think the downvoting comes off nearly as hostile.

If you agree with that, then we must just disagree on whether examples (like my downvoted comment above) are bad arguments or bad comments. I think the community does pretty often downvote stuff it shouldn't.

Comment by Jacy on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T13:28:20.141Z · EA · GW

Another concrete suggestion: I think we should stop having downvotes on the EA Forum. I might not be appreciating some of the downsides of this change, but I think they are small compared to the big upside of mitigating the toxic/hostile/dogpiling/groupthink environment we currently seem to have.

When I've brought this up before, people liked the idea, but it never got discussed very thoroughly or implemented.

Edit: Even this comment seems to be downvoted due to disagreement. I don't think this is helpful.

Comment by Jacy on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T13:25:18.414Z · EA · GW

For what it's worth, I think if you had instead commented with: "As a newcomer to this community, I see very little evidence that EA prizes accuracy more than average. This seems contrary to its goals, and makes me feel sad and unwelcome," (or something similar that politely captures what you mean) that would have been a valuable contribution to the discussion.

That being said, you might have still gotten downvoted. People's downvoting behavior on this forum is really terrible and a huge area for improvement in online EA discourse.

Comment by Jacy on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T13:20:53.416Z · EA · GW

I wouldn't concern yourself much with downvotes on this forum. People use downvotes for a lot more than the useful/not useful distinction they're designed for (the most common other reason is to signal against views they disagree with when they see an opening). I was recently talking to someone about what big improvements I'd like to see in the EA community's online discussion norms, and honestly, if I could remove either bad comment behavior or bad liking/voting behavior, I'd actually pick the latter.

To put it another way, though I'm still not sure exactly how to explain this, I think no downvotes and one thoughtful comment explaining why your comment is wrong (and no upvotes on that comment) should do more to change your mind than a large number of downvotes on your comment.

I'm really still in favor of just removing downvotes from this forum, since this issue has been so persistent over the years. I think there would be downsides, but the hostile/groupthink/dogpiling environment that the downvoting behavior facilitates is just really really terrible.