# The Future Might Not Be So Great

post by Jacy · 2022-06-30T13:01:21.617Z · EA · GW · 118 comments

This is a link post for https://www.sentienceinstitute.org/blog/the-future-might-not-be-so-great

## Contents

  Summary
Arguments on the Expected Value (EV) of Human Expansion
Argument Name
Description
Arguments for Positive Expected Value (EV)
Historical Progress
Value Through Intent
Value Through Evolution
Convergence of Patiency and Agency
Reasoned Cooperation
Discoverable Moral Reality
Arguments for Negative Expected Value (EV)
Historical Harms
Disvalue Through Intent
Disvalue Through Evolution
Divergence of Patiency and Agency
Threats
Arguments that May Increase or Decrease Expected Value (EV)
Conceptual Utility Asymmetry
Empirical Utility Asymmetry
Complexity Asymmetry
Procreation Asymmetry
EV of the Counterfactual
The Nature of Digital Minds, People, and Sentience
Life Despite Suffering
The Nature of Value Refinement
Scaling of Value and Disvalue
EV of Human Expansion after Near-Extinction or Other Events
The Zero Point of Value
Related Work
Terminology
What Does the EV Need to be to Prioritize Extinction Risks?
Time-Sensitivity
Biases
Future Research on the EV of Human Expansion
References


Many thanks for feedback and insight from Kelly Anthis, Tobias Baumann, Jan Brauner, Max Carpendale, Sasha Cooper, Sandro Del Rivo, Michael Dello-Iacovo, Michael Dickens, Anthony DiGiovanni, Marius Hobbhahn, Ali Ladak, Simon Knutsson, Greg Lewis, Kelly McNamara, John Mori, Thomas Moynihan, Caleb Ontiveros, Sean Richardson, Zachary Rudolph, Manny Rutinel, Stefan Schubert, Michael St. Jules, Nell Watson, Peter Wildeford, and Miranda Zhang. This essay is in part an early draft of an upcoming book chapter on the topic, and I will add the citation here when it is available.

Our lives are not our own. From womb to tomb, we are bound to others, past and present. And by each crime and every kindness, we birth our future. ⸻ Cloud Atlas (2012)

# Summary

The prioritization of extinction risk reduction depends on an assumption that the expected value (EV)[1] of human survival and interstellar colonization is highly positive. In the feather-ruffling spirit of EA Criticism and Red Teaming [? · GW], this essay lays out many arguments for a positive EV and a negative EV. This matters because, insofar as the EV is lower than we previously believed, we should shift some longtermist [? · GW] resources away from the current focus on extinction risk reduction. Extinction risks are the most extreme category of population risks, which are risks to the number of individuals in the long-term future. We could shift resources towards the other type of long-term risk, quality risks, which are risks to the moral value of individuals in the long-term future, such as whether they experience suffering or happiness [? · GW].[2] Promising approaches to improve the quality of the long-term future include some forms of AI safety [? · GW], moral circle expansion [? · GW], cooperative [? · GW] game theory [? · GW], digital minds [? · GW], and global priorities [? · GW] research. There may be substantial overlap with extinction risk reduction approaches, but in this case and in general, much more research is needed. I think that the effective altruism (EA) emphasis on existential risk could be replaced by a mindset of existential pragmatism:

Rather than ensuring humanity expands its reach throughout the universe, we must ensure that the universe will be better for it.

I have spoken to many longtermist EAs about this crucial consideration [? · GW], and for most of them, that was their first time explicitly considering the EV of human expansion.[3] My sense is that many more are considering it now, and the community is growing more skeptical of highly positive EV as the correct estimate. I’m eager to hear more people’s thoughts on the all-things-considered estimate of EV, and I discuss the limited work done on this topic to date in the “Related Work” section.

In the following table, I lay out the object-level arguments on the EV of human expansion, and the rest of the essay details meta-considerations (e.g., option value). The table also includes the strongest supporting arguments that increase the evidential weight of their corresponding argument and the strongest counterarguments that reduce the weight. The arguments are not mutually exclusive and are merely intended as broad categories that reflect the most common and compelling arguments for at least some people (not necessarily me) on this topic. For example, Historical Progress and Value Through Intent have been intertwined insofar as humans intentionally create progress, so users of this table should be mindful that they do not overcount (e.g., double count) the same evidence. I handle this in my own thinking by splitting an overlapping piece of evidence among its categories in proportion to a rough sense of fit in those categories.[4]

In the associated spreadsheet, I list my own subjective evidential weight scores where positive numbers indicate evidence for +EV and negative numbers indicate evidence for -EV. It is helpful to think through these arguments with different assignment and aggregation methods, such as linear or logarithmic scaling. With different methodologies to aggregate my own estimates or those of others, the total estimate is highly negative around 30% of the time, weakly negative 40%, and weakly positive 30%. It is almost never highly positive. I encourage people to make their own estimates, and while I think such quantifications are usually better than intuitive gestalts, all such estimates should be taken with golf balls of salt.[5]
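To make the aggregation methods concrete, here is a minimal Python sketch with made-up evidential weight scores (illustrative only, not the values in the associated spreadsheet) showing how a linear and a logarithmic aggregation of the same scores can differ:

```python
import math

# Hypothetical evidential weight scores (illustrative, not the spreadsheet's
# actual values): positive = evidence for +EV, negative = evidence for -EV.
scores = {
    "Historical Progress": 3,
    "Value Through Intent": 2,
    "Historical Harms": -3,
    "Divergence of Patiency and Agency": -2,
    "Threats": -4,
}

def linear_total(weights):
    """Sum scores directly, treating each unit of weight equally."""
    return sum(weights)

def log_total(weights):
    """Compress large scores: sign(w) * log2(1 + |w|)."""
    return sum(math.copysign(math.log2(1 + abs(w)), w) for w in weights)

vals = scores.values()
print(linear_total(vals))         # → -4
print(round(log_total(vals), 2))  # → -2.32
```

Both methods yield a weakly negative total here, but because logarithmic scaling compresses the largest scores, the two can disagree in sign when a few extreme weights dominate, which is why trying several assignment and aggregation methods is informative.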

This is an atypical structure for an argumentative essay—laying out all the arguments, for and against, instead of laying out arguments for my position and rebutting the objections—but I think that we should detach argumentation from evaluation. I’m not aiming for maximum persuasiveness. Indeed, the thrust of my critique is that EAs have failed to consider these arguments in such a systematic way, either neglecting the assumption entirely or selecting only a handful of the multitude of evidence and reason we have available. Overall, my current thinking (primarily an average of several aggregations of quantified estimates and Aumann updating on others’ views) is that the EV of human expansion is not highly positive. For this and other reasons [EA(p) · GW(p)], I prioritize improving the quality of the long-term future rather than increasing its expected population.

# Related Work

Some of our successors might live lives and create worlds that, though failing to justify past suffering, would give us all, including some of those who have suffered, reasons to be glad that the Universe exists. ⸻ Derek Parfit (2017)

The field of existential risk has intellectual roots as deep as human history in notions of “apocalypse” such as the end of the Mayan calendar. Thomas Moynihan (2020) distinguishes apocalypse as having a sense to it or a justification, such as the actions of a supernatural deity, while “extinction” entails “the ending of sense” entirely. This notion of human extinction is traced back only to the Enlightenment beginning in the 1600s, and its most well-known articulation in the 21st century is under the category of existential risks (also known as x-risks), a term coined in 2002 by philosopher Nick Bostrom for risks “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”

The most famous essay on existential risk is “Astronomical Waste” (Bostrom 2003), in which Bostrom argues that if humanity could colonize the Virgo supercluster, the massive concentration of galaxies that includes our own Milky Way and 47,000 of its neighbors, then we could sustain approximately 10^38 human beings, an intuitively inconceivably large number. Bostrom argues that the priority of utilitarians should be to reduce existential risk and ensure we seize this cosmic endowment, though the leap from the importance of the long-term future to existential risk reduction is contentious (e.g., Beckstead 2013b [? · GW]). The field of existential risk studies has risen at pace with the growth of effective altruism (EA), with a number of seminal works summarizing and advancing the field (Matheny 2007; Bostrom 2012; Beckstead 2013a; Bostrom 2013; 2014; Tegmark 2017; Russell 2019; Moynihan 2020; Ord 2020; MacAskill forthcoming).

Among existential risks, EAs have largely focused on population risks (particularly extinction risks); the term “x-risk,” which canonically refers to existential risk, is often interpreted as extinction risk (see Aird 2020a [EA · GW]). A critical assumption underlying this focus has been that the expected value of humanity’s survival and interstellar colonization is very high. This assumption largely goes unstated, but it was briefly acknowledged in Beckstead (2013a):

Is the expected value of the future negative? Some serious people—including Parfit (2011, Volume 2, chapter 36), Williams (2006), and Schopenhauer (1942)—have wondered whether all of the suffering and injustice in the world outweigh all of the good that we've had. I tend to think that our history has been worth it, that human well-being has increased for centuries, and that the expected value of the future is positive. But this is an open question, and stronger arguments pointing in either direction would be welcome.

Christiano (2013) asked, “Why might the future be good?” though, as I understood it, that essay did not mention the possibility of a negative future. I had also implicitly accepted the assumption of a good future until 2014, when I thought through the evidence and decided to prioritize moral circle expansion at the intersection of animal advocacy and longtermism (Anthis 2014). I brought it up on the old EA Forum in Anthis (2016a) [EA · GW], and West (2017) [EA · GW] detailed a version of the “Value Through Intent” argument. I also remember extensive Facebook threads around this time, though I do not have links to share. I finally wrote up my thoughts on the topic in detail in Anthis (2018b) [EA · GW] as part of a prioritization argument for moral circle expansion over decreasing extinction risk through AI alignment, and this essay is a follow-up to and refinement of those ideas.

Later in 2018, Brauner and Grosse-Holz (2018) [EA · GW] published an EA Forum essay arguing that the expected value of extinction risk reduction is positive. In my opinion, it failed to consider many of the arguments on the topic, as discussed in EA Forum comments and a rebuttal, also on the EA Forum, DiGiovanni (2021) [EA · GW]. There is also a chapter in MacAskill (forthcoming) covering similar ground as Brauner and Grosse-Holz, with similar arguments missing, in my opinion. Overall, these writings primarily focus on three arguments:

1. the “Value Through Intent” or “will” argument [EA(p) · GW(p)], that insofar as humanity exerts its will, we tend to produce value rather than disvalue;
2. the likelihood that factory farming and wild animal suffering, the largest types of suffering today, will persist into the far future; and
3. axiological considerations, particularly the population ethics question of whether creating additional beings with positive welfare is morally good. This has been the main argument against increasing population from some negative utilitarians and other “suffering-focused” EAs, such as the Center on Long-Term Risk (CLR) [? · GW] and Center for Reducing Suffering (CRS) [? · GW], since Tomasik (2006).

These are three important considerations, but they cover only a small portion of the total landscape of evidence and reason that we have available for estimating the EV of human expansion. For transparency, I should flag that at least some of the authors would disagree with me about this critique of their work.

Overall, I think the arguments against a highly positive EV of human expansion have been the most important blindspot of the EA community to date. This is the only major dissenting opinion I have with the core of the EA memeplex. I would guess over 90% of longtermist EAs with whom I have raised this topic have never considered it before, despite acknowledging during our conversation that the expected value being highly positive is a crucial assumption for prioritizing extinction risk and that it is on shaky ground—if not deciding that it is altogether mistaken. (Of course, this is not meant as a representative sample of all longtermist EAs.) While examining this assumption and deciding that the far future is not highly positive would not completely overhaul longtermist EA priorities, I tentatively think that it would significantly change our focus. In particular, we should shift resources away from extinction risk and towards quality risks, including more global priorities research to better understand this and other crucial considerations. I would be eager for more discussion of this topic, and the sort of evidence I expect to most change my mind is the cooperative game theory research done by CLR [? · GW], the Center on Human-Compatible AI (CHAI) [? · GW], and others in AI safety; the moral circle expansion and digital minds research done by Sentience Institute (SI) [? · GW], Future of Humanity Institute (FHI) [? · GW], and others in longtermism and AI safety; and all sorts of exploration of concrete scenarios similar to The Age of Em (Hanson 2016) and AI takeoff “training stories” (Hubinger 2021) [AF · GW]. I expect fewer updates from more conceptual discourse like the works cited above on the EA Forum and this essay, but I still see them as valuable contributions. See further discussion in the “Future Research on the EV of Human Expansion” subsection below.

## Terminology

I separate the moral value of the long-term future into two factors: population, the number of individuals at each point in time, and quality, the moral value of each individual’s existence at each point in time. The moral value of the long-term future is thus the double sum of quality across individuals across time. Risks to the number of individuals (living sufficiently positive lives) are population risks, and risks to the quality of each individual life are quality risks.
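Using notation introduced here only for clarity (the essay states this definition in words), write $q_{i,t}$ for the quality of individual $i$'s existence at time $t$ and $I_t$ for the set of individuals existing at $t$. The double sum is then:

```latex
V = \sum_{t} \sum_{i \in I_t} q_{i,t}
```

Population risks threaten the size of $I_t$ (for individuals with sufficiently positive $q_{i,t}$); quality risks threaten the values $q_{i,t}$ themselves.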

Extinction risks are a particular sort of population risk, those that would “annihilate Earth-originating intelligent life,” though I would also include threats towards populations of non-Earth-originating and non-intelligent (and perhaps even non-living) individuals who matter morally, and I get the sense that others have also favored this more inclusive definition. Non-existential population risks could include a permanent halving of the population or a delay of one-third of the universe’s remaining lifetime in humanity’s interstellar expansion. There is no consensus on where exactly the cutoff lies between existential and non-existential population risks, though there does seem to be consensus that extinction of humans (with no creation of post-humans, such as whole brain emulations [? · GW]) is existential.

Quality risks are risks to the moral value of individuals who may exist in the long-term future. Existential quality risks are those that “permanently and drastically curtail its potential” moral value, such as all individuals being moved from positive to zero or positive to negative value. Non-existential quality risks may include one-tenth of the future population dropping from highly positive to barely positive quality, one-fourth of the future population dropping from barely positive to barely negative quality, and so on. Again, this may be better understood as a spectrum of existentiality, rather than two neatly separated categories, because it is unclear at what point potential is permanently and drastically curtailed. Quality risks include suffering risks (also known as _s-risks_), “risks of events that bring about suffering in cosmically significant amounts” (Althaus and Gloor 2016; Tomasik 2011), which was noted as “weirdly sidelined” by total utilitarians in Rowe’s (2022) “Critiques of EA that I Want to Read.” [EA · GW]

These categories are not meant to coincide with the existential risk taxonomies of Bostrom (2002) (bangs, crunches, shrieks, whimpers) or Bostrom (2013) (human extinction, permanent stagnation, flawed realization, subsequent ruination), in part because those are worded in terms of positive potential rather than an aggregation of positive and negative outcomes. However, one can reasonably view some of those categories (e.g., shrieks and failed realizations) as including some positive, zero, or negative quality trajectories because they have a failed realization of positive potential. Aird (2020b) [EA · GW] has some useful Venn diagrams of the overlaps of some long-term risks.

The term “trajectory change” [? · GW] has variously been used as a category that, from my understanding, includes the mitigation or exacerbation of all of the risks above, such as Beckstead’s (2013a) definition of trajectory changes as actions that “slightly or significantly alter the world’s development trajectory.”

# What Does the EV Need to be to Prioritize Extinction Risks?

Explosive forces, energy, materials, machinery will be available upon a scale which can annihilate whole nations. Despotisms and tyrannies will be able to prescribe the lives and even the wishes of their subjects in a manner never known since time began. If to these tremendous and awful powers is added the pitiless sub-human wickedness which we now see embodied in one of the most powerful reigning governments, who shall say that the world itself will not be wrecked, or indeed that it ought not to be wrecked? There are nightmares of the future from which a fortunate collision with some wandering star, reducing the earth to incandescent gas, might be a merciful deliverance. ⸻ Winston Churchill (1931)

Under the standard definition of utility, you should take actions with positive expected value (EV), not take actions with negative EV, and it doesn’t matter if you take actions with zero EV. However, prioritization is plausibly much more complicated than this. Is the EV of the action higher than counterfactual actions? Is EV the right approach for imperfect individual decision-makers? Is EV the right approach for a group of people working together? What is the track record for EV decision-making relative to other approaches? Etc. There are many different views that a reasonable person can come to on how best to navigate these conceptual and empirical questions, but I believe that the EV needs to be highly positive to prioritize extinction risks.

As I discussed in Anthis (2018b) [EA · GW], I think an intuitive but mistaken argument on this topic is that if we are uncertain about the EV or expect it is close to zero, we should favor reducing extinction risk to preserve option value. Fortunately, I have heard this argument much less frequently in recent years, but it is still in a drop-down section of 80,000 Hours’ “The Case for Reducing Existential Risks.” This reasoning seems mistaken for two reasons:

First, option value is only good insofar as we have control over the exercising of future options or expect those who have control to exercise it well. In the course of human civilization, even the totality of the EA movement has relatively little control over humanity’s actions—though arguably a lot more than most measures would make it appear due to our strategic approach, particularly targeting high-leverage domains such as advanced AI—and it is unclear that EA will retain even this modest level of control. The argument that option value is good because our descendants will use it well is circular because the case against extinction risk reduction is primarily focused on humanity not using its options well (i.e., humanity not using its options well is both the premise and the conclusion). An argument that relies on the claim that is being contested is very limited. However, we have more control if one thinks extinction timelines are very short and, if one survives, they and their colleagues will have substantial control over humanity’s actions; we also may be optimistic about human action despite being pessimistic about the future if we think nonhuman forces such as aliens and evolution are the decisive drivers of long-term disvalue.

Second, continued human existence very plausibly limits option value in similar ways to nonexistence. Whether we are in a time of perils or not, there is no easy “off switch” for which humanity can decide to let itself go extinct, especially with advanced technologies (e.g., spreading out through von Neumann probes). It is not as if we can or should reduce extinction risk in the 2020s then easily raise it in the 2030s based on further global priorities research. Still, there is a greater variety of non-extinct than extinct civilizations, so insofar as we want to preserve a wide future of possibilities, that is reason to favor extinction risk reduction.

Instead of option value, the more important considerations to me are (i) that we have other promising options with high EV such that extinction risk reduction needs to be more positive than these other options in order to justify prioritization and (ii) that we should have some risk aversion and sandboxing of EV estimates such that we should sometimes treat close-to-zero values as zero. It’s also unclear how to weigh the totality of evidence here, but insofar as it is weak and speculative—as with most questions about the long-term future—one may pull their estimate towards a prior, though it is unclear what that prior should be. If one thinks zero is a particularly common answer in an appropriate reference class, that could be reasonable, but it depends on many factors beyond the scope of this essay.

# Time-Sensitivity

If we are allocating resources to both population and quality risks, one could argue that we should spend resources on population risks first because the quality of individual lives only matters insofar as those individuals exist. The opposite is true as well: For example, if a quality of zero were locked in for the long-term future, then increasing or decreasing the population would have no moral value or disvalue. Outcomes of exactly zero quality might seem less likely than outcomes of exactly zero population, though this depends on the “EV of the Counterfactual” (e.g., life originating on other planets) and is more contentious for close-to-zero quantities.

As with option value, the future depends on the past, so for every year that passes, the future has fewer degrees of freedom. This is most apparent in the development of advanced AI, whose trajectory may hinge on early-stage choices, such as selecting training regimes that are more likely to lead to its alignment with its designers’ values or selecting those values with which to align the AI (i.e., value lock-in [? · GW]). In general, there are strong arguments for time-sensitivity for both types of trajectory change, especially with advanced technologies such as life extension [? · GW] and von Neumann probes.

# Biases

To our amazement we suddenly exist, after having for countless millennia not existed; in a short while we will again not exist, also for countless millennia. That cannot be right, says the heart. ⸻ Arthur Schopenhauer (1818, translation 2008)

We could be biased towards optimism or pessimism. Among the demographics of EA, I think that we should probably be more worried about bias towards optimism. Extreme suffering, as described by Tomasik (2006), is a topic that people are very tempted to ignore, downplay, or rationalize (Cohen 2001). In general, the prospect of future dystopias is uncomfortable and unpleasant to think about. Most of us dread the possibility that our legacy in the universe could be a tragic one, and such a gloomy outlook does not resonate with favored trends of techno-optimism or the heroic notion of saving humanity from extinction. However, the sign of this bias can be flipped, such as in social groups where pessimism and doomsaying are in vogue. My experience is that people in EA and longtermism tend to be much more ready to dismiss pessimism and suffering-focused ethics than optimism and happiness-focused ethics, especially based on superficial claims that pessimism is driven by the personal dispositions and biases of its proponents. For a more detailed discussion on biases related to (not) prioritizing suffering, see Vinding (2020).

Additionally, given that the default approach to longtermism and existential risk is to reduce extinction risk, and there has already been over a decade of focus on that, we should be very concerned about status quo bias [? · GW] and the incentive structure of EA as it is today. This is one reason to encourage self-critique as individuals and as a community, such as with the Criticism and Red-Teaming Contest [? · GW]. That contest is one reason I wrote this essay, though I had already committed to writing a book chapter on this topic before the contest was announced.

I think we should focus more on the object-level arguments than on biases, but given how our answer to this question hinges on our intuitive estimates of extremely complicated figures, bias is probably more important than normal. I further discussed the merits of considering bias and listed many possible biases towards both moral circle expansion and reducing extinction risk through AI alignment in Anthis (2018b) [EA · GW].

One conceptual challenge is that a tendency towards pessimism or optimism could either be accounted for as a bias that needs correction or as a fact about the relative magnitudes of value and disvalue. On one hand, we might say that the importance of disvalue in evolution (e.g., the constant danger of one misstep curtailing all future spread of one’s genes) has made us care more about suffering than we should. On the other hand, we might say that it is a fact about how disvalue tends to be more common, subjectively worse, or objectively worse in the universe.

# Future Research on the EV of Human Expansion

Because most events in the long-term future entail some sort of value or disvalue, most new information from longtermist research provides some evidence on the EV of human expansion. As stated above, I’m particularly excited about cooperative game theory research (e.g., CLR [? · GW], CHAI [? · GW]), moral circle expansion and digital minds research (e.g., SI [? · GW], FHI [? · GW]), and exploration of concrete trajectories (e.g., Hanson 2016; Hubinger 2021 [AF · GW]). I’m relatively less excited (though still excited!), on the margin, by entirely armchair taxonomization and argumentation like that in this essay. That includes research on axiological asymmetries, such as more debate on suffering-focused ethics [? · GW] or population ethics [? · GW], though these can be more useful for other topics and perhaps other people considering this question. My lack of enthusiasm is largely because in the past 8 years of having this view that the EV of human expansion is not highly positive, very little of the new evidence has come from armchair reasoning and argumentation, despite that being more common (although what sort of research is most common depends on where one draws the boundaries because, again, so much research has implications for EV).

In general, this is such an encompassing, big-picture topic that empirical data is extremely limited relative to scope, and it seems necessary to rely on qualitative intuitions, quantitative intuitions, or back-of-the-envelope calculations a la Dickens’ (2016) “A Complete Quantitative Model for Cause Selection” [EA · GW] or Tarsney’s (2022) “The Epistemic Challenge to Longtermism.” I would like to see a more systematic survey of such intuitions, ideally from 5-30 people who have read through this essay and the “Related Work.” Ideally these would be stated as credible intervals or similar probability distributions, such that we can more easily quantify uncertainty in the overall estimate. As with all topics, I think we should Aumann update on each other’s views, a process in which I split the difference between my belief and someone else’s even if I do not know all the prior and posterior evidence on which they base their view. Of course, this is messy in the real world, for instance because we presumably should account not just for the few people with whom we happen to know their beliefs, but also for our expectations of the many people who also have a belief and even hypothetical people who could have a belief (e.g., unbiased versions of real-world people). It is also unclear whether normative (e.g., moral) views constitute the sort of belief that should be updated in this way, such as between people with fundamentally different value trade-offs between happiness and suffering.[7] There are cooperative reasons [? · GW] to deeply account for others’ views, and one may choose to account for moral uncertainty [? · GW].[6] In general, I would be very interested in a survey that just asks for numbers like those in the table above and allows us to aggregate those beliefs in a variety of ways; a more detailed case for how that aggregation should work is beyond the scope of this essay.
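As a minimal sketch of one such aggregation (with made-up numbers, and inverse-variance pooling chosen here purely for illustration rather than as the recommended method), beliefs stated as normal distributions over the EV can be combined as follows:

```python
# A sketch (hypothetical numbers) of pooling several people's EV estimates,
# each stated as a normal belief: (mean, standard deviation). Inverse-variance
# weighting is one standard choice; the essay leaves the right method open.

def pool_normals(beliefs):
    """beliefs: list of (mean, sd) pairs. Returns a pooled (mean, sd) by
    precision weighting, as if each belief were independent evidence."""
    precisions = [1 / sd**2 for _, sd in beliefs]
    total_precision = sum(precisions)
    mean = sum(m * p for (m, _), p in zip(beliefs, precisions)) / total_precision
    return mean, total_precision ** -0.5

# Three hypothetical respondents: a confident pessimist, an uncertain
# optimist, and someone near zero.
beliefs = [(-2.0, 1.0), (1.0, 3.0), (0.0, 2.0)]
mean, sd = pool_normals(beliefs)
print(round(mean, 3), round(sd, 3))  # → -1.388 0.857
```

Note that the pooled uncertainty is narrower than any individual's, which is one reason eliciting credible intervals rather than bare point estimates makes the overall uncertainty easier to quantify; a real survey would also need to handle correlated evidence and normative disagreement, which this sketch ignores.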

If you are persuaded by the arguments that the expected value of human expansion is not highly positive or that we should prioritize the quality of the long-term future, promising approaches include research, field-building, and community-building, such as at the Center on Long-Term Risk [? · GW], Center for Reducing Suffering [? · GW], Future of Humanity Institute [? · GW], Global Catastrophic Risk Institute [? · GW], Legal Priorities Project [? · GW], Open Philanthropy [? · GW], and Sentience Institute [? · GW], as well as working at other AI safety and EA organizations with an eye towards ensuring that, if we survive, the universe is better for it. Some of this work has substantial room for more funding, and related jobs can be found at these organizations’ websites and on the 80,000 Hours job board.

# References

Aird, Michael. 2020a. “Clarifying Existential Risks and Existential Catastrophes.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/skPFH8LxGdKQsTkJy/clarifying-existential-risks-and-existential-catastrophes [EA · GW].

———. 2020b. “Venn Diagrams of Existential, Global, and Suffering Catastrophes.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/AJbZ2hHR4bmeZKznG/venn-diagrams-of-existential-global-and-suffering [EA · GW].

Alighieri, Dante. 1307. Convivio. https://www.loebclassics.com/view/marcus_tullius_cicero-de_finibus_bonorum_et_malorum/1914/pb_LCL040.41.xml.

Althaus, David, and Tobias Baumann. 2020. “Reducing Long-Term Risks from Malevolent Actors.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors [EA · GW].

Althaus, David, and Lukas Gloor. 2016. “Reducing Risks of Astronomical Suffering: A Neglected Priority.” Center on Long-Term Risk. https://longtermrisk.org/reducing-risks-of-astronomical-suffering-a-neglected-priority/.

Anthis, Jacy Reese. 2014. “How Do We Reliably Impact the Far Future?” The Best We Can. https://web.archive.org/web/20151106103159/http://thebestwecan.org/2014/07/20/how-do-we-reliably-impact-the-far-future/.

———. 2016a. “Some Considerations for Different Ways to Reduce X-Risk.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/NExT987oY5GbYkTiE/some-considerations-for-different-ways-to-reduce-x-risk [EA · GW].

———. 2016b. “Why Animals Matter for Effective Altruism.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/ch5fq73AFn2Q72AMQ/why-animals-matter-for-effective-altruism [EA · GW].

———. 2018a. The End of Animal Farming: How Scientists, Entrepreneurs, and Activists Are Building an Animal-Free Food System. Boston: Beacon Press.

———. 2018b. “Why I Prioritize Moral Circle Expansion Over Artificial Intelligence Alignment.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/BY8gXSpGijypbGitT/why-i-prioritize-moral-circle-expansion-over-artificial [EA · GW].

———. 2018c. “Animals and the Far Future.” EAGxAustralia. https://www.youtube.com/watch?v=NTV81NZSuKw.

———. 2022. “Consciousness Semanticism: A Precise Eliminativist Theory of Consciousness.” In Biologically Inspired Cognitive Architectures 2021, edited by Valentin V. Klimov and David J. Kelley, 1032:20–41. Studies in Computational Intelligence. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-96993-6_3.

Anthis, Jacy Reese, and Eze Paez. 2021. “Moral Circle Expansion: A Promising Strategy to Impact the Far Future.” Futures 130: 102756. https://doi.org/10.1016/j.futures.2021.102756.

Askell, Amanda, Yuntao Bai, Anna Chen, et al. 2021. “A General Language Assistant as a Laboratory for Alignment.” arXiv. https://arxiv.org/abs/2112.00861.

Beckstead, Nick. 2013. “On the Overwhelming Importance of Shaping the Far Future.” Rutgers University. https://doi.org/10.7282/T35M649T.

Benatar, David. 2006. Better Never to Have Been: The Harm of Coming into Existence. New York: Clarendon Press.

Bostrom, Nick. 2002. “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards.” Journal of Evolution and Technology 9. https://ora.ox.ac.uk/objects/uuid:827452c3-fcba-41b8-86b0-407293e6617c.

———. 2003. “Astronomical Waste: The Opportunity Cost of Delayed Technological Development.” Utilitas 15 (3): 308–14. https://doi.org/10.1017/S0953820800004076.

———. 2009. “Moral uncertainty – towards a solution?” Overcoming Bias. https://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html.

———. 2012. Global Catastrophic Risks. Repr. Oxford: Oxford University Press.

———. 2013. “Existential Risk Prevention as Global Priority.” Global Policy 4 (1): 15–31. https://doi.org/10.1111/1758-5899.12002.

———. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

Bradbury, Ray. 1979. “Beyond 1984: The People Machines.” In Yestermorrow: Obvious Answers to Impossible Futures.

Brauner, Jan M., and Friederike M. Grosse-Holz. 2018. “The Expected Value of Extinction Risk Reduction Is Positive.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/NfkEqssr7qDazTquW/the-expected-value-of-extinction-risk-reduction-is-positive [EA · GW].

Christiano, Paul. 2013. “Why Might the Future Be Good?” Rational Altruist. https://rationalaltruist.com/2013/02/27/why-will-they-be-happy/.

Churchill, Winston. 1931. “Fifty Years Hence.” https://www.nationalchurchillmuseum.org/fifty-years-hence.html.

Cowen, Tyler. 2018. Stubborn Attachments: A Vision for a Society of Free, Prosperous, and Responsible Individuals.

Crootof, Rebecca. 2019. “'Cyborg Justice' and the Risk of Technological-Legal Lock-In.” 119 Columbia Law Review Forum 233.

Deutsch, David. 2011. The Beginning of Infinity: Explanations That Transform the World. London: Allen Lane.

Dickens, Michael. 2016. “A Complete Quantitative Model for Cause Selection.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/fogJKYXvqzkr9KCud/a-complete-quantitative-model-for-cause-selection [EA · GW].

DiGiovanni, Anthony. 2021. “A Longtermist Critique of ‘The Expected Value of Extinction Risk Reduction Is Positive.’” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/RkPK8rWigSAybgGPe/a-longtermist-critique-of-the-expected-value-of-extinction-2 [EA · GW].

Gloor, Lukas. 2017. “Tranquilism.” Center on Long-Term Risk. https://longtermrisk.org/tranquilism/.

———. 2018. “Cause Prioritization for Downside-Focused Value Systems.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/225Aq4P4jFPoWBrb5/cause-prioritization-for-downside-focused-value-systems [EA · GW].

Greaves, Hilary, and Will MacAskill. 2017. “A Research Agenda for the Global Priorities Institute.” https://globalprioritiesinstitute.org/wp-content/uploads/GPI-Research-Agenda-December-2017.pdf.

Hanson, Robin. 2016. The Age of Em: Work, Love, and Life When Robots Rule the Earth. First Edition. Oxford: Oxford University Press.

Harris, Jamie. 2019. “How Tractable Is Changing the Course of History?” Sentience Institute. http://www.sentienceinstitute.org/blog/how-tractable-is-changing-the-course-of-history.

Hobbhahn, Marius, Eric Landgrebe, and Beth Barnes. 2022. “Reflection Mechanisms as an Alignment Target: A Survey.” LessWrong. https://www.lesswrong.com/posts/XyBWkoaqfnuEyNWXi/reflection-mechanisms-as-an-alignment-target-a-survey-1 [LW · GW].

Hubinger, Evan. 2021. “How Do We Become Confident in the Safety of a Machine Learning System? - AI Alignment Forum.” AI Alignment Forum. https://www.alignmentforum.org/posts/FDJnZt8Ks2djouQTZ/how-do-we-become-confident-in-the-safety-of-a-machine [AF · GW].

Knutsson, Simon. 2017. “Reply to Shulman’s ‘Are Pain and Pleasure Equally Energy-Efficient?’” http://www.simonknutsson.com/reply-to-shulmans-are-pain-and-pleasure-equally-energy-efficient/.

MacAskill, William. Forthcoming (2022). What We Owe the Future: A Million-Year View. New York: Basic Books.

Matheny, Jason G. 2007. “Reducing the Risk of Human Extinction.” Risk Analysis 27 (5): 1335–44. https://doi.org/10.1111/j.1539-6924.2007.00960.x.

Moynihan, Thomas. 2020. X-Risk: How Humanity Discovered Its Own Extinction. Falmouth: Urbanomic.

Ord, Toby. 2020. The Precipice: Existential Risk and the Future of Humanity. New York: Hachette Books.

Parfit, Derek. 2017. On What Matters: Volume Three. Oxford: Oxford University Press.

Pinker, Steven. 2012. The Better Angels of Our Nature. New York: Penguin Books.

———. 2018. Enlightenment Now. New York: Viking.

Plant, Michael. 2022. “Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/gCDsAj3K5gcZvGgbg/will-faster-economic-growth-make-us-happier-the-relevance-of [EA · GW].

Rowe, Abraham. 2022. “Critiques of EA that I Want to Read.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/n3WwTz4dbktYwNQ2j/critiques-of-ea-that-i-want-to-read [EA · GW].

Russell, Stuart J. 2019. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking.

Schopenhauer, Arthur. 2008 [1818]. The World as Will and Representation. New York: Routledge.

Shulman, Carl. 2012. “Are Pain and Pleasure Equally Energy-Efficient?” Reflective Disequilibrium. http://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html.

Smith, Tom W., Peter Marsden, Michael Hout, and Jibum Kim. 2022. “General Social Surveys, 1972-2022.” National Opinion Research Center. https://www.norc.org/PDFs/COVID Response Tracking Study/Historic Shift in Americans Happiness Amid Pandemic.pdf.

Tarsney, Christian. 2022. “The Epistemic Challenge to Longtermism.” Global Priorities Institute. https://globalprioritiesinstitute.org/wp-content/uploads/Tarsney-Epistemic-Challenge-to-Longtermism.pdf.

Tegmark, Max. 2017. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Alfred A. Knopf.

Tomasik, Brian. 2006. “On the Seriousness of Suffering.” Essays on Reducing Suffering. https://reducing-suffering.org/on-the-seriousness-of-suffering/.

———. 2011. “Risks of Astronomical Future Suffering.” Foundational Research Institute. https://foundational-research.org/risks-of-astronomical-future-suffering/.

———. 2013a. “The Future of Darwinism.” Essays on Reducing Suffering. https://reducing-suffering.org/the-future-of-darwinism/.

———. 2013b. “Values Spreading Is Often More Important than Extinction Risk.” Essays on Reducing Suffering. https://reducing-suffering.org/values-spreading-often-important-extinction-risk/.

———. 2014. “Why the Modesty Argument for Moral Realism Fails.” Essays on Reducing Suffering. https://reducing-suffering.org/why-the-modesty-argument-for-moral-realism-fails/.

———. 2015. “Artificial Intelligence and Its Implications for Future Suffering.” Center on Long-Term Risk. https://longtermrisk.org/artificial-intelligence-and-its-implications-for-future-suffering/.

———. 2017. “Will Future Civilization Eventually Achieve Goal Preservation?” Essays on Reducing Suffering. https://reducing-suffering.org/will-future-civilization-eventually-achieve-goal-preservation/.

Vinding, Magnus. 2020. Suffering-Focused Ethics: Defense and Implications. Ratio Ethica.

West, Ben. 2017. “An Argument for Why the Future May Be Good.” Effective Altruism Forum. https://forum.effectivealtruism.org/posts/kNKpyf4WWdKehgvRt/an-argument-for-why-the-future-may-be-good [EA · GW].

Wolf, Clark. 1997. “Person-Affecting Utilitarianism and Population Policy; or, Sissy Jupe’s Theory of Social Choice.” In Contingent Future Persons, edited by Nick Fotion and Jan C. Heller. Dordrecht: Springer Dordrecht. https://doi.org/10.1007/978-94-011-5566-3_9.

Yudkowsky, Eliezer. 2004. “Coherent Extrapolated Volition.” The Singularity Institute. https://intelligence.org/files/CEV.pdf.

———. 2007. “The Hidden Complexity of Wishes.” LessWrong. https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes [LW · GW].

1. ^

For the sake of brevity, while I have my own views of moral value and disvalue, I don’t tie this essay to any particular view (e.g., utilitarianism). For example, it can include subjective goods (valuable for a person) and objective goods (valuable regardless of people), and it can be understood as estimates or direct observation of realist good (stance-independent) or anti-realist good (stance-dependent). Some may also have moral aims aside from maximizing expected “value” per se, at least for certain senses of “expected” and “value.” There is a substantial philosophical literature on such topics that I will not wade into, and I believe such non-value-based arguments can be mapped onto value-based arguments with minimal loss (e.g., not having a duty to make happy people can be mapped onto there being no value in making happy people).

2. ^

Both population risks and quality risks can be existential risks [? · GW]—though longtermist EAs have usually defaulted to a focus on population risks, particularly extinction risks.

3. ^

For the sake of brevity, I analyze human survival and interstellar colonization together under the label “human expansion.” I gloss over possible futures in which humanity survives but does not colonize space [? · GW].

4. ^

For example, the portion of historical progress made through market mechanisms is split among Historical Progress insofar as this is a large historical trend, Value Through Intent insofar as humans intentionally progressed in this way, Value Through Evolution insofar as selection increased the prevalence of these mechanisms, and Reasoned Cooperation insofar as the intentional change was through reasoned cooperation. How is this splitting calculated? I punt to future work, but in general, I mean some sort of causal attribution measure. For example, if I grow an apple tree that is caused by both rain and soil nutrients, then I would assign more causal force to rain if and only if reducing rain by one standard deviation would inhibit growth more than reducing soil nutrients by one standard deviation. Related measures include Shapley values and LIME.
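The one-standard-deviation attribution measure described in this footnote can be sketched in a few lines of Python. The growth model and all of the numbers here are hypothetical placeholders, purely for illustration of the comparison:

```python
def growth(rain, nutrients):
    # Hypothetical linear growth model, for illustration only.
    return 3 * rain + 1 * nutrients

rain_mean, rain_sd = 10, 2
nutrients_mean, nutrients_sd = 5, 1

baseline = growth(rain_mean, nutrients_mean)

# Causal force of each factor: how much growth drops when that
# factor alone is reduced by one standard deviation.
rain_force = baseline - growth(rain_mean - rain_sd, nutrients_mean)
nutrient_force = baseline - growth(rain_mean, nutrients_mean - nutrients_sd)

# Here rain is assigned more causal force, since reducing rain by
# one standard deviation inhibits growth more (6 > 1).
```

In this toy model the comparison is trivial, but the same perturb-and-compare structure underlies the more principled attribution measures mentioned above, such as Shapley values and LIME.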

5. ^

I do not provide specific explanations for the weights in the spreadsheet because they are meant as intuitive, subjective estimates of the linear weight of the argument as laid out in the description column. As discussed in the “Future Research on the EV of Human Expansion” subsection, unpacking these weights into probability distributions and back-of-the-envelope estimates is a promising direction for better estimating the EV of human expansion. The evaluations rely on a wide range of empirical, conceptual, and intuitive evidence. These numbers should be taken with many grains of salt, but as the “superforecasting” literature evidences, it can be useful to quantify seemingly hard-to-quantify questions. The weights in this table are meant as linear, and the linear sum is -7. There are many approaches we could take to aggregating such evidence, reasoning, and intuitions; we could avoid quantification entirely and take the gestalt of these arguments. If the weights are instead taken as powers of 2 (e.g., 0 stays 0, 1 becomes 2^1 = 2, 10 becomes 2^10 = 1,024), reflecting the prior that EA arguments tend to vary in weight by doubling rather than by linear scaling, then the mean is -410. Again, these are just two of the many possible ways to aggregate arguments on this topic. Also, for methodological clarity at the risk of droning, I assign weights consistently across arguments (e.g., 2 arguments of weight +2 carry the same evidential weight as 4 arguments of weight +1), though other assignment methods are reasonable. Likewise, other divisions of the arguments (i.e., other numbers of rows in the table) are reasonable and would make no difference to my own additive total, though they could change the exponential total and some other aggregations.
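The two aggregation schemes in this footnote — a linear sum and a doubling-scale mean — can be sketched as follows. The weights below are hypothetical placeholders, not the actual spreadsheet values:

```python
def doubling_scale(w):
    # Map a linear weight onto a doubling scale, preserving sign:
    # 0 -> 0, 1 -> 2, -3 -> -8, 10 -> 1024.
    if w == 0:
        return 0
    sign = 1 if w > 0 else -1
    return sign * 2 ** abs(w)

# Hypothetical argument weights (positive = argues for positive EV).
weights = [2, 1, 1, -2, -3, -1]

linear_sum = sum(weights)                      # simple additive total
doubling_mean = sum(doubling_scale(w) for w in weights) / len(weights)
```

Note how the doubling scale lets a single strong argument (e.g., weight -3, mapped to -8) dominate several weak ones, which is exactly why the choice of aggregation scheme can flip the sign or magnitude of the overall estimate.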

6. ^

While this is a very contentious view among some in EA, I should note that I’m not persuaded by, and I don’t account for, moral uncertainty [? · GW] because I don’t think a “Discoverable Moral Reality” is plausible, and I doubt I would be persuaded to act in accordance with it if it did exist (e.g., to cause suffering if suffering were stance-independently good)—though it is unclear what it would even mean for a vague, stance-independent phenomenon to exist (Anthis 2022). Moreover, I’m not compelled by arguments to account for any sort of anti-realist moral uncertainty, views which are arguably better not even described as “uncertainty” (e.g., weighting my future self’s morals, such as after a personality-altering brain injury or taking rationality- and intelligence-increasing nootropics; across different moral frameworks, such as in Bostrom’s (2009) “moral parliament”). Of course, I still account for moral cooperation [? · GW] and standard empirical uncertainty [? · GW].

7. ^

There is much more to say about how Aumann’s Agreement Theorem obtains in the real world than what I have room for here. For example, Andrew Critch states [LW · GW] that the “common priors” assumption “seems extremely unrealistic for the real world.” I’m not sure if I disagree with this, but when I describe Aumann updating, I’m not referring to a specific prior-to-posterior Bayesian update; I’m referring to the equal treatment of all the evidence going into my belief with all the evidence going into my interlocutor’s belief. If nothing else, this can be viewed as an aggregation of evidence in which each agent is still left with aggregating their evidence and prior, but I don’t like approaching such questions with a bright line between prior and posterior except in a specific prior-to-posterior Bayesian update (e.g., You believe the sky is blue but then walk outside one day and see it looks red; how should this change your belief?).

comment by John G. Halstead (Halstead) · 2022-07-01T08:34:44.790Z · EA(p) · GW(p)

My impression was that due to multiple accusations of sexual harassment, Jacy Reese Anthis was stepping back from the community. When and why did this stop?

He was evicted from Brown University in 2012 for sexual harassment (as discussed here).

And he admitted to several instances of sexual harassment (as discussed here [EA · GW]).

He also lied on his website about being a founder of effective altruism.

comment by Julia_Wise · 2022-07-04T19:59:29.934Z · EA(p) · GW(p)

Some notes from CEA:

• Several people have asked me recently whether Jacy is allowed to post on the Forum. He was never banned from the Forum, although CEA told him he would not be allowed in certain CEA-supported events and spaces.
• Three years ago, CEA thought a lot about how to cut ties with a person while not totally losing the positive impact they can have. Our take was that it’s still good to be able to read and benefit from someone’s research, even if not interacting with them in other ways.
• Someone's presence on the Forum or in most community spaces doesn’t mean they’ve been particularly vetted.
• This kind of situation is especially difficult when the full information can’t be public. I’ve heard both from people worried that EA spaces are too unwilling to ban people who make the culture worse, and from people worried that EA spaces are too willing to ban people without good enough reasons or evidence. These are both important concerns.
• We’re trying to balance fairness, safety, transparency, and practical considerations. We won’t always get that balance right. You can always pass on feedback to me at julia.wise@centreforeffectivealtruism.org, to my manager Nicole at nicole.ross@centreforeffectivealtruism.org, or via our anonymous contact form.
Replies from: MichaelStJules, Guy Raveh
comment by MichaelStJules · 2022-07-04T20:23:08.032Z · EA(p) · GW(p)

Is there more information you can share without risking the anonymity of the complainants or victims? E.g.,

1. How many complainants/witnesses were there?
2. How many separate concerning instances were there?
3. Did all of the complaints concern behaviour through text/messaging (or calls), or were some in person, too?
4. Was the issue that he made inappropriate initial advances, or that he continued to make advances after the individuals showed no interest in the initial advance? Both? Or something else?
Replies from: Julia_Wise
comment by Julia_Wise · 2022-07-05T13:47:05.482Z · EA(p) · GW(p)

I can understand why people want more info. Jacy and I agreed three years ago what each of us would say publicly about this, and I think it would be difficult and not particularly helpful to revisit the specifics now.

If anyone is making a decision where more info would be helpful, for example you’re deciding whether to have him at an event or you’re running a community space and want to think about good policies in general, please feel free to contact me and I’ll do what I can to help you make a good decision.

Replies from: anonymous_ea, Guy Raveh
comment by anonymous_ea · 2022-07-05T16:48:09.065Z · EA(p) · GW(p)

For convenience, this is CEA's statement from three years ago [EA · GW]:

We approached Jacy about our concerns about his behavior after receiving reports from several parties about concerns over several time periods, and we discussed this public statement with him. We have not been able to discuss details of most of these concerns in order to protect the confidentiality of the people who raised them, but we find the reports credible and concerning. It’s very important to CEA that EA be a community where people are treated with fairness and respect. If you’ve experienced problems in the EA community, we want to help. Julia Wise serves as a contact person [EA · GW] for the community, and you can always bring concerns to her confidentially.

By my reading, the information about the reports contained in this is:

• CEA received reports from several parties about concerns over Jacy's behavior over several time periods
• CEA found the reports 'credible and concerning'
• CEA cannot discuss details of most of these concerns because the people who raised them want to protect their confidentiality
• It also implies that Jacy did not treat people with fairness and respect in the reported incidents
• 'It’s very important to CEA that EA be a community where people are treated with fairness and respect' - why say this unless it's applicable to this case?

Julia also said [EA(p) · GW(p)] in a comment at the time that the reports were from members of the animal advocacy and EA communities, and CEA decided to approach Jacy primarily because of these rather than the Brown case:

The accusation of sexual misconduct at Brown is one of the things that worried us at CEA. But we approached Jacy primarily out of concern about other more recent reports from members of the animal advocacy and EA communities.

comment by Guy Raveh · 2022-07-05T18:00:05.120Z · EA(p) · GW(p)

Thanks for engaging in this discussion Julia. I'm writing replies that are a bit harsh, but I recognize that I'm likely missing some information about these things, which may even be public and I just don't know where to look for it yet.

Jacy and I agreed three years ago what each of us would say publicly about this, and I think it would be difficult and not particularly helpful to revisit the specifics now.

However, this sounds... not good, as if the decision on current action is based on Jacy's interests and on honoring a deal with him. I could think of a few possible good reasons for more information to be bad, e.g. that the victims prefer nothing more is said, or that it would harm CEA's ability to act in future cases. But readers can only speculate on what the real reason is and whether they agree with it.

Both here and regarding what I asked in my other comment [EA(p) · GW(p)], the reasoning is very opaque. This is a problem, because it means there's no way to scrutinize the decisions, or to know what to expect from the current situation. This is not only important for community organizers, but also for ordinary members of the community.

For example, it's not clear to me if CEA has relevant written-out policies regarding this, and what they are. Or who can check if they're followed, and how.

Replies from: Khorton
comment by Kirsten (Khorton) · 2022-07-05T19:01:32.685Z · EA(p) · GW(p)

I would expect CEA's trustees to be scrutinizing how decisions like this are made.

Replies from: Guy Raveh
comment by Guy Raveh · 2022-07-05T19:39:15.056Z · EA(p) · GW(p)

I have a general objection to this, but I want to avoid getting entirely off topic. So I'll just say, this seems to me to only shift the problem further away from the people affected.

comment by Guy Raveh · 2022-07-04T21:45:09.674Z · EA(p) · GW(p)

Three years ago, CEA thought a lot about how to cut ties with a person while not totally losing the positive impact they can have. Our take was that it’s still good to be able to read and benefit from someone’s research, even if not interacting with them in other ways.

For example, does being able to read their research have to mean giving them a stage that will help them get a higher status in the community? How did you balance the possible positive impact of that person with the negative impact that having him around might have on his victims (or on their work, or on whether they even then choose to leave the forum themselves)?

comment by Kirsten (Khorton) · 2022-07-01T10:05:10.974Z · EA(p) · GW(p)

I've also been surprised to see Jacy engaging publicly with the EA community again recently, without any public communication about what's changed.

comment by Lizka · 2022-07-04T20:02:57.953Z · EA(p) · GW(p)

A comment from the moderation team:

This topic is extremely difficult to discuss publicly in a productive way. First, a lot of information isn’t available to everyone — and can’t be made available — so there’s a lot of guesswork involved. Second, there are a number of reasons to be very careful; we want community spaces to be safe for everyone, and we want to make sure that issues with safety can be brought up, but we also require a high level of civility on this Forum.

We ask you to keep this in mind if you decide to contribute to this thread. If you’re not sure that you will contribute something useful, you might want to refrain from engaging. Also, please note that you can get in touch with the Community Health team at CEA if you’d like to bring up a specific concern in a less public way.

comment by BrownHairedEevee (evelynciara) · 2022-07-03T05:53:05.546Z · EA(p) · GW(p)

I downvoted this comment. While I think this discussion is important to have, I do not think that a post about longtermism should be turned into a referendum on Jacy's conduct. I think it would be better to have this discussion on a separate post or the open thread [EA · GW].

comment by Jeff Kaufman (Jeff_Kaufman) · 2022-07-03T11:58:12.354Z · EA(p) · GW(p)

We don't have any centralized or formal way of kicking people out of EA. Instead, the closest we have, in cases where someone has done things that are especially egregious, is making sure that everyone who interacts with them is aware. Summarizing the situation in the comments here, on Jacy's first EA forum post in 3 years (Apology, 2019-03 [EA · GW]), accomplishes that much more than posting in the open thread.

This is a threaded discussion, so other aspects of the post are still open to anyone interested. Personally, I don't think Jacy should be in the EA movement and won't be engaging in any of the threads below.

Replies from: tseyipfai@gmail.com
comment by Fai (tseyipfai@gmail.com) · 2022-07-03T16:37:23.272Z · EA(p) · GW(p)

But what about the impact on the topic itself? Having the discussion heavily directed to a largely irrelevant topic, and affecting its down/upvoting situation, doesn't do the original topic justice. And this topic could potentially be very important for the long-term future.

Replies from: Jeff_Kaufman, Guy Raveh
comment by Jeff Kaufman (Jeff_Kaufman) · 2022-07-03T17:06:10.526Z · EA(p) · GW(p)

I think that's a strong reason for people other than Jacy to work on this topic.

Replies from: tseyipfai@gmail.com
comment by Fai (tseyipfai@gmail.com) · 2022-07-03T17:16:16.383Z · EA(p) · GW(p)

I think that's a strong reason for people other than Jacy to work on this topic.

Watching the dynamic here I suspect this might likely be true. But I would still like to point out that there should be a norm about how these situations should be handled. This likely won't be the last EA forum post that goes this way.

To be honest I am deeply disappointed and very worried that this post has gone this way. I admit that I might be feeling so because I am very sympathetic to the key views described in this post. But I think one might be able to imagine how they feel if certain monumental posts that are crucial to the causes/worldviews they care dearly about, went this way.

comment by Guy Raveh · 2022-07-03T17:15:56.434Z · EA(p) · GW(p)

Having the discussion heavily directed to a largely irrelevant topic

I think this topic is more relevant than the original one. Ideas, however important to the long-term future, can surface more than once. The stability of the community is also important for the long-term future, but it's probably easier to break it than to bury an idea.

affecting its down/upvoting situation

I haven't voted on the post either way despite agreeing that the writer should probably not be here. I don't know about anyone else, but I suspect the average person here is even less prone than me to downvote for reasons unrelated to content.

Replies from: tseyipfai@gmail.com
comment by Fai (tseyipfai@gmail.com) · 2022-07-03T20:15:09.894Z · EA(p) · GW(p)

I think this topic is more relevant than the original one.

Relevant with respect to what? For me, the most sensible standard to use here seems to be "whether it is relevant to the original topic of the post (the thesis being brought up, or its antithesis)".  Yes, the topic of personal behavior is relevant to EA's stability and therefore how much good we can do, or even the long-term future. But considering that there are other ways of letting people know what is being communicated here, such as starting a new post, I don't think we should use this criterion of relevance.

Ideas, however important to the long-term future, can surface more than once.

That's true, logically speaking. But that's also logically true for EA/EA-like communities. In other words, it's always "possible" that if this EA breaks, there "could be" another similar one that will be formed again. But I am guessing not many people would like to take the bet based on the "come again argument". Then what is our reason for being willing to take a similar bet with this potentially important - I believe crucial - topic (or just any topic)?

And again, the fact that there are other ways to bring up the topic of personal behavior makes it even less reasonable to use this argument as a justification here.  In other words, there seem to be way better alternatives to "reduce X-risk to EA" than commenting patterns like it's happening here, that might risk "forcing a topic away from the surface".

And we cannot say that if something "can surface more than once", then we should expect it to also "surface before it is too late", or "surface with the same influence". Timing matters, and so do the "comment sections"  of all historical discussions on a topic.

There are also some even more "down-to-earth" issues, such as the future writers on this topic experiencing difficulties of many sorts. For example, seeing this post went this way, should the writer of a next similar post (TBH, I have long thought of writing a similar post to this) just pretend that this post doesn't exist? This seems to be bad intellectual practice. But if they do cite this post, readers will see the comment section here, and one might worry that readers will be affected. More specifically, what if Jacy got this post exactly spot on? Should people who hold exactly the same view just pretend this post doesn't exist and post almost exactly the same thing?

I haven't voted on the post either way despite agreeing that the writer should probably not be here.

I am glad you tried to be fair to the topic. But just like to point out that "not voting either way" isn't absolute proof that you haven't been affected - you could have voted positively if not for the extra discussion.

I don't know about anyone else, but I suspect the average person here is even less prone than me to downvote for reasons unrelated to content.

I have to say I am much more pessimistic than you on this. I think it's psychologically quite natural that with such comments in the comment section, one might find it hard to concentrate through such a long piece, especially if one takes a stance against the writers' behavior.

I am mindful of the fact that I am contributing to what I am suspecting to be bad practice here, so I am not going to comment on this direction further than this.

Replies from: Guy Raveh
comment by Guy Raveh · 2022-07-03T20:59:18.560Z · EA(p) · GW(p)

Thanks for the detailed reply. I think you raised good points and I'll only comment on some of them.

Mainly, I think raising the issue somewhere else wouldn't be nearly as effective, both in terms of directly engaging Jacy and of making his readers aware.

I am glad you tried to be fair to the topic. But just like to point out that "not voting either way" isn't absolute proof that you haven't been affected - you could have voted positively if not for the extra discussion.

I noticed the post much before John made his comment. I didn't read it thoroughly or vote then, so I haven't changed my decision - but yes, I guess I'd be very reluctant to upvote now. So my analysis of myself wasn't entirely right.

I am mindful of the fact that I am contributing to what I am suspecting to be bad practice here, so I am not going to comment on this direction further than this.

Hmm. Should I have not replied then? ... I considered it, but eventually decided some parts of the reply were important enough.

comment by John G. Halstead (Halstead) · 2022-07-03T13:44:02.514Z · EA(p) · GW(p)

I think it is a good place to have the discussion. Apparently someone who has been the subject of numerous sexual harassment allegations throughout his life is turning up at EA events again. This is very concerning.

Replies from: tseyipfai@gmail.com
comment by Fai (tseyipfai@gmail.com) · 2022-07-03T20:22:58.311Z · EA(p) · GW(p)

But wouldn't a new post on this topic serve the same purpose of expressing and discussing this concern, without having the effects of affecting this topic?

comment by DonyChristie · 2022-07-01T22:40:26.927Z · EA(p) · GW(p)

I recommend a mediator be hired to work with Jacy and whichever stakeholders are relevant (speaking broadly). This will be more productive than a he-said she-said forum discussion that is very emotionally toxic for many bystanders.

comment by Guy Raveh · 2022-07-02T16:46:21.734Z · EA(p) · GW(p)

Who do you think the relevant stakeholders are?

It seems to me that "having a safe community" is something that's relevant to the entire community.

I don't think long, toxic argument threads are necessary as a decision seems to have been made 3 years ago. The only question is what's changed. So I'm hoping we see some comment from CEA staff on the matter.

comment by John G. Halstead (Halstead) · 2022-07-03T13:37:58.347Z · EA(p) · GW(p)

I imagine Jacy turning up to EA events is more toxic for the women that Jacy has harassed and for the women he might harass in the future. There is no indication that he has learned his lesson. He is totally incapable of taking moral responsibility for anything.

This is not he-said she-said. I have only stated known facts so far and I am surprised to see people dispute them. The guy has been kicked out of university for sexual misconduct and banned from EA events for sexual misconduct. He should not be welcome in the community.

Replies from: Davidmanheim
comment by Davidmanheim · 2022-07-04T14:33:52.357Z · EA(p) · GW(p)

I'm confused that you seem to claim strong evidence on the basis of a variety of things that seem like weak evidence to me. While I am sure details should not be provided, can you clarify whether you have non-public information about what happened post 2016 that contradicts what Kelly and Jacy have said publicly about it?

comment by Guy Raveh · 2022-07-01T10:26:42.143Z · EA(p) · GW(p)

Thanks for writing this.

As everyone here knows, there has been an influx of people into EA and the forum in the last couple years, and it seems probable that most of the people here (including me) wouldn't have known about this if not for this reminder.

Replies from: Yitz
comment by Yitz · 2022-07-05T23:19:14.935Z · EA(p) · GW(p)

I was personally unaware of the situation until reading this comment thread, so I can confirm.

comment by John G. Halstead (Halstead) · 2022-07-01T09:57:37.212Z · EA(p) · GW(p)

Jacy Reese claims that the allegations discussed in the Forum post centre on 'clumsy online flirting'. We don't really know what the allegations are, but CEA:

• Severed ties with the Sentience Institute
• Stopped being their fiscal sponsor
• Banned Jacy from all of their events
• Made him write an apology post

We have zero reason to believe Jacy about the substance of the allegations, given his documented history of lying and incentives to lie in the case.

Replies from: Harrison D
comment by Harrison Durland (Harrison D) · 2022-07-01T15:21:47.277Z · EA(p) · GW(p)

I don’t think (or, you have not convinced me that) it’s appropriate to use CEA’s actions as strong evidence against Jacy. There are many obvious pragmatic justifications to do so that are only slightly related to the factual basis of the allegations—i.e., even if the allegations are unsubstantiated, the safest option for a large organization like CEA would be to cut ties with him regardless. Furthermore, saying someone has “incentives to lie” about their own defense also feels inappropriate (with some exceptions/caveats), since that basically applies to almost every situation where someone has been accused. The main thing that you mentioned which seems relevant is his “documented history of lying,” which (I say this in a neutral rather than accusatory way) I haven’t yet seen documentation of.

Ultimately, these accusations are concerning, but I’m also quite concerned by the idea of throwing around seemingly dubious arguments in service of vilifying someone.

comment by John G. Halstead (Halstead) · 2022-07-01T16:16:45.655Z · EA(p) · GW(p)

It is bizarre to say that the aforementioned evidence is not strong evidence against Jacy. He was thrown out of university for sexual misconduct. CEA then completely disassociated itself from him because of sexual misconduct several years later. Multiple people at multiple different times in his life have accused him of sexual misconduct.

I think we are agreed that he has incentives to lie. He has also shown that he is a liar.

comment by John G. Halstead (Halstead) · 2022-07-01T16:12:56.091Z · EA(p) · GW(p)

on his history of lying. https://nonprofitchronicles.com/2019/04/02/the-peculiar-metoo-story-of-animal-activist-jacy-reese/

Replies from: Harrison D
comment by Harrison Durland (Harrison D) · 2022-07-01T17:14:27.122Z · EA(p) · GW(p)

Please provide specific quotes; I spent a few minutes reading the first part of that without seeing what you were referring to.

Replies from: Harrison D
comment by Harrison Durland (Harrison D) · 2022-07-01T17:25:30.075Z · EA(p) · GW(p)

If you’re referring to the same point about his claim to be a cofounder, I did just see that. However, unless I see some additional and/or more-egregious quotes from Jacy, I have a fairly negative evaluation of your accusation. Perhaps his claim was a bit of an exaggeration combined with being easily misinterpreted, but it seems he has walked it back. Ultimately, this really does not qualify in my mind as “a history of lying.”

comment by John G. Halstead (Halstead) · 2022-07-03T13:42:58.315Z · EA(p) · GW(p)

You could also read the entirety of the research he produced for ACE, which it would be fair to describe as 'comprised entirely of bullshit'.

To stress, it is completely ludicrous for him to claim that he is a co-founder of effective altruism, unless he interpreted the claim in a sense that would make it equally true of people like Sasha Cooper or Pablo Stafforini. They would never say that they are founders of effective altruism because it is not true and they are not sociopaths (like Jacy is).

Replies from: Lizka
comment by Lizka · 2022-07-04T20:04:30.116Z · EA(p) · GW(p)

comment by John G. Halstead (Halstead) · 2022-07-03T13:39:16.919Z · EA(p) · GW(p)

I'm not vilifying the guy. His actions have done that for him and I have just described his actions.

comment by John G. Halstead (Halstead) · 2022-07-01T16:13:37.128Z · EA(p) · GW(p)

on his history of lying. https://nonprofitchronicles.com/2019/04/02/the-peculiar-metoo-story-of-animal-activist-jacy-reese/

comment by sapphire (deluks917) · 2022-07-04T20:06:54.245Z · EA(p) · GW(p)

In most cases where I am actually familiar with the facts CEA has behaved very poorly. They have both been way too harsh on good actors and failed to take sufficient action against bad actors (e.g. Kathy Forth). They did handle some very obvious cases reasonably though (Diego). I don't claim I would do a way better job but I don't trust CEA to make these judgments.

comment by Timothy Chan · 2022-07-01T15:57:39.425Z · EA(p) · GW(p)

Could you

1. Quote where in the linked text or elsewhere 'he admitted to several instances of sexual harassment'?
2. As someone asked in another comment, 'provide links or specific quotes regarding his claim of being a founder of EA?'

comment by John G. Halstead (Halstead) · 2022-07-01T16:10:57.370Z · EA(p) · GW(p)

1 - CEA says that the complaints relate to inappropriate behaviour in the sexual realm which they found 'credible and concerning' and which he pretends to apologise for in the apology post, presumably to avoid a legal battle

2- https://nonprofitchronicles.com/2019/04/02/the-peculiar-metoo-story-of-animal-activist-jacy-reese/

comment by Timothy Chan · 2022-07-01T16:39:18.980Z · EA(p) · GW(p)

1 - CEA says that the complaints relate to inappropriate behaviour in the sexual realm which they found 'credible and concerning' and which he pretends to apologise for in the apology post, presumably to avoid a legal battle

I still don't see where he 'admitted to several instances of sexual harassment' as you've claimed.

comment by John G. Halstead (Halstead) · 2022-07-01T16:56:03.226Z · EA(p) · GW(p)

The post is called 'apology'; an apology usually means you are admitting to wrongdoing. In this case, the wrongdoing was relating to sexual conduct. What do you think it was an apology for?

Replies from: Timothy Chan
comment by Timothy Chan · 2022-07-01T22:04:07.811Z · EA(p) · GW(p)

Then I think it'd be more accurate if you write 'he admitted to several instances of what I consider to be sexual harassment'.

At the moment, your claim that 'he admitted to several instances of sexual harassment' seems very misleading. You haven't provided evidence that supports the claim that he confessed to committing such crimes.

EDIT: I'm approaching this issue with much less lived experience than some of the other commenters here. There appear to be more individuals than just John who are confident in the allegations, so perhaps 'what [John considers] to be sexual harassment' is not enough, and instead 'what [X, Y and so on...] consider to be sexual harassment' is better. (From what I can tell, the apology [EA · GW] post also features some comments that push back on that confidence, to varying extents, and it may be worth mentioning that too. I'm not following this issue extensively and I don't know if there have been any updates since that post.) I still think John's comment, as it stands, ('he admitted to several instances of sexual harassment') is misleading and harmful to community norms. I think people should point out bad epistemics despite possible social pressures to do otherwise.

Replies from: JamesOz
comment by James Ozden (JamesOz) · 2022-07-02T10:58:46.299Z · EA(p) · GW(p)

Then I think it'd be more accurate if you write 'he admitted to several instances of what I consider to be sexual harassment'.

I'm slightly confused about this. Do you believe that Jacy did commit several instances of sexual harassment but he just hasn't admitted it? Or you don't believe Jacy has committed sexual harassment at all?

If the latter: These instances aren't what John considers sexual harassment, it's what several women (over 5 at least from my reading of the Apology post [EA · GW]) consider to be sexual harassment. If this reasonably large number of women didn't think it was sexual harassment, they wouldn't have complained to CEA or others within the community. Therefore I think we can be somewhat confident that Jacy has made sexual advances that a non-negligible number of women consider to be sexual harassment. Subsequently, as John stated, he made an apology post saying sorry for these instances of sexual harassment (of course he would never put it like that, but just because you don't specifically say "sorry for sexual harassment" it doesn't mean it doesn't happen). Basically, we have several independent pieces of evidence of Jacy being involved in sexual harassment (reports from 5+ women, being distanced from CEA, being expelled from Brown university) with the only piece of evidence pointing against this being Jacy's own comments, which is of course biased. Given this, I think a claim that Jacy hasn't been involved in sexual harassment seems wrong.

If the former: I think this is quite pedantic and it's irrelevant whether Jacy admits to the bad behaviour, if we have enough evidence to be confident it happened.

Replies from: Timothy Chan
comment by Timothy Chan · 2022-07-02T14:35:34.912Z · EA(p) · GW(p)

Honestly, I'm new - I only just became aware of all this. I think I haven't had enough time to make a judgment myself.

But what I do know is that John's initial claim that Jacy 'admitted to several instances of sexual harassment' seemed misleading, and I decided to point that out because there was a lack of people who did so, which seems harmful to community norms.

comment by James Ozden (JamesOz) · 2022-07-03T19:48:24.447Z · EA(p) · GW(p)

First, thanks for being honest and saying you're not particularly well-informed on this, that definitely helped me approach your comment with less judgement. I've also seen that you edited your initial comment so thanks for that.

I do however still have some concerns about the comments you made. I agree that the claim "Jacy admitted several instances of sexual harassment" isn't very easily verifiable (e.g. the discussion on what the "apology" is for). However, I think that this is largely irrelevant and begins a semantic discussion that is totally missing the original point, and generally missing the forest for the trees.

The main point John was making is that Jacy has been accused and punished (maybe not the right word?) for several instances of sexual harassment over his career. In my opinion it is almost totally irrelevant whether Jacy himself admits this, as I (and I think many others) think there is very reasonable evidence to believe these instances of sexual harassment happened. Launching into a semantics discussion about whether Jacy admitted it seems to detract from the key point in unhelpful ways, although I agree that John's comment might have been better if he had totally excluded the line "Jacy admitted several instances of sexual harassment". Again, I agree there are some epistemic benefits to calling out statements that don't seem correct, but I think there are also some large downsides to the way you did this in this instance. [edited last sentence for clarity].

My concern with the suggested rephrasing,

Then I think it'd be more accurate if you write 'he admitted to several instances of what I consider to be sexual harassment'.

is that it brings about an element of questioning of exactly how much Jacy's acts constituted 'sexual harassment'. Women are often accused of making things up, exaggerating claims or otherwise reporting "locker room banter" or "harmless jokes" as sexual harassment, and I felt your comments were adding to this. This feels particularly worrying within the EA movement, which is already only 29% female [EA · GW], as it could show women that EA spaces are not safe for women due to lack of care around sexual harassment issues.

For context, I feel personally strongly about this as I've heard from several close women friends of mine who have attended EA events or otherwise met male EAs who have been misogynistic towards them, in ways that have deterred them from becoming more involved in EA. In short, I think EA spaces are already challenging for women to feel comfortable in, without us making comments that seem to trivialise issues of sexual harassment.

I think the fact that you also said "I think I haven't had enough time to make a judgment myself." adds to this. I don't think it requires a huge amount of effort to update towards 'Jacy very likely committed instances of sexual harassment' based on several independent reports of sexual harassment, expulsion from university, his apology, etc. To not update towards this after even a short consideration again implies to me that you're doubting whether true sexual harassment even occurred (e.g. ignoring reports from several (over 5?) women for comments by 1 man), which would add to the notion of EA spaces not being safe for women.

Sorry for the slight rant but these are issues that my friends have been affected by within EA spaces, and something I feel strongly about.

Replies from: Khorton, Timothy Chan
comment by Kirsten (Khorton) · 2022-07-03T20:04:01.800Z · EA(p) · GW(p)

I agree with this comment. I find the implication that Jacy's views deserve equal or greater weight than the testimony of multiple women troubling.

Replies from: JamesOz
comment by James Ozden (JamesOz) · 2022-07-03T22:57:08.401Z · EA(p) · GW(p)

I'm confused why this is getting downvoted, can someone explain?

comment by Timothy Chan · 2022-07-04T19:09:02.824Z · EA(p) · GW(p)

Thank you for being more charitable after reading my comment, and for your effort in a detailed response.

Again, I agree there are some epistemic benefits to calling out statements that don't seem correct, but I think there are also some large downsides to this in this case.

I think I still prefer to challenge a claim that quite blatantly (but probably unintentionally) misleads people into thinking that someone confessed to committing a crime, a claim placed in the highest upvoted comment on a post receiving a lot of attention. I think we should be suspicious of thinking ‘let bad arguments persist because criticizing them would be bad’.

I lean towards disagreeing with your claim that it’s net negative overall to point out that inaccuracy but caveat that I’m not certain of how confident I should be in that position.

One reason I think there are positives is that there are indeed cases in which allegations don’t hold up, and innocent people get hurt (note I’m not saying that this necessarily applies to this case, and from what I can tell it seems to constitute a low percentage of cases). It makes sense to consider the interests of those accused but innocent, in addition to the interests of sexual harassment victims and potential victims.

I think ensuring we aren’t overzealous requires us to uphold certain norms, even when it’s challenging to do so socially. For context, I’m not in the Anglosphere at the moment - but I do see some trends there involving strong emotions and accompanying criticisms that do worry me, and I don’t think this community should be overly concerned with potential criticism so as to not speak up to uphold those norms.

I had to make several comments following up on the misleading statement because John didn’t deliver on the statement, nor take note and rephrase his writing to be less misleading. Unfortunately, he still hasn’t done so.

is that it brings about an element of questioning

On how I’ve phrased a possible rephrasing (and the updated possible rephrasing in the edited part of the comment) of John’s statement, to reduce the misleadingness, I wasn’t as aware as you were of your concerns and didn’t know it has risks of making people feel questioned/not taken seriously when I wrote that. Your concerns make sense and I’ll keep them in mind. But I also haven’t made up my mind on the extent to which it’s important to be mindful of how I should present what I consider truthful statements (i.e. we are the ones deciding on what to make of the available evidence - so we are in fact the ones who 'consider' whether it constitutes sexual harassment) - in order to reduce the risk of such feelings.

I think the fact that you also said "I think I haven't had enough time to make a judgment myself." adds to this... To not update towards this after even a short consideration...

I think we have different understandings of the term ‘judgment’ here. In this quite serious context (which sometimes involves the law), I take ‘judgment’ to mean much more (as in 'pass judgment') than updating beliefs. I didn’t say that the evidence didn’t update my views (actually I think it’ll be absurd if it didn’t), nor did I imply that the views of one accused ‘deserves equal or greater weight’ than the testimony of multiple accusers (as Khorton wrote). That multiple people have made complaints should indeed update us towards thinking that sexual harassment happened.

But again, I take ‘judgment’ to mean much more than ‘updating’. When I said that “I think I haven't had enough time to make a judgment myself”, I meant there wasn’t enough time to make a solid conclusion about these especially troubling allegations (edit: time isn't the only thing you need - it also depends on whether there's sufficient information to analyze). This might not be the approach some people take, but there are huge personal costs at stake for the parties involved, and I don’t want to condemn anyone so quickly. Also, realize that I wrote that “I think I haven't had enough time to make a judgment myself” within one day of learning of the allegations. I think it's reasonable to be cautious of confidence.

Unfortunately, I won’t be able to comment much more. I’m a slow writer and I’m exhausted from having to follow up so much. I only wanted to make that point about John’s comment and get him to follow better practices - but that’s been unsuccessful. I hope our future interactions could be under better circumstances.

Replies from: JamesOz
comment by James Ozden (JamesOz) · 2022-07-05T14:30:07.628Z · EA(p) · GW(p)

Thanks for the reply Timothy, and I totally appreciate you choosing to not engage again as this can be quite time and energy consuming. There's one thing I wasn't clear enough in my original comment which I've now edited which might mean we're not as misaligned as one might think!

Namely, I didn't say (or even necessarily think) that your comment on the truthfulness of John's claim was net negative, as you suggest. I've edited the original comment but in practice what I meant was: I think there's a better way of doing so, without questioning the sexual harassment claims actually made by the women affected in these incidents. So overall I agree it's important to point out claims that are untruthful, but I also think you did this in a way that a) cast doubt on the actual sexual harassment, which IMO very likely occurred, so it is insensitive to suggest otherwise, and b) is damaging to the EA community as a safe place for women.

For reference, this is what I updated my sentence to in the previous comment:

Again, I agree there are some epistemic benefits to calling out statements that don't seem correct, but I think there are also some large downsides to the way you did this in this instance. [edited last sentence only]

comment by John G. Halstead (Halstead) · 2022-07-03T14:39:15.011Z · EA(p) · GW(p)

How would you have described, in plain English, the apology post?

I think it is important to read between the lines of his apology post. Having received numerous complaints, CEA made Jacy write an apology post. He claims not to know what the complaints are about, but tries to give the impression that it is because of saying stuff like "hey cutie" to people. As mentioned below, there must be an awful lot of inept flirting in the community given the social awkwardness of EAs and the gender skew of the community. Despite that, to my knowledge Jacy is the only person who has ever been banned from all EA events for sexual misconduct. This suggests that the allegations are probably worse than Jacy suggests.

Note that CEA cannot reveal the nature of the accusations in order to protect the identity of the complainants.

We then also learn that he was thrown out of university for sexual misconduct in 2012, before the start of MeToo. Someone at Brown at this time told me that no-one was expelled by Brown for sexual misconduct during the whole time they were there. This suggests that the allegations were bad.

comment by John G. Halstead (Halstead) · 2022-07-01T16:19:26.299Z · EA(p) · GW(p)

How else would you define the apology post other than an apology for sexual harassment? I would have thought the debate would be about an appropriate time for him to rejoin the community, not about whether he actually committed sexual harassment - or whether he was unfortunate enough for multiple women to independently accuse him of sexual harassment throughout his life.

comment by Jacy · 2022-07-01T16:01:22.680Z · EA(p) · GW(p)

- I’ve never harassed anyone, and I’ve never stated or implied that I have.  I have apologized for making some people uncomfortable with “coming on too strong” in my online romantic advances. As I've said before in that Apology [EA · GW], I never intended to cause any discomfort, and I’m sorry that I did so. There have, to my knowledge, been no concerns about my behavior since I was made aware of these concerns in mid-2018.

- I didn’t lie on my website. I had (in a few places) described myself as a “co-founder” of EA [Edit: Just for clarity, I think this was only on my website for a few weeks? I think I mentioned it and was called it a few times over the years too, such as when being introduced for a lecture. I co-founded the first dedicated student group network,  helped set up and moderate the first social media discussion groups, and was one of the first volunteers at ACE as  a college student. I always favored a broader-base view of how EA emerged than what many perceived at the time (e.g., more like the founders of a social movement than of a company). Nobody had pushed back against "co-founder" until 2019, and I stopped using the term as soon as there was any pushback.], as I think many who worked to build EA from 2008-2012 could be reasonably described. I’ve stopped using the term because of all the confusion, which I describe a bit in “Some Early History of Effective Altruism.”

- Regarding SI, we were already moving on from CEA’s fiscal sponsorship and donation platform once we got our 501c3 certification in February 2019, so “stopped” and “severed ties” seem misleading.

- CEA did not make me write an apology. We agreed on both that apology document and me not attending CEA events as being the right response to these concerns. I had already written several apologies that were sent privately to various parties without any involvement from CEA.

- There was no discussion of my future posting on the EA Forum, nor to my knowledge any concerns about my behavior on this or other forums.

Otherwise, I have said my piece in the two articles you link, and I don’t plan to leave any more comments in this thread. I appreciate everyone’s thoughtful consideration.

comment by Kirsten (Khorton) · 2022-07-02T09:18:42.507Z · EA(p) · GW(p)

Hi Jacy, you said in your apology "I am also stepping back from the EA community more generally, as I have been planning to since last year in order to focus on my research."

I haven't seen you around since then, so was surprised to see you attend an EA university retreat* and start posting more about EA. Would you describe yourself as stepping back into the EA community now?

Replies from: Jacy
comment by Jacy · 2022-07-02T20:04:41.349Z · EA(p) · GW(p)

Hi Khorton, I wouldn't describe it as stepping back into the community, and I don't plan on doing that, regardless of this issue, unless you consider occasional posts and presentations or socializing with my EA friends as such. This post on the EV of the future was just particularly suited for the EA Forum (e.g., previous posts on it), and it's been 3 years since I published that public apology and have done everything asked of me by the concerned parties (around 4 years since I was made aware of the concerns, and I know of no concerns about my behavior since then).

I'm not planning to comment more here. This is in my opinion a terrible place to have these conversations, as Dony pointed out as well.

comment by John G. Halstead (Halstead) · 2022-07-03T13:50:40.435Z · EA(p) · GW(p)

It's a comment that is typical of Jacy - he cannot help but dissemble. "I am also stepping back from the EA community more generally, as I have been planning to since last year in order to focus on my research." It makes it sound like he was going to step back anyway even while he was touting himself as an EA co-founder and was about to promote his book! In fact, if you read between the lines, CEA severed ties between him and the community. He then pretends that he was going to do this anyway. The whole apology is completely pathetic.

comment by John G. Halstead (Halstead) · 2022-07-03T15:02:27.493Z · EA(p) · GW(p)

Why should we believe that you have in fact changed? You were kicked out of Brown for sexual misconduct. You claim to believe that the allegations at that time were false. Instead of being extra-careful in your sexual conduct following this, at least five women complain to CEA about your sexual misconduct, and CEA calls the complaints 'credible and concerning'. There is zero reason to think you have changed.

Plus, you're a documented liar, so we should have no reason to believe you.

comment by John G. Halstead (Halstead) · 2022-07-01T17:04:19.795Z · EA(p) · GW(p)
• Were you expelled from Brown for sexual harassment? Or was that also for clumsy online flirting?
• You did lie on your website. It is false that you are a co-founder of effective altruism. There is not a single person in the world who thinks that is true, and you only said it to further your career. That you can't even acknowledge that that was a lie speaks volumes.
• Perhaps CEA can clarify whether there was any connection between the allegations and CEA severing ties with SI.
• Were the allegations reported to the Sentience Institute before CEA? Why did you not write a public apology before CEA approached you with the allegations? You agreeing with CEA to being banned from EA events and you being banned from EA events are the same thing.
• The issue is how long you should 'step away' from the community for.
Replies from: Owen_Cotton-Barratt
comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-01T21:29:09.712Z · EA(p) · GW(p)

I wouldn't have described Jacy as a co-founder of effective altruism and don't like him having had it on his website, but it definitely doesn't seem like a lie to me (I kind of dislike the term "co-founder of EA" because of how ambiguous it is).

Anyway I think calling it a lie is roughly as egregious a stretch of the truth as Jacy's claim to be a co-founder (if less objectionable since it reads less like motivated delusion). In both cases I'm like "seems wrong to me, but if you squint you can see where it's coming from".

comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-01T22:37:44.067Z · EA(p) · GW(p)

[meta for onlookers: I'm investing more energy into holding John to high standards here than Jacy because I'm more convinced that John is a good faith actor and I care about his standards being high. I don't know where Jacy is on the spectrum from "kind of bad judgement but nothing terrible" to "outright bad actor", but I get a bad smell from the way he seems to consistently present things in a way that puts him in a relatively positive light and ignores hard questions, so absent further evidence I'm just not very interested in engaging]

comment by John G. Halstead (Halstead) · 2022-07-03T14:30:20.516Z · EA(p) · GW(p)

"I don't know where Jacy is on the spectrum from "kind of bad judgement but nothing terrible" to "outright bad actor"."

I don't understand this and claims like it. To recap, he was thrown out of university in 2012 for sexual misconduct. Someone who was at Brown around this time told me that no-one else was expelled from Brown for sexual misconduct the entire time they were there. This suggests that his actions were very bad.

Despite being expelled from Brown, at least five women in the EA community then complain to CEA because of his sexual misconduct. CEA thinks these actions are bad enough to ban him from all EA events and dissociate from him completely. Despite Jacy giving the impression that was due to clumsy flirting, I strongly doubt that this is true. Clumsy flirting must happen a fair amount in this community given the social awkwardness of EAs, but few people are expelled from the community as a result. This again suggests that the allegations against Jacy are very bad.

This should update us towards the view that the Brown allegations were also true (noting that Jacy denies that they are true).

In your view he also makes statements that are gross exaggerations/delusional in order to further his career (though I mustn't say that he lied).

I think we have enough evidence for the 'bad actor' categorisation.

comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-03T16:44:18.941Z · EA(p) · GW(p)

It's from "man things in the world are typically complicated, and I haven't spent time digging into this, but although there surface level facts look bad I'm aware that selective quoting of facts can give a misleading impression".

I'm not trying to talk you out of the bad actor categorization, just saying that I haven't personally thought it through / investigated enough that I'm confident in that label. (But people shouldn't update on my epistemic state! It might well be I'd agree with you if I spent an hour on it; I just don't care enough to want to spend that hour.)

comment by John G. Halstead (Halstead) · 2022-07-03T14:35:54.820Z · EA(p) · GW(p)

Here is an interesting post on the strength of the evidence provided by multiple independent accusations of sexual misconduct throughout one's life.

http://afro-optimist.blogspot.com/2018/09/why-you-should-probably-believe-ford.html

comment by John G. Halstead (Halstead) · 2022-07-03T14:37:38.829Z · EA(p) · GW(p)

Isn't the upshot of this that you want to be more critical of good faith actors than bad faith actors? That seems wrong to me.

Replies from: Owen_Cotton-Barratt
comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-03T16:37:26.742Z · EA(p) · GW(p)

Yes, I personally want to do that, because I want to spend time engaging with good faith actors and having them in gated spaces I frequent.

In general I have a strong perfectionist streak, which I channel only to try to improve things which are good enough to seem worth the investment of effort to improve further. This is just one case of that.

(Criticizing is not itself something that comes with direct negative effects. Of course I'd rather place larger sanctions on bad faith actors than good faith actors, but I don't think criticizing should be understood as a form of sanctioning.)

comment by throwaway01 · 2022-07-01T23:47:56.698Z · EA(p) · GW(p)

Is Jacy's comment above where he seemed to present things in a way that puts him in a relatively positive light and ignores hard questions? Or the Apology post? I don't really see how you're getting that smell. John wrote a very negative comment, whether or not you think that negativity was justified, so it makes sense for Jacy to reply by pointing out inaccuracies that would make him seem more positive. I think it would take an extremely unusual person to engage in a discussion like this without steering it in a direction more positive towards themselves. I also just took the questions he "ignored" to be ones where he doesn't see the claims as inaccurate.

This is all not even mentioning how absolutely miserable and tired Jacy must be to go through this time and time again, again regardless of what you think of him as a person...

comment by John G. Halstead (Halstead) · 2022-07-03T14:02:25.860Z · EA(p) · GW(p)

In my opinion, this is a bizarre comment. You seem to have more sympathy with Jacy, who has been accused of sexual harassment at least six times in his life, for having to defend himself, than with e.g. the people who are reading this whom he has harassed, or the people who are worried that he might harass them in the future as he tries to rejoin the community.

comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-02T00:03:22.554Z · EA(p) · GW(p)

Actually no I got reasonably good vibes from the comment above. I read it as a bit defensive but it's a fair point that that's quite natural if he's being attacked.

I remember feeling bad about the vibes of the Apology post but I haven't gone back and reread it lately. (It's also a few years old, so he may be a meaningfully different person now.)

Replies from: Owen_Cotton-Barratt
comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-02T00:11:29.084Z · EA(p) · GW(p)

I actually didn't mean for any of my comments here to get into attacks on or defence of Jacy. I don't think I have great evidence and don't think I'm a very good person to listen to on this! I just wanted to come and clarify that my criticism of John was supposed to be just that, and not have people read into it a defence of Jacy.

(I take it that the bar for deciding personally to disengage is lower than for e.g. recommending others do that. I don't make any recommendations for others. Maybe I'll engage with Jacy later; I do feel happier about recent than old evidence, but it hasn't yet moved me to particularly wanting to engage.)

comment by John G. Halstead (Halstead) · 2022-07-02T07:11:30.936Z · EA(p) · GW(p)

So, are you saying it is an honest mistake but not a lie? His argument for being a co-founder seems to be that he was involved in the utilitarian forum Felicifia in 2008. He didn't even found it. I know several other people who founded or were involved in that forum and none of them has ever claimed to be a founder of effective altruism on that basis. Jacy is the only person to do that and it is clear he does it in order to advance his claim to be a public intellectual because it suggests to the outside world that he was as influential as Will MacAskill, Toby Ord, Elie Hassenfeld, and Holden Karnofsky, which he wasn't and he knows he wasn't.

The dissembling in the post is typical of him. He never takes responsibility for anything unless forced to do so.

Replies from: Owen_Cotton-Barratt
comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-02T08:00:44.300Z · EA(p) · GW(p)

I'm saying it's a gross exaggeration not a lie. I can imagine someone disinterested saying "ok but can we present a democratic vision of EA where we talk about the hundred founders?" and then looking for people who put energy early into building up the thing, and Jacy would be on that list.

(I think this is pretty bad, but that outright lying is worse, and I want to protect language to talk about that.)

comment by Lukas_Gloor · 2022-07-02T12:57:06.933Z · EA(p) · GW(p)

I want to flag that something like "same intention as outright lying, but doing it in a way to maximize plausible deniability" would be just as bad as outright lying. (It is basically "outright lying" but in a not stupid fashion.)

However, the problem is that sometimes people exaggerate or get things wrong for more innocuous reasons like exaggerated or hyperbolic speech or having an inflated sense of one's importance in what's happening. Those cases are indeed different and deserve to be treated very differently from lying (since we'd expect people to self-correct when they get the feedback, and avoid mistakes in the future). So, I agree with the point about protecting language. I don't agree with the implicit message "it's never as bad as outright lying when there's an almost-defensible interpretation somewhere." I think protecting the language is important for reasons of legibility and epistemic transparency, not so much because the moral distinction is always clean-cut.

Replies from: Owen_Cotton-Barratt
comment by James Ozden (JamesOz) · 2022-07-02T10:41:56.356Z · EA(p) · GW(p)

This feels off to me. It seems like Jacy deliberately misled people to think that he was a co-founder of EA, to likely further his own career. This feels like a core element of lying, to deceive people for personal gain, which I think is the main reason one would claim they're the co-founder of EA when almost no one else would say this about them.

Sure I think it can also be called "gross exaggeration" but where do you think the line is between "gross exaggeration" and "lying"? For me, lying means you say something that isn't true (in the eyes of most people) for significant personal gain (i.e. status) whereas gross exaggeration is a smaller embellishment and/or isn't done for large personal gain.

comment by John G. Halstead (Halstead) · 2022-07-03T14:10:06.248Z · EA(p) · GW(p)

You are taking charitable interpretations to an absolute limit here. You seem to be saying "maybe Jacy was endorsing a highly expansive conception of 'founding' which implies that EA has hundreds of founders". This is indeed a logical possibility. But I think the correct credence to have in this possibility is ~0. Instead, we should have ~1 credence in the following: "he said it knowing it is not true in order to further his career". And by 'founding' he meant, "I'm in the same bracket as Will MacAskill". Otherwise, why put it on your website and in your bio?

Replies from: Owen_Cotton-Barratt
comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-03T16:40:17.725Z · EA(p) · GW(p)

I don't think it's like "Jacy had an interpretation in mind and then chose statements". I think it's more like "Jacy wanted to say things that made himself look impressive, then with motivated reasoning talked himself into thinking it was reasonable to call himself a founder of EA, because that sounded cool".

(Within this there's a spectrum of more and less blameworthy versions, as well as the possibility of the straight-out lying version. My best guess is towards the blameworthy end of the not-lying versions, but I don't really know.)

comment by John G. Halstead (Halstead) · 2022-07-03T14:34:50.524Z · EA(p) · GW(p)

So rather than a lie, you think it might be a motivated delusion. Motivated delusions are obviously false. But then at the end you say it is not obviously false. This is inconsistent.

Replies from: Owen_Cotton-Barratt
comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2022-07-10T14:32:03.052Z · EA(p) · GW(p)

True/false isn't a dichotomy. The statement here was obviously a stretch / not entirely true. I'd guess it had hundreds of thousands of microlies ( https://forum.effectivealtruism.org/posts/SGFRneArKi93qbrRG/truthful-ai?commentId=KdG4kZEu9GA4324AE [EA(p) · GW(p)] )

But I think it's important to reserve terms like "lie" for "completely false", because otherwise you lose the ability to police that boundary (and it's important to police it, even if I also want higher standards enforced around many spaces I interact with).

comment by Harrison Durland (Harrison D) · 2022-07-01T15:11:01.739Z · EA(p) · GW(p)

Could you provide links or specific quotes regarding his claim of being a founder of EA? Perhaps unlikely, but maybe through web archive?

comment by Kirsten (Khorton) · 2022-07-01T15:29:10.095Z · EA(p) · GW(p)

It's briefly referenced in this recent post, though I don't think this is what John was talking about.

https://jacyanthis.com/some-early-history-of-effective-altruism

Replies from: RyanCarey
comment by John G. Halstead (Halstead) · 2022-07-01T16:13:16.437Z · EA(p) · GW(p)

https://nonprofitchronicles.com/2019/04/02/the-peculiar-metoo-story-of-animal-activist-jacy-reese/

comment by anonymous_ea · 2022-07-02T21:12:49.203Z · EA(p) · GW(p)

From [EA(p) · GW(p)] Jacy:

this was only on my website for a few weeks at most... I believe I also casually used the term elsewhere, and it was sometimes used by people in my bio description when introducing me as a speaker.

comment by Oliver Sourbut · 2022-06-30T15:42:00.545Z · EA(p) · GW(p)

My experience is different, with maybe 70% of AI x-risk researchers I've discussed with being somewhat au fait with the notion that we might not know the sign of future value conditional on survival. But I agree that it seems people (myself included) have a tendency to slide off this consideration or hope to defer its resolution to future generations, and my sample size is quite small (a dozen maybe) and quite correlated.

For what it's worth, I recall this question being explicitly posed in at least a few of the EA in-depth fellowship curricula I've consumed or commented on, though I don't recall specifics and when I checked EA Cambridge's most recent curriculum I couldn't find it.

Replies from: Ben_West, Jacy
comment by Ben_West · 2022-07-06T02:09:43.383Z · EA(p) · GW(p)

My anecdata is also that most people have thought about it somewhat, and "maybe it's okay if everyone dies" is one of the more common initial responses I've heard to existential risk.

But I agree with OP that I more regularly hear "people are worried about negative outcomes just because they themselves are depressed" than "people assume positive outcomes just because they themselves are manic" (or some other cognitive bias).

comment by Jacy · 2022-07-07T13:10:58.455Z · EA(p) · GW(p)

This is helpful data. Two important axes of variation here are:

- Time, where this has fortunately become more frequently discussed in recent years
- Involvement, where I speak a lot with artificial intelligence and machine learning researchers who work on AI safety but not global priorities research; often their motivation was just reading something like Life 3.0. I think these people tend to have thought through crucial considerations less than, say, people on this forum.

comment by DonyChristie · 2022-06-30T18:53:02.269Z · EA(p) · GW(p)

I like "quality risks" (q-risks?) and think this is more broadly appealing to people who don't want to think about suffering-reduction as the dominantly guiding frame for whatever reason. Moral trade can be done with people concerned with other qualities, such as worries about global totalitarianism due to reasons independent of suffering such as freedom and diversity.

It's also relatively more neglected than the standard extinction risks, which I am worried we are collectively Goodharting on as our focus (and to a lesser extent, focus on classical suffering risks may fall into this as well). For instance, nuclear war or climate change are blatant and obvious scary problems that memetically propagate well, whereas there may be many q-risks to future value that are more subtle and yet to be evinced.

Tangentially, this gets into a broader crux I am confused by: should we work on obvious things or nonobvious things? I am disposed towards the latter.

comment by Mauricio · 2022-06-30T21:25:30.067Z · EA(p) · GW(p)

Thanks for the thoughtful post! I agree this is a very important question, I sympathize [EA · GW] with the view that people overweight some arguments for historical optimism, and I'm mostly on board with the list of considerations. Still, I think your associated EV calculation has significant weaknesses, and correcting for these seems to produce much more optimistic results.

• You put the most weight on historical harms, and you also put a lot of weight on empirical utility asymmetry. But arguably, the future will be deeply different from the past (including through reduced influence of biological evolution), so simple extrapolation from the past or present should not receive very high weight. (For the same reason, we should also downweight historical progress.)
• Arguably, historical harms have occurred largely through the divergence of agency and patiency, so counting both is mostly double-counting. (Similarly, historical progress has largely occurred through the other mechanisms that are already covered.) So we should further downweight these.
• I don't see why we should put negative weight on "The Nature of Digital Minds, People, and Sentience."
• Reasoned cooperation should arguably receive significantly more weight as an argument for optimism; moral trade should allow altruists to substantially mitigate suffering and increase well-being, especially as human tools for shaping the world and efficiently coordinating continue to improve.
• Editing the calculation to account for all of the above (and ignoring my other, more minor quibbles), we reach a fairly optimistic result (especially if we're looking at the more relevant logarithmic sum).

(Edited to account for how downweighting history should mean downweighting historical progress, not just downweighting historical harms, and for phrasing tweaks.)

Replies from: Jacy
comment by Jacy · 2022-07-01T00:10:29.618Z · EA(p) · GW(p)

It's great to know where your specific weights differ! I agree that each of the arguments you put forth are important. Some specifics:

• I agree that differences in the future (especially the weird possibilities like digital minds and acausal trade) is a big reason to discount historical evidence. Also, by these lights, some historical evidence (e.g., relations across huge gulfs of understanding and ability like from humans to insects) seems a lot more important than others (e.g., the fact that animal muscle and fat happens to be an evolutionarily advantageous food source).
• I'm not sure if I'd agree that historical harms have occurred largely through divergence; there are many historical counterfactuals that could have prevented many harms: the nonexistence of humans, an expansion of the moral circle, better cooperation, discovery of a moral reality, etc. In many cases, a positive leap in any of these would have prevented the atrocity. What makes divergence more important? I would make the case based on something like "maximum value impact from one standard deviation change" or "number of cases where harm seemed likely but this factor prevented it." You could write an EA Forum post going into more detail on that. I would be especially excited for you to go through specific historical events and do some reading to estimate the role of (small changes in) each of these forces.
• As I mention in the post, reasons to put negative weight on DMPS include the vulnerability of digital minds to intrusion, copying, etc., the likelihood of their instrumental usefulness in various interstellar projects, and the possibility of many nested minds who may be ignored or neglected.
• I agree moral trade is an important mechanism of reasoned cooperation.

I'm really glad you put your own numbers in the spreadsheet! That's super useful. The ease of flipping the estimates from negative to positive and positive to negative is one reason I only make the conclusion "not highly positive" or "close to zero" rather than going with the mean estimate from myself and others (which would probably be best described as moderately negative; e.g., the average at an EA meetup where I presented this work was around -10).

I think your analysis is on the right track to getting us better answers to these crucial questions :)

Replies from: Mauricio, Jamie_Harris
comment by Mauricio · 2022-07-01T00:38:57.939Z · EA(p) · GW(p)

Thanks! Responding on the points where we may have different intuitions:

• Regarding your second bullet point, I agree there are a bunch of things that we can imagine having gone differently historically, where each would have been enough to make things go better. These other factors are all already accounted for, so putting the weight on historical harms/progress again still seems to be double-counting (even if which thing it's double-counting isn't well-defined).
• Regarding your third bullet point, thanks for flagging those points - I don't think I buy that any of them are reasons for negative weight.
• Intrusions could be harmful, but there could also be positive analogues.
• Duplication, instrumental usefulness, and nested minds are just reasons to think there might be more of these minds, so these considerations only seem net negative if we already have other reasons to assume these minds' well-being would be net negative (we may have such reasons, but I think these are already covered by other factors, so counting them here seems like double-counting)
• (As long as we're speculating about nested minds: should we expect them to be especially vulnerable because others wouldn't recognize them as minds? I'm skeptical; it seems odd to assume we'll be at that level of scientific progress without having learned how experiences work.)
• On interpretation of the spreadsheet:
• I think (as you might agree) that results should be taken as suggestive but far from definitive. Adding things up fails to capture many important dynamics of how these things work (e.g., cooperation might not just create good things but also separately counteract bad things).
• Still, insofar as we're looking at these results, I think we should mostly look at the logarithmic sum (because some dynamics of the future could easily be far more important than others).
• As I suggested, I have a few smaller quibbles, so these aren't quite my numbers (although these quibbles don't really matter if we're looking at the logarithmic sum).
Replies from: Jacy
comment by Jacy · 2022-07-07T13:21:02.783Z · EA(p) · GW(p)

Thanks for going into the methodological details here.

I think we view "double-counting" differently, or I may not be sufficiently clear in how I handle it. If we take a particular war as a piece of evidence, which we think fits into both "Historical Harms" and "Disvalue Through Intent," and it is overall -8 evidence on the EV of the far future, but it seems 75% explained through "Historical Harms" and 25% explained through "Disvalue Through Intent," then I would put -6 weight on the former and -2 weight on the latter. I agree this isn't very precise, and I'd love future work to go into more analytical detail (though as I say in the post, I expect more knowledge per effort from empirical research).
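The apportioning rule Jacy describes can be sketched in a few lines; the -8/75%/25% numbers are the hypothetical figures from the comment, and the function name is my own invention for illustration.

```python
# Sketch of the evidence-apportioning rule described above, using the
# hypothetical numbers from the comment: a piece of evidence gets one total
# weight, split across argument categories in proportion to how much each
# category "explains" it.

def apportion(total_weight, shares):
    """Split a piece of evidence's weight across categories by explanatory share."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {category: total_weight * share for category, share in shares.items()}

# A war counted as -8 evidence, 75% explained by "Historical Harms" and
# 25% by "Disvalue Through Intent":
weights = apportion(-8, {"Historical Harms": 0.75, "Disvalue Through Intent": 0.25})
print(weights)  # {'Historical Harms': -6.0, 'Disvalue Through Intent': -2.0}
```

On this scheme the evidence is counted exactly once in total, which is the sense in which splitting it across categories avoids double-counting.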

I also think we view "reasons for negative weight" differently. To me, the existence of analogues to intrusion does not make intrusion a non-reason. It just means we should also weigh those analogues. Perhaps they are equally likely and equal in absolute value if they obtain, in which case they would cancel, but usually there is some asymmetry. Similarly, duplication and nesting are factors that are more negative than positive to me, such as because we may discount and neglect the interests of these minds because they are more different and more separated from the mainstream (e.g., the nested minds are probably not out in society campaigning for their own interests because they would need to do so through the nest mind—I think you allude to this, but I wouldn't dismiss it merely because we'll learn how experiences work, such as because we have very good neuroscientific and behavioral evidence of animal consciousness in 2022 but still exploit animals).

Your points on interaction effects and nonlinear variation are well-taken and good things to account for in future analyses. In a back-of-the-envelope estimate, I think we should just assign values numerically and remember to feel free to widely vary those numbers, but of course there are hard-to-account-for biases in such assignment, and I think the work of GJP, QURI, etc. can lead to better estimation methods.

Replies from: Mauricio
comment by Mauricio · 2022-07-08T01:15:41.126Z · EA(p) · GW(p)

I think we're on a similar page regarding double-counting--the approach you describe seems like roughly what I was going for. (My last comment was admittedly phrased in an overly all-or-nothing way, but I think the numbers I attached suggest that I wasn't totally eliminating the weight on history.)

On whether we see "reasons for negative weight" differently, I think that might be semantic--I had in mind the net weight, as you suggest (I was claiming this net weight was 0). The suggestion that digital minds might be affected just by their being different is a good point that I hadn't been thinking about. (I could imagine some people speculating that this won't be much of a problem because influential minds will also eventually tend to be digital.) I tentatively think that does justify a mildly negative weight on digital minds, with the other factors you mention seeming to be fully accounted for in other weights.

comment by Jamie_Harris · 2022-07-16T09:02:21.743Z · EA(p) · GW(p)

I also put my intuitive scores into a copy of your spreadsheet.

In my head, I've tended to simplify the picture into essentially the "Value Through Intent" argument vs the "Historical Harms" argument, since these seem like the strongest arguments in either direction to me. In that framing, I lean towards the future being weakly positive.

But this post is a helpful reminder that there are various other arguments pointing in either direction (which, in my case, overall push me towards a less optimistic view). My overall view still seems pretty close to zero at the moment though.

Also interesting how wildly different each of our scores are. Partly I think this might be because I was quite confused/worried about double-counting. Also maybe just not fully grasping some of the points listed in the post.

comment by Oliver Sourbut · 2022-06-30T15:46:13.918Z · EA(p) · GW(p)

I've considered a possible pithy framing of the Life Despite Suffering question as a grim orthogonality thesis (though I'm not sure how useful it is):

We sometimes point to the substantial majority's revealed preference for staying alive as evidence of a 'life worth living'. But perhaps 'staying-aliveness' and 'moral patient value' can vary more independently than that claim assumes. This is the grim orthogonality thesis.

An existence proof for the 'high staying-aliveness x low moral patient value' quadrant is the complex of torturer+torturee, which quite clearly can reveal a preference for staying alive, while quite plausibly being net negative value.

Can we rescue the correlation of revealed 'staying-aliveness' preference with 'life worth livingness'?

We can maybe reason about value from the origin of moral patients we see, without having a physical theory of value. All the patients we see at present are presumably products of natural selection. Let's also assume for now that patienthood comes from consciousness.

Two obvious but countervailing observations

• to the extent that conscious content is upstream of behaviour but downstream of genetic content, natural selection will operate on conscious content to produce behaviour which is fitness-correlated
  • if positive conscious content produces attractive behaviour (and vice versa), we might anticipate that an organism 'doing well' according to suitable fitness-correlates would be experiencing positive conscious content
    • this seems maybe true of humans?
• to the extent that behaviour is downstream of non-conscious control processes, natural selection will operate on non-conscious control processes to produce behaviour which is fitness-correlated
  • we cannot rule out experiences 'not worth living' which nevertheless produce a net revealed staying-aliveness preference, if the behaviour is sufficiently under non-conscious control, or if the selection for behaviour downstream of negative conscious experience is weak
    • weak selection is especially likely in novel out-of-distribution situations
• in general, organisms which reveal preferences for not staying alive will never be ceteris paribus fitter (though there are special cases of course)

For non-naturally-selected moral patients, I think even the above bets are basically off.

comment by abukeki · 2022-06-30T14:34:08.281Z · EA(p) · GW(p)

Look into suffering-focused AI safety which I think is extremely important and neglected (and s-risks [? · GW]).

Replies from: Mauricio
comment by Mauricio · 2022-07-01T01:16:47.634Z · EA(p) · GW(p)

More specifically, I think there's a good case to be made* that most of the expected disvalue of suffering risks comes from cooperation failures, so I'd especially encourage people who are interested in suffering risks and AI to look into cooperative AI and cooperation on AI. (These are areas mentioned in the paper you cite and in related writing.)

(*Large-scale efforts to create disvalue seem like they would be much more harmful than smaller-scale or unintentional actions, especially as technology advances. And the most plausible reason I've heard for why such efforts might happen is that: various actors might commit to creating disvalue under certain conditions, as a way to coerce other agents, and they would then carry out these threats if the conditions come about. This would leave everyone worse off than they could have been, so it is a sort of cooperation failure. Sadism seems like less of a big deal in expectation, because many agents have incentives to engage in coercion, while relatively few agents are sadists.)

(More closely related to my own interest in them, cooperation failures also seem like one of the main types of things that may prevent humanity from creating thriving futures, so this seems like an area that people with a wide range of perspectives on the value of the future can work together on :)

comment by seanrson (seanrichardson@outlook.com) · 2022-06-30T14:01:45.510Z · EA(p) · GW(p)

I think considerations like these are important to challenge the recent emphasis on grounding x-risk (really, extinction risk) in near-term rather than long-term concerns. That perspective seems to assume that the EV of human expansion is pretty much settled, so we don’t have to engage too deeply with more fundamental issues in prioritization, and we can instead just focus on marketing.

I’d like to see more written directly comparing the tractability and neglectedness of population risk reduction and quality risk reduction. I wonder if you’ve perhaps overstated things in claiming that a lower EV for human expansion suggests shifting resources to long-term quality risks rather than, say, factory farming. It seems like this claim requires a more detailed comparison between possible interventions.

comment by sapphire (deluks917) · 2022-07-05T17:11:39.012Z · EA(p) · GW(p)

I find the following simple argument disturbing:

P1 - Currently, and historically, low-power beings (animals, children, old dying people) are treated very cruelly if treating them cruelly benefits the powerful even in minor ways. Weak benefits for the powerful empirically justify cruelty at scale.
P2 - There is no good reason to be sure the powerful won't have even minor reasons to be cruel to the powerless (ex: suffering sub-routines, human CEV might include spreading earth-like life widely or respect for tradition)
P3 - Inequality between agents is likely to become much more extreme as AI develops
P4 - The scale of potential suffering will increase by many orders of magnitude

C1 - We are fucked?

Personal Note - There is also no reason to assume me or my loved ones will remain relatively powerful beings

C2 - I'm really fucked!

Replies from: Davidmanheim
comment by Davidmanheim · 2022-07-06T17:11:12.025Z · EA(p) · GW(p)

Currently, and historically, low-power beings (animals, children, old dying people) are treated very cruelly if treating them cruelly benefits the powerful even in minor ways.

This is true, but far less true recently than in the past, and far less true in the near past than in the far past. That trajectory seems between somewhat promising and incredibly good - we don't have certainty, but I think the best guess is that in fact, it's true that the arc of history bends towards justice.

comment by Charlie_Guthmann (Charles_Guthmann) · 2022-06-30T17:57:38.584Z · EA(p) · GW(p)

Thanks for the post. I've also been surprised how little this is discussed, even though the value of x-risk reduction is almost totally conditional on the answer to this question (the EV of the future conditional on human/progeny survival). Here are my big points to bring up re this issue, though some might be slight rephrasings of yours.

1. Interpersonal comparisons of utility canonically [EA · GW] have two parts: a definition of utility, by which every sentient being is measured. Then, to compare and sum utility, one must pick (subjective) weights for each sentient being, scale their utility by the weights, and add everything up (u_1 x_1 + ... + u_n x_n). If we don't agree on the weights, it's possible that one person may think the future is in expectation positive while another thinks it will be negative, even w/ perfect information about what the future will look like. It could be even harder to agree on the weights of sentient beings when we don't even know which agents are going to be alive. We have obvious candidates for general rules about how to weight utility (brain size, pain receptors, etc.) but who knows how our conceptions of these things will change.
2. Basically repeating your last point in the chart, but it's really important so I'll reiterate. As with everything else normative, there is no objective "0" line, no non-arbitrary point at which life is worth living. It is a decision we have to make. Moreover, I don't see any agreement in this community on the specific point at which life is worth living. It is pretty obvious that disagreement about this could flip the sign of the EV of the future.
3. "Alien Counterfactuals". I actually mentioned this in a comment to a previous post [EA · GW] where someone said we should mostly just call longtermism x-risk (extremely wrong in my opinion). First, for simplicity, let's just assume humans become grabby. [? · GW] If we become grabby, a question of specific interest to us should be: what characteristics do our society and species have relative to other grabby societies/species? Are we going to be better or worse gatekeepers of the future than the other gatekeepers of the future? I'm pretty sure we should take the prior that we display the mean characteristics of a grabby civilization (interested in hearing if others disagree). If this is the case, then, assuming (again for simplicity) that our lightcone will be populated by aliens whether or not we specifically become grabby, x-risk reduction could be argued to have exactly zero expected value, as we have no reason to believe that we are going to do a better job with the future than aliens. Evidence updating against the prior would probably take the form of arguments about why our specific evolutionary or economic history was a weird way to become grabby, not an easy task. Of course, even with all the simplifying assumptions I've made, it's not so simple. Even if we have the mean characteristics of all the other grabby civilizations, adding more civilizations to the mix can change the game theory of space wars and governance. Still, it's not clear if more or fewer players is better.
I talked to a few people in EA about 'alien counterfactuals', and they all seemed to dismiss the argument, thinking that humans are better than "rolling the dice" on a new grabby civilization. No one provided arguments that were super convincing, though. The most convincing counterargument I heard was that it is very unlikely that grabby aliens will actually end up existing in our lightcone, subverting the whole argument. AI makes this argument significantly more confusing, but it's not worth getting into without further ironing out of the initial arguments.
4. And then this is sort of the whole point of your post, but I will reiterate: predicting the future is extremely difficult. We should have very little confidence in what it will be like. Predicting whether the future will be good or bad (given that we have already ironed out the normative considerations, which we haven't) is probably easier than predicting the future in detail, but still seems really difficult. The burden of evidence is on us to prove the future will be good, not on other people to prove it will be bad. After all, we are pumping huge amounts of money into creating impact which is completely conditional on this information. I've found posts like this one [? · GW] that you mentioned to be the only type of thing that even feels tractable, and if that is the level of specificity we are at, it truly does feel like we have been Pascal's wagered on this issue. Such posts ultimately don't have nearly enough firepower to serve as anything more than an exploration of what a full argument would look like.
comment by Ben_West · 2022-07-06T02:21:56.736Z · EA(p) · GW(p)

The thing I have most changed my mind about since writing the post of mine you cite is adjacent to the "disvalue through evolution" category: basically, I've become more worried that disvalue is instrumentally useful. E.g. maybe the most efficient paperclip maximizer is one that's really sad about the lack of paperclips.

There's some old writing on this by Carl Shulman and Brian Tomasik; I would be excited for someone to do a more thorough write-up/literature review for the red teaming contest (or just in general).

comment by MikeJohnson · 2022-07-05T11:27:57.209Z · EA(p) · GW(p)

As a small comment, I believe discussions of consciousness and moral value tend to downplay the possibility that most consciousness may arise outside of what we consider the biological ecosystem.

It feels a bit silly to ask “what does it feel like to be a black hole, or a quasar, or the Big Bang,” but I believe a proper theory of consciousness should have answers to these questions.

We don’t have that proper theory. But I think we can all agree that these megaphenomena involve a great deal of matter/negentropy and plausibly some interesting self-organized microstructure, though that’s purely conjecture. If we’re charting out EV, let’s keep the truly big numbers in mind (even if we don’t know how to count them yet).

Replies from: Guy Raveh
comment by Oliver Sourbut · 2022-06-30T15:13:46.219Z · EA(p) · GW(p)

Typo hint:

"10<sup>38</sup>" hasn't rendered how you hoped. You can use $10^{38}$, which renders as 10³⁸.

Replies from: tseyipfai@gmail.com, Oliver Sourbut, Jacy
comment by Fai (tseyipfai@gmail.com) · 2022-07-01T17:25:14.219Z · EA(p) · GW(p)

Maybe another typo? : "Bostrom argues that if humanizes could colonize the Virgo supercluster", should that be "humanity" or "humans"?

Replies from: Jacy
comment by Jacy · 2022-07-01T17:49:10.119Z · EA(p) · GW(p)

Good catch!

comment by Oliver Sourbut · 2022-07-04T08:39:27.694Z · EA(p) · GW(p)

It looks like I got at least one downvote on this comment. Should I be providing tips of this kind in a different way?

comment by Jacy · 2022-06-30T15:20:21.409Z · EA(p) · GW(p)

Whoops! Thanks!

comment by Harrison Durland (Harrison D) · 2022-07-01T15:24:06.933Z · EA(p) · GW(p)

I think that this large argument / counterargument table is a great example of how using a platform like Kialo to better structure debates could be valuable.

comment by ElliotJDavies · 2022-07-01T10:31:39.468Z · EA(p) · GW(p)

I think you have undervalued optionality value. Using Ctrl + F I have tried to find and summarise your claims against optionality value:

• EA only has a modest amount of "control" [I'm assuming control = optionality]
• EA won't retain much "control" over the future
• The argument for option value is based on circular logic
• Counterpoint, short x-risk timelines would be good from the POV of someone making an optionality value argument
• Counterpoint, optionality would be more important if aliens exist and propagate negative value
• Humans existing limits option value similar [question: by similar, do you mean equal to?] to that of non-existence
• We can't raise x-risk after we've lowered it

Without having thought about this for very long, I think the argument against optionality needs to be really, really strong, since you essentially need to demonstrate we have equal or better decision-making abilities right now than at any point in the future.

One of the reasons optionality seems like an exceptionally good argument is that uncertainty exists both inside and outside EV models (i.e., you can model EV and include some uncertainty, but then you need to account for uncertainty around the entire EV model, because you've likely made a ton of assumptions during the process). And it's extremely unlikely this uncertainty would remain constant over time. One way we try to improve our models of the world is by making predictions and seeing if we were correct. The two reasons we do this are that making predictions is hard (so it's a test for a model that's hard to pass), and that we have more information in the future.

The argument against optionality seems borderline tautological, because you essentially have to round all optionality value to 0, meaning the value of making predictions (and all of science, philosophy, etc.) is also 0.

I am basically making a fanatical argument here for optionality, whereby the only consideration that trumps it is opportunity cost.

comment by Bridges · 2022-06-30T17:57:46.838Z · EA(p) · GW(p)

Thanks for doing this work, but I don't have the patience to read it entirely. What is it you found, exactly? Please put it at the top of the summary.

Replies from: MichaelStJules
comment by MichaelStJules · 2022-06-30T18:03:16.310Z · EA(p) · GW(p)

In the associated spreadsheet, I list my own subjective evidential weight scores where positive numbers indicate evidence for +EV and negative numbers indicate evidence for -EV. It is helpful to think through these arguments with different assignment and aggregation methods, such as linear or logarithmic scaling. With different methodologies to aggregate my own estimates or those of others, the total estimate is highly negative around 30% of the time, weakly negative 40%, and weakly positive 30%. It is almost never highly positive. I encourage people to make their own estimates, and while I think such quantifications are usually better than intuitive gestalts, all such estimates should be taken with golf balls of salt.[5] [EA(p) · GW(p)]
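The sensitivity to aggregation method described above can be made concrete with a small sketch. This is purely illustrative, assuming hypothetical signed evidential-weight scores (not the spreadsheet's actual numbers): under linear summation one strong argument can outweigh several moderate ones, while under a sign-preserving logarithmic scaling the same scores can flip the total's sign.

```python
import math

# Hypothetical evidential-weight scores: positive = evidence for +EV,
# negative = evidence for -EV. Illustrative only, not the post's data.
scores = [10, -3, -3, -3]

# Linear aggregation: one strong positive argument outweighs three moderate
# negative ones.
linear_total = sum(scores)  # 10 - 3 - 3 - 3 = 1 (weakly positive)

# Logarithmic aggregation: log1p dampens the extreme score, so the three
# moderate negatives now dominate and the sign flips.
log_total = sum(math.copysign(math.log1p(abs(s)), s) for s in scores)

print(linear_total)           # 1
print(round(log_total, 2))    # -1.76
```

The point is not that either method is correct, but that a reasonable-seeming methodological choice alone can move the total estimate across zero, which is one reason the aggregate lands on different sides of neutral across methodologies.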

Replies from: Bridges
comment by Bridges · 2022-07-06T01:52:57.515Z · EA(p) · GW(p)

Yeah, I still don't know what this means. What is the granny-version pitch?

comment by Ben_West · 2022-07-01T04:23:38.147Z · EA(p) · GW(p)

Minor technical comment: the links to subsections in the topmost table link to the Google docs version of the article, and I think it would be slightly nicer if they linked to the forum post version.

Replies from: Jacy
comment by Jacy · 2022-07-01T04:59:27.061Z · EA(p) · GW(p)

Thanks! Fixed, I think.

comment by Noah Scales · 2022-06-30T21:23:02.304Z · EA(p) · GW(p)

You wrote

"There is a substantial philosophical literature on such topics that I will not wade into, and I believe such non-value-based arguments can be mapped onto value-based arguments with minimal loss (e.g., not having a duty to make happy people can be mapped onto there being no value in making happy people)."

Duty to accomplish X implies much more than an assessment of the value of X. To lack the (moral, legal, or ethical) obligation to bring about a state of affairs does not imply a sense that the state of affairs has no value to you or others.