some concerns with classical utilitarianism 2020-11-14T09:29:22.544Z


Comment by nil (eFish) on AMA: Tobias Baumann, Center for Reducing Suffering · 2021-04-17T16:31:52.903Z · EA · GW

David Pearce, a negative utilitarian, is the founding figure for [suffering abolition].

It might be of interest to some that Pearce is/was skeptical about the possibility or probability of s-risks related to digital sentience and space colonization: see his reply to What does David Pearce think about S-risks (suffering risks)? on Quora (where he also mentions the moral hazard of "understanding the biological basis of unpleasant experience in order to make suffering physically impossible").

Comment by nil (eFish) on Notes on EA-related research, writing, testing fit, learning, and the Forum · 2021-04-04T00:23:27.423Z · EA · GW

Thanks for sharing, Michael!

I think the Center for Reducing Suffering's Open Research Questions may be a helpful addition to Research ideas. (Do let me know if you think otherwise!)

Relatedly, CRS has an internship opportunity.

Also, perhaps this is intentional but "Readings and notes on how to do high-impact research" is repeated twice in the list.

Comment by nil (eFish) on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-16T16:25:34.218Z · EA · GW

I saw only this old repo and assumed the Forum wasn't open source any more. Sorry for not looking further.

Comment by nil (eFish) on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-14T15:07:38.375Z · EA · GW

Has the team considered making the Forum open-source* and accepting code contributions from the community and others? What are the reasons for keeping the code repository private? Thank you!

* As far as I know, the EA Forum is not open-source, although it is based on the LessWrong platform, which is open-source.

Comment by nil (eFish) on Important Between-Cause Considerations: things every EA should know about · 2021-01-28T21:27:35.349Z · EA · GW

Thanks for doing this work!

Are we living in a simulation?

For what is, IMO, a cogent argument against the possibility that we live in a (digitally) simulated universe, please consider adding Gordon McCabe's paper "Universe creation on a computer".

Comment by nil (eFish) on What are some potential coordination failures in our community? · 2021-01-05T21:06:35.822Z · EA · GW

There's a new free open-source alternative called Logseq ("inspired by Roam Research, Org Mode, Tiddlywiki, Workflowy and Cuekeeper").

Comment by nil (eFish) on edoarad's Shortform · 2020-12-28T20:48:22.249Z · EA · GW

For those who won't read the paper, the phenomenon is called pluralistic ignorance (Wikipedia):

... is a situation in which a majority of group members privately reject a norm, but go along with it because they assume, incorrectly, that most others accept it.

Comment by nil (eFish) on Ask Rethink Priorities Anything (AMA) · 2020-12-14T15:50:42.778Z · EA · GW
  • If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?
  • What new charities do you want to be created by EAs?
  • What are the biggest mistakes Rethink Priorities has made?

Thank you!

Comment by nil (eFish) on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T21:09:35.484Z · EA · GW

In the real world, maybe we're alone. The skies look empty. Cynics might point to the mess on Earth and echo C.S. Lewis: "Let's pray that the human race never escapes from Earth to spread its iniquity elsewhere." Yet our ethical responsibility is to discover whether other suffering sentients exist within our cosmological horizon; establish the theoretical upper bounds of rational agency; and assume responsible stewardship of our Hubble volume. Cosmic responsibility entails full-spectrum superintelligence: to be blissful but not "blissed out" - high-tech Jainism on a cosmological scale. We don't yet know whether the story of life has a happy ending.

-- David Pearce, "High-Tech Jainism"

Comment by nil (eFish) on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T21:05:14.723Z · EA · GW

There’s ongoing sickening cruelty: violent child pornography, chickens are boiled alive, and so on. We should help these victims and prevent such suffering, rather than focus on ensuring that many individuals come into existence in the future. When spending resources on increasing the number of beings instead of preventing extreme suffering, one is essentially saying to the victims: “I could have helped you, but I didn’t, because I think it’s more important that individuals are brought into existence. Sorry.”

-- Simon Knutsson, "The One-Paragraph Case for Suffering-Focused Ethics"

Comment by nil (eFish) on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T21:00:23.998Z · EA · GW

If humanity is to minimize suffering in the future, it must engage with the world, not opt out of it.

-- Magnus Vinding (2015), Anti-Natalism and the Future of Suffering: Why Negative Utilitarians Should Not Aim For Extinction

Comment by nil (eFish) on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T20:56:10.699Z · EA · GW

[T]rue hedonic engineering, as distinct from mindless hedonism or reckless personal experimentation, can be profoundly good for our character. Character-building technologies can benefit utilitarians and non-utilitarians alike. Potentially, we can use a convergence of biotech, nanorobotics and information technology to gain control over our emotions and become better (post-)human beings, to cultivate the virtues, strength of character, decency, to become kinder, friendlier, more compassionate: to become the type of (post)human beings that we might aspire to be, but aren't, and biologically couldn't be, with the neural machinery of unenriched minds. Given our Darwinian biology, too many forms of admirable behaviour simply aren't rewarding enough for us to practise them consistently: our second-order desires to live better lives as better people are often feeble echoes of our baser passions. Too many forms of cerebral activity are less immediately rewarding, and require a greater capacity for delayed gratification, than their lowbrow counterparts. Likewise, many forms of altruistic behaviour ... are less rewarding than personal consumption.

-- David Pearce, Can Biotechnology Abolish Suffering?, "Utopian Neuroscience"

Comment by nil (eFish) on some concerns with classical utilitarianism · 2020-11-17T19:39:09.985Z · EA · GW

Thanks for the specific examples. I hope some of 80,000 Hours' staff members, and people who took 80,000 Hours' passage on the asymmetry for granted, will consider your criticism too.

Comment by nil (eFish) on some concerns with classical utilitarianism · 2020-11-17T17:52:38.020Z · EA · GW

As I say in the text, I understand the appeal of CU. But I'd be puzzled if we accepted CU without modifications (I give some in the text, like Mendola's "ordinal modification" and Wolf's "Impure Consequentialist Theory of Obligation", as well as a CU based on an arguably more sophisticated model of suffering and happiness than the one-dimensional linear one).

Worse than being counterintuitive, IMO, is giving a false representation of reality: e.g. talking about "great" aggregate happiness or suffering where no one experiences anything of significance, or holding the notion of "canceling out" suffering with happiness elsewhere. (I concur with arguably many EAs in the respect that a kind of sentiocentric consequentialism could be the most plausible ethics.)

BTW some prominent defenders of suffering-focused ethics - such as Mayerfeld and Wolf mentioned in the text - hold a pluralistic account of ethics (Vinding, 2020, 8.1), where things besides suffering and happiness have an intrinsic value. (I personally still fail to understand in what sense such intrinsic values that are not reducible to suffering or happiness can obtain.)

Comment by nil (eFish) on some concerns with classical utilitarianism · 2020-11-17T15:59:42.464Z · EA · GW

I'd also add the Very Repugnant Conclusion as a case for which I haven't heard a satisfying CU defense.

A defense of accepting, or of rejecting, the Very Repugnant Conclusion (VRC)? [For those who don't know, here's a full text (PDF) which defines both Conclusions in the introduction.] Accepting the VRC would be required by CU in this hypothetical; so, assuming CU, it is rejecting the VRC that would need justification.

it's quite hard to reject the idea that between (a) 1 million people experiencing a form of pain just slightly weaker than the threshold of "extreme" suffering, and (b) 1 person experiencing pain just slightly stronger than that threshold, (b) is the lesser evil.

Perhaps so. On the other hand, as Vinding also writes (ibid., 5.6; 8.10), the qualitative difference between extreme suffering and suffering that would be extreme if we pushed a bit further may still be huge. So "slightly weaker" would not apply to the severity of the suffering.

Also, irrespective of whether the above point is true, one may argue (as Taurek did, as I mention in the text) that (a) is still less bad than (b), for no one in (a) suffers as much as the one in (b).

... in general I think aggregation in axiology is much more defensible than classical utilitarianism wholesale.

Here we might at least agree that some forms of aggregation are more plausible than others, at least in practice: e.g. intrapersonal vs. interpersonal aggregation.

The utility monster as well seems asymmetric in how repugnant it is when you formulate it in terms of happiness versus suffering.

Vinding too brings up such a disutility monster in Suffering-Focused Ethics: Defense and Implications, 3.1, BTW:

... the converse scenario in which we have a _dis_utility monster whose suffering increases as more pleasure is experienced by beings who are already well-off, it seems quite plausible to say that the disutility monster, and others, are justified in preventing these well-off beings from having such non-essential, suffering-producing pleasures. In other words, while it does not seem permissible to impose suffering on others (against their will) to create happiness, it does seem justified to prevent beings who are well-off from experiencing pleasure (even against their will) if their pleasure causes suffering.

Comment by nil (eFish) on some concerns with classical utilitarianism · 2020-11-16T20:07:50.179Z · EA · GW

Thanks for the example!

I worry that even when our philosophical assumptions are stated (which is already a good place to be in), it is easy to miss their important implications and not to question whether those implications make sense (as opposed to jumping directly to cause selection). (This kind of rigor would arguably be over-demanding in most cases but could still be a healthy measure for EA materials.)

Comment by nil (eFish) on Physical theories of consciousness reduce to panpsychism · 2020-11-15T11:49:20.565Z · EA · GW

Thanks for the reply.

... my guess is that basically classical/non-quantum phenomena can be sufficient for consciousness, since the quantum stuff going on in our heads doesn't seem that critical and could be individually replaced with "classical" interactions while preserving everything else in the brain as well as our behaviour.

I'm not sure how to understand your "sufficient", since to the best of our knowledge the world is quantum, and classical physics is only an approximation. (Quoting Pearce: "Why expect a false theory of the world, i.e. classical physics, to yield a true account of consciousness?".)

One reason Pearce needs quantum phenomena is the so-called binding problem of consciousness, for on Pearce's account "phenomenal binding is classically impossible." IIRC, phenomenal binding is also what drives David Chalmers to dualism.

I would say substrate doesn't matter ...

It indeed doesn't matter on a physicalistic idealist account. But currently, as far as we know, only brains support phenomenal binding (as opposed to being mere "psychotic noise"), thanks to the huge evolutionary advantage it confers (on the replicating genes).

... non-materialist physicalism is also compatible with what many would recognize as panpsychism ...

Good point. Thanks :)

Comment by nil (eFish) on Physical theories of consciousness reduce to panpsychism · 2020-11-14T21:52:04.480Z · EA · GW

Thanks for writing the post!

Since you write:

... I’m not claiming panpsychism is true, although this significantly increases my credence in it ...

I'm curious: what is your relative credence in non-materialist, "idealistic" physicalism, if you're familiar with it? One contemporary account I'm most familiar with is David Pearce's "physicalistic idealism" ("an experimentally testable conjecture" that "reality is fundamentally experiential and that the natural world is exhaustively described by the equations of physics and their solutions"; see also Pearce's popular explanation of his views in a Quora post). Donald Hoffman's "conscious realism" would be another example (I haven't looked deeply into his work).

One can argue that idealistic physicalism is more parsimonious (by being a monistic physicalism) and thus more likely to be true(r) than panpsychism (which assumes property dualism). Panpsychism, on the other hand, may be more intuitive and more familiar to researchers, which may explain why it's discussed more(?) these days than non-materialist physicalism.

Comment by nil (eFish) on Intro to Consciousness + QRI Reading List · 2020-04-08T16:12:18.516Z · EA · GW

Thanks for crossposting the list on the Forum.

The 1st recommendation (Consciousness Realism: The Non-Eliminativist Physicalist View of Consciousness by Magnus Vinding) touches on the limits of physical simulations:

More generally, there is no guarantee that a simulation of something, no matter how much information it includes of that something, will have the same properties as the thing being simulated. [...]


For those who may be interested in the topic, consider Gordon McCabe's Universe creation on a computer. The paper elaborates on the limits of (digital) simulations of physical systems, bringing, IMO, healthy skepticism about the simulation hypothesis (and thus about the possibility of "simulated" minds).

Comment by nil (eFish) on What are the key ongoing debates in EA? · 2020-04-02T15:53:00.646Z · EA · GW

One such debate is how (un)important doing "AI safety" now is. See, for example, Lukas Gloor's Altruists Should Prioritize Artificial Intelligence, from the Center on Long-Term Risk (previously known as the Foundational Research Institute), and Magnus Vinding's "point-by-point critique" of Gloor's essay in Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique.

Comment by nil (eFish) on Launching An Introductory Online Textbook on Utilitarianism · 2020-04-01T12:43:14.687Z · EA · GW

Good point. Thank you.

Even classical utilitarianism can fall under the umbrella term of suffering-focused ethics if its supporters agree that we should still focus on reducing suffering in practice (for its neglectedness, the relative ease of prevention, as a common ground with other ethical views, etc.).

Comment by nil (eFish) on Launching An Introductory Online Textbook on Utilitarianism · 2020-03-31T16:48:40.000Z · EA · GW

Negative utilitarianism (NU) isn't mentioned anywhere on the website, AFAIS. This ethical view has quite a few supporters among thinkers, and unlike classical utilitarianism (CU), NU appears satiable ("maximize happiness" vs "minimize misery"). There are subtypes like weak NU (lexical NU and lexical-threshold NU), consent-based NU, and perhaps OPIS' "xNU+".

Are there reasons for the omission?

Comment by nil (eFish) on Tips for overcoming low back pain · 2020-03-30T22:08:03.508Z · EA · GW

What worked in my case (lower back pain that I had suffered from for many years, starting in high school) was standing 90-95% of my working time (at an improvised standing desk, and at an adjustable one at my last job). (I exercised in both cases, though mostly push-ups, running, and, most helpfully, pull-ups.)

Comment by nil (eFish) on Scientists’ attitudes towards improving the welfare of animals in the wild: a qualitative study · 2020-03-22T20:36:16.967Z · EA · GW

The study can be downloaded here

The link to the full report is missing.

Comment by nil (eFish) on Surveying attitudes towards helping wild animals among scientists and students · 2020-03-22T20:32:52.736Z · EA · GW

It can be insightful to read the particular obstacles to the three interventions that scholars and students thought of (found in the Results section of the full report).

(Similarly for the previous, qualitative study: as a comment to that previous report says, "Those quotes capture elements of the interviewees' thinking that are difficult to summarize.")

Comment by nil (eFish) on What are some software development needs in EA causes? · 2020-03-07T13:41:40.996Z · EA · GW

What I saw recently is mobile development (in a co-founder role) for a sleep-aid app and web development (WP plugins, DB architecture, API integration), full-time, for the Social Science Prediction Platform.

Check also EA Work Club and the Effective Altruism Job Postings and Effective Altruism Volunteering FB groups.

Also, I would try asking Effective Thesis whether they can connect you with the right people. (They connect students with suitable thesis projects, but I imagine they could serve your case too if they have the relevant connections.)

Thanks for your initiative!

Comment by nil (eFish) on Genetic Enhancement as a Cause Area · 2019-12-27T14:37:02.670Z · EA · GW

[...] UBI as a cause I think may rank above genetic enhancement [...]

I would counter that genetic enhancement may be the only cause that could address the root problem: the biology of suffering itself. Environmental interventions, in contrast, are ultimately limited by the "hedonic treadmill" effect (which is not to say, of course, that the worst cases, like factory farming and extreme poverty, should not be solved ASAP).

Comment by nil (eFish) on Genetic Enhancement as a Cause Area · 2019-12-26T20:35:55.409Z · EA · GW

Thanks for bringing up the topic!

In the long term, I believe selecting embryos for favorable traits will happen anyway, regardless of ethical qualms, because once the technology has been demonstrated, countries unwilling to adopt it will risk falling far behind.

Another reason selecting embryos may become the norm is that, as the technology matures, parents will eventually have the choice of at least a slightly higher hedonic set-point for their children. Why would they choose not to have happier children? Presumably, more positive children are more fun to raise and are expected to be more successful in life. So, over time, psychological pain may be genetically eliminated or reduced. See this line of argument in David Pearce's The Reproductive Revolution.

Also, “short-term” improvements in well-being can be seen as a long-termist goal too, as WMDs, which are expected to be much more available in the future, are arguably less likely to be used by “life lovers”.

Comment by nil (eFish) on Eight high-level uncertainties about global catastrophic and existential risk · 2019-12-03T23:07:09.075Z · EA · GW
  1. Expected value of the future

I just wanted to mention the possibility of so-called suffering risks, or s-risks, which IMO should loom large in any attempt to meaningfully assess the expected value of the future. (Although, even if the future is negative on some assessment, it may still be better to avert x-risks, so as to preserve intelligence and promote compassion for intense suffering, in the expectation that the intelligence will guard against suffering that would re-emerge in its absence (the way it "emerged" in the past).)

Comment by nil (eFish) on Interview with Michael Tye about invertebrate consciousness · 2019-08-08T12:18:18.679Z · EA · GW

Thank you for doing this, Max (and the supporters). These are good questions that warrant their own book =)

I find this passage making a particularly good point, so I quote it below for those who skipped that part:

In the case of hermit crabs, we find the relevant behavioral pattern. So, we may infer that, like us, they feel pain. To be sure, they have many fewer neurons. But why should we think that makes a difference to the presence of pain? It didn’t make any difference with respect to the complex pattern of behavior the crabs display in response to noxious stimuli. Why should it make any difference with respect to the cause of that behavior? It might, of course. There is no question of proof here. But that isn’t enough to overturn the inference.

We need to look more closely at invertebrate behavior and see whether and how much it matches ours with respect to a range of experiences—bodily, perceptual and emotional.

Comparing with humans, I suppose, should come with many caveats. Still, for ancient(?) feelings like fear and pain, the approach seems valid from my layman's perspective in the area.

Of course, if one endorsed a type identity theory for conscious mental states, according to which experiences are one and the same as specific physico-chemical brain states, that would give one a reason to deny that digital beings lacked consciousness. But why accept the type identity theory? Given the diversity of sentient organisms in nature, it is extremely implausible to hold that for each type of experience, there is a single type of brain state with which it is identical.

If (globally bound) consciousness is "implemented" on a lower level, then it may still be possible for different physico-chemical brain states for the same qualia to be relevantly identical on that lower level. I mention this because IMO there are good reasons to be sceptical about digital consciousness.

[...] it is is extremely implausible to hold that [...]

A typo

Comment by nil (eFish) on Interview with Shelley Adamo about invertebrate consciousness · 2019-06-22T12:15:08.858Z · EA · GW

Thank you for the work, Max (et al.)!

The following are some comments/questions for anyone interested.

Evolutionary theory suggests that insects will be selected to have emotions if the benefits of having them are greater than the costs of generating them. However, the costs appear to be heavy, and the benefits seem minimal.

Could anyone point me to the evidence for or against emotions being “computationally” expensive? Are emotions computations at all?

Nervous systems are very expensive for animals.

Given that we don’t know the nature of phenomenology (“Unfortunately, we don’t know how we generate emotions.”), maybe emotions (or at least simpler feelings) are energetically cheap and simple enough for evolution to have selected them early in the history of life? I would again appreciate pointers to relevant literature.

I do not see why robots couldn’t have an internal experience (i.e. feelings) if their artificial neural networks had functionally the same type of connections as we use to produce emotions.

If, e.g., the unique valence properties of the carbon atom are part of how (phenomenally bound) consciousness happens, then classical artificial neural networks cannot be functionally the same in the relevant sense.

Also, how does the fact that digital computations are interpretation-dependent affect the possibility of digital consciousness? (A tangentially relevant paper worth sharing is Universe creation on a computer by Gordon McCabe.)

The human brain is the ‘swiss army knife’ of brains; we can use our cognition to do almost anything. We are not especially speedy, but we can build cars. We’re not great swimmers (like a dolphin), but we can build boats. We can’t fly, but we can build planes.

We as a distributed “intelligence”, yes; one human alone cannot do these things. I find this quote from Magnus Vinding’s Reflections on Intelligence illuminating on the (off-)topic:

“Human intelligence” is often compared to “chimpanzee intelligence” in a manner that presents the former as being so much more awesome than, and different from, the latter. Yet this is not the case. If we look at individuals in isolation, a human is hardly that much more capable than a chimpanzee. They are both equally unable to read and write on their own, not to mention building computers or flying to the moon. And this is also true if we compare a tribe of, say, thirty humans with a tribe of thirty chimpanzees. Such two tribes rule the Earth about equally little. What really separates humans from chimpanzees, however, is that humans have a much greater capacity for accumulating information, especially through language. And it is this – more precisely, millions of individuals cooperating with this, in itself humble and almost useless, ability – that enables humans to accomplish the things we erroneously identify with individual abilities: communicating with language, doing mathematics, uncovering physical laws, building things, etc. It is essentially this you can do with a human that you cannot do with a chimpanzee: train them to contribute modestly to society. To become a well-connected neuron in the collective human brain. Without the knowledge and tools of previous generations, humans are largely indistinguishable from chimpanzees.

Comment by nil (eFish) on What are people's objections to earning-to-give? · 2019-04-14T14:54:33.077Z · EA · GW

My E2G experience:

In the summer of 2016, I decided to continue self- and university training as a software engineer instead of switching to study biotechnology (I had been accepted to two bachelor's programs). The main influence on the decision was my high uncertainty about having an impact in the latter case (relative to a much less risky E2G career), made more salient by my chronic depression.

Having been working as a full-time software developer in Berlin since summer '18, I find the job^ demoralizing, the environment of the "common" people intellectually toxic, and my impact in terms of reducing suffering in the world (via E2G in this case) insignificant. I think dedicated EAs can do better than spending their productive time like this.

If I manage my worsening mental health, I will quit this summer to find a better EA application of my life. (I would like to switch from software engineering to a different field, but given my Russian citizenship and my official training, I don't yet see how I could get a visa in any promising country.)

^ The company does media monitoring SaaS business. I felt desperate enough to accept the offer.

Comment by nil (eFish) on EA Survey 2018 Series: Geographic Differences in EA · 2019-02-20T17:44:30.894Z · EA · GW

Thank you for the analysis!

Two formatting issues:

(Please, don't reply! =) )

Comment by nil (eFish) on EA Survey 2018 Series: Where People First Hear About EA and Influences on Involvement · 2019-02-19T19:13:22.382Z · EA · GW

Two formatting issues:

1. IV - Subscribers and Identifiers leads to a comment

2. VII- Group Membership should lead to

Comment by nil (eFish) on Cause profile: mental health · 2019-02-16T16:46:25.520Z · EA · GW

Footnote [40] is orphaned.

Comment by nil (eFish) on What Is Effective Altruism? · 2019-01-10T18:49:31.982Z · EA · GW
Various book titles define it as Doing Good Better or The Most Good You Can Do.

Or Effective Altruism: How Can We Best Help Others? ;)

Comment by nil (eFish) on Introduction to Effective Altruism Reading List · 2018-11-21T09:46:18.085Z · EA · GW

I usually recommend Magnus Vinding's Effective Altruism: How Can We Best Help Others?, sometimes even to people not new to the EA ideas: the ebook is short but dense and available for free.

This book is part introduction to, part reflective examination of, the idea and ideal of effective altruism. Its aim is to examine the question: how can we best help others? A question that in turn forces us to contemplate what helping others, effectively or otherwise, really entails. Here the book argues that the greatest help we can provide is to reduce extreme suffering for all sentient beings.

Comment by nil (eFish) on Current Estimates for Likelihood of X-Risk? · 2018-08-12T16:12:45.829Z · EA · GW

In Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks the author Phil Torres mentions (in addition to FHI's informal 2008 survey) that

"the philosopher John Leslie argues that we have a 30 percent chance of extinction in the next five centuries"

and that

"the cosmologist Martin Rees writes in a 2003 book that civilization has a 50-50 chance of surviving the present century."

The book also references Bulletin of the Atomic Scientists' Doomsday Clock, which is now (as of 2018) as close to "midnight" as it was in 1953.

Comment by nil (eFish) on How to have cost-effective fun · 2018-07-09T23:11:01.834Z · EA · GW

I would recommend not consuming alcohol at all. (In my experience, total abstinence is much easier than trying to stay below a small limit.)