Posts

some concerns with classical utilitarianism 2020-11-14T09:29:22.544Z

Comments

Comment by nil (eFish) on Investing to Give Beginner Advice? · 2021-07-04T18:46:34.055Z · EA · GW
  1. For a new investor, I think a simple and good method is getting a Vanguard Lifestrategy ISA with 100% equities - this buys you stocks across lots of different markets.

Does anyone know if there's an ISA (Individual Savings Account) w/ a fund that doesn't invest in meat and dairy companies and companies that test on animals? (I know that I can open an ISA on something like Trading 212 and invest in individual stocks myself. But due to having more important things to work on, I'm looking for a more "invest-and-forget" type of investing.)

Comment by nil (eFish) on Propose and vote on potential EA Wiki entries · 2021-06-28T22:15:44.075Z · EA · GW

Thanks, Pablo. The criteria will help to avoid some future long disputes (and thus save time for more important things), although they wouldn't have prevented my creating the entry for David Pearce, for he does fit the second condition, I think. (We disagree, I know.)

Comment by nil (eFish) on The unthinkable urgency of suffering · 2021-06-27T01:10:37.938Z · EA · GW

(I watched the post's karma drop from 10 to 5. Is there anything that controversial in or about the post?)

Comment by nil (eFish) on The unthinkable urgency of suffering · 2021-06-27T00:57:58.932Z · EA · GW

Imagine how it would change humanity's priorities if each day, "just" for a minute, each human adult experienced the worst suffering occurring that day on the planet (w/o going psychotic afterwards somehow). (And, for the reasons outlined in the post, we probably underestimate how much that torturous mind-"broadcasting" would change humanity's lived-out ethics.)

Comment by nil (eFish) on Kardashev for Kindness · 2021-06-12T00:53:37.064Z · EA · GW

The slow (if not reverse) progress towards a world without intense suffering is depressing, to say the least. So thank you for writing this inspiring piece.

It also reminded me of David Pearce's essay "High-tech Jainism". It outlines a path towards a civilization that has abolished suffering, while also warning about potential pitfalls like forgetting about suffering too soon, before it's prevented for all sentient beings. (In Suffering-Focused Ethics: Defense and Implications (ch. 13), mentioned in the post, Vinding even argues that, given the irreducible uncertainty about suffering re-emerging in the future, there's always risk in disconnecting from suffering completely.)

Comment by nil (eFish) on Constructive Criticism of Moral Uncertainty (book) · 2021-06-04T22:50:05.890Z · EA · GW

The more pessimistic argument is that moral progress arises as a function of economic and technological progress, and can't occur in isolation. We didn't give up slaves until it was economically convenient to do so, and likely won't give up meat until we have cost and flavor competitive alternatives.

FWIW this assessment seems true to me, at least for eating non-human animals, for I don't know enough about the economic drivers behind slavery. (If one is interested, there's a report by the Sentience Institute on the topic, titled "Social Movement Lessons From the British Antislavery Movement: Focused on Applications to the Movement Against Animal Farming".)

It's tempting to wash away our past atrocities under the guise of ignorance, but I'm worried humanity just knowingly does the wrong thing.

I would put it something like "as a rule, we do what is most convenient to us".

And I would also like to add that even if one causes terrible suffering "knowingly", there's still the irreducible ignorance of being disconnected from the first-hand experience of that suffering, I think. I.e., yes, we can say that one "knows" that one is causing extreme suffering, yet if one knew what this suffering is really like (i.e. if one had experienced it oneself), one wouldn't do it. (Come to think of it, this would also reduce one's moral uncertainty, by the way.)

Comment by nil (eFish) on Propose and vote on potential EA Wiki entries · 2021-06-01T18:28:47.698Z · EA · GW

I didn't mean to sound harsh. Thanks for pointing this out: it now seems obvious to me that that part sounds uncharitable. I do apologise, belatedly :(

What I meant is that currently these new, evolving inclusion criteria are difficult to find. And if they are used in dispute resolutions (from this case onwards), perhaps they should be referenced for contributors as part of the introduction text, for example.

Comment by nil (eFish) on Propose and vote on potential EA Wiki entries · 2021-05-31T21:51:02.804Z · EA · GW

Perhaps voting on cases where there is disagreement could achieve wider inclusiveness, or at least less controversy? Voters would be, e.g., the moderators (w/ an option to abstain) and several persons who are familiar w/ the work of a proposed person.

It may also help if the inclusion criteria are more specific and are not hidden until a dispute arises.

Comment by nil (eFish) on Propose and vote on potential EA Wiki entries · 2021-05-31T16:32:52.302Z · EA · GW

I should have been more clear about Drexler: I don't dispute that he is “connected to EA to a significant degree”. But so is Pearce, in my view, for the reasons outlined in this thread.

Comment by nil (eFish) on Propose and vote on potential EA Wiki entries · 2021-05-30T20:50:36.863Z · EA · GW

Chalmers and Hassabis fall under the category of "people who have attained eminence in their fields and who are connected to EA to a significant degree". Drexler, and perhaps also Chalmers, fall under the category of "academics who have conducted research of clear EA relevance".

First, I want to make it clear that I'm not suggesting that any of the persons I listed in my previous comment should be removed from the wiki. I just disagree that not including Pearce is justified.

Again, I honestly don’t think it is true that Chalmers and Drexler are “connected to EA to a significant degree” while Pearce isn’t. Especially Chalmers: from what I know, he isn’t engaged w/ effective altruism, besides once agreeing to be interviewed on the 80,000 Hours podcast.

As for the “attained eminence in their fields” condition, I do see that it may be harder to resolve in Pearce’s case, since he isn’t an academic but rather an independent philosopher, writer, and advocate. But if we take Pearce’s field to be suffering abolitionism, then the “attained eminence in their fields” condition does hold, in my view: he both founded the “abolitionist project” and has written extensively on the whys and hows of the project.

Also, as I mentioned in the original comment proposing the entry, Pearce’s work has inspired many EAs, including Brian Tomasik, the Qualia Research Institute’s Andrés Gómez Emilsson, and the Center for Reducing Suffering’s Magnus Vinding, as well as the nascent field of welfare/compassionate biology. The Invincible Wellbeing research group has likewise been inspired by Pearce's work.

I don’t have any new arguments to make, and I don’t expect anyone involved to change their minds anyway. I only hope it may be worth others’ time to contribute their perspectives on the dispute.

And as Michael suggests above, it may be more productive at this point to consider how many entries on EA-relevant persons are desirable in the first place.

Best regards,

nil

Comment by nil (eFish) on A list of EA-related podcasts · 2021-05-29T23:43:39.043Z · EA · GW

Comment by nil (eFish) on Propose and vote on potential EA Wiki entries · 2021-05-29T22:01:26.936Z · EA · GW

Thank you for appreciating the contribution.

Since Pablo is trusted w/ deciding on the issue, I will address my questions about the decision directly to him in this thread.

Comment by nil (eFish) on Propose and vote on potential EA Wiki entries · 2021-05-29T22:01:00.343Z · EA · GW

I'm sorry to hear this, Pablo, as I haven't been convinced that Pearce isn't relevant enough for effective altruism.

Also, I really don’t see how the persons below have contributed more or are more relevant to effective altruism than Pearce (that is not necessarily to say that their entries aren’t warranted!). Would it be correct to infer that at least some of these entries received less scrutiny than Pearce’s nomination?

Also, regarding:

After reviewing the discussion, and seeing that no new comments have been posted in the past five days, I've decided to delete the article, for the reasons I outlined previously.

May I ask why five days since the last comment were deemed enough to proceed with the deletion? Is this part of the wiki’s rules? (If so, it’s my fault for not replying in time.)

I also wanted to say that despite the disagreement, I appreciate that the wiki has a team committed to it.

Comment by nil (eFish) on Propose and vote on potential EA Wiki entries · 2021-05-28T20:28:19.110Z · EA · GW

For those who may want to see the deleted entry, I'm posting it below:


David Pearce is a philosopher and writer best known for his 1995 manifesto The Hedonistic Imperative and the associated ideas about abolishing suffering for all sentient life using biotechnology and other technologies.

Pearce argues that it is "technically feasible" and ethically rational to abolish suffering on the planet by replacing Darwinian suffering-based motivational systems with minds animated by "information-sensitive gradients of intelligent bliss" (as opposed to indiscriminate maxed-out bliss). He stresses that this "abolitionist project" is compatible with a diverse set of values and "intentional objects" (i.e. what one is happy "about").

In 1998, together with Nick Bostrom, Pearce co-founded the World Transhumanist Association, today known as Humanity+.

Pearce is the director of bioethics at Invincible Wellbeing and is on the advisory boards of the Organisation for the Prevention of Intense Suffering and, since 2021, the Qualia Research Institute. He is also a fellow of the Institute for Ethics and Emerging Technologies and is on the futurist advisory board of the Lifeboat Foundation.

Comment by nil (eFish) on Propose and vote on potential EA Wiki entries · 2021-05-20T17:06:06.967Z · EA · GW

... I think the reason for The Hassenfeld Exception is that, as far as I'm aware, the vast majority of his work has been very connected with GiveWell. So it's very important and notable, but doesn't need a distinct entry. Somewhat similar with Tegmark inasmuch as he relates to EA, though he's of course notable in the physics community for non-FLI-related reasons. ...

This makes sense to me, although one who is more familiar w/ their work may find their exclusion unwarranted. Thanks for clarifying!

In this light I still think an entry for Pearce is justified, to the degree that scientifically grounded proposals for abolishing suffering are an EA topic (and this is the main theme of Pearce's work). But I'm just one input, of course.

Regarding Tomasik, we have different intuitions here: if an entry for Tomasik may not be justified, then I would say this sets a high bar which only the original EA founders could reach. (Tomasik is a founder of an EA charity, the Foundational Research Institute / Center on Long-Term Risk; has written extensively on many topics highly relevant to EA; and is an advisor at the Center for Reducing Suffering, another EA org.) Anyway, this difference probably doesn't matter in practice, since you added that you lean towards Tomasik's having an entry.

Comment by eFish on [deleted post] 2021-05-19T22:09:14.129Z

deleted

Comment by eFish on [deleted post] 2021-05-19T22:08:25.665Z

I'll propose the tag on that page ...

Done.

Comment by nil (eFish) on Propose and vote on potential EA Wiki entries · 2021-05-19T22:04:10.888Z · EA · GW

David Pearce (the tag will be removed if others think it’s not warranted)

Arguments against:

  • One may see David Pearce as much more related to transhumanism (even if to the most altruistic “school” of transhumanism) than to EA (see e.g. Pablo’s comment).
  • Some of Pearce’s ideas go against certain established notions in EA: e.g. he thinks that the sentience of classical digital computers is impossible under the known laws of physics, that minimising suffering should take priority over increasing the happiness of the already well-off, and that environmental interventions alone, w/o raising individuals’ hedonic set-points and making these individuals invincible to severe suffering, cannot solve the problem of suffering and achieve sustainable high wellbeing for all.

I also should mention that I’m biased in proposing this tag, as Pearce’s work played a major role in my becoming an EA.

Arguments for:

  • For over 25 years David Pearce has been researching and writing about addressing the root cause of suffering on the planet using bio/nano/info/robo technology.
  • Pearce has been raising awareness about, and proposing solutions for, wild-animal suffering at least since 1995.
  • Several relatively prominent EAs cite Pearce’s work as having had a major influence on their values, including Brian Tomasik, the Qualia Research Institute’s Andrés Gómez Emilsson, and the Center for Reducing Suffering’s Magnus Vinding. Another recognition of Pearce’s work is his invitation to speak at EA Global: Melbourne 2015.
  • Unlike most other transhumanists, Pearce is antispeciesist and advocates using technology to benefit all sentient life.

Comment by eFish on [deleted post] 2021-05-16T14:28:01.591Z

Hi Pablo,

I'll propose the tag on that page, for I do think that a tag for David Pearce is justified (and if it isn't, then I might question some existing tags for EA persons).

Comment by nil (eFish) on What harm could AI safety do? · 2021-05-15T12:23:38.879Z · EA · GW

This is not about direct harm, but if AI risks are exaggerated to the degree that the worst scenarios are not even possible, then a lot of EA talent might be wasted.

Those who are skeptical about AI skepticism may be interested in reading Magnus Vinding's "Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique".

Comment by nil (eFish) on AMA: Tobias Baumann, Center for Reducing Suffering · 2021-04-17T16:31:52.903Z · EA · GW

David Pearce, a negative utilitarian, is the founding figure for [suffering abolition].

It might be of interest to some that Pearce is/was skeptical about the possibility or probability of s-risks related to digital sentience and space colonization: see his reply to What does David Pearce think about S-risks (suffering risks)? on Quora (where he also mentions the moral hazard of "understanding the biological basis of unpleasant experience in order to make suffering physically impossible").

Comment by nil (eFish) on Notes on EA-related research, writing, testing fit, learning, and the Forum · 2021-04-04T00:23:27.423Z · EA · GW

Thanks for sharing, Michael!

I think the Center for Reducing Suffering's Open Research Questions may be a helpful addition to Research ideas. (Do let me know if you think otherwise!)

Relatedly, CRS has an internship opportunity.

Also, perhaps this is intentional, but "Readings and notes on how to do high-impact research" appears twice in the list.

Comment by nil (eFish) on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-16T16:25:34.218Z · EA · GW

I saw only this old repo and assumed the Forum wasn't open source any more. Sorry for not looking further.

Comment by nil (eFish) on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-14T15:07:38.375Z · EA · GW

Has the team considered making the Forum open-source* and accepting code contributions from the community and others? What are the reasons for keeping the code repository private? Thank you!

* As far as I know, the EA Forum is not open-source, although it is based on the LessWrong platform, which is open-source.

Comment by nil (eFish) on Important Between-Cause Considerations: things every EA should know about · 2021-01-28T21:27:35.349Z · EA · GW

Thanks for doing this work!

Are we living in a simulation?

For what is IMO a cogent argument against the possibility that we live in a (digitally) simulated universe, please consider adding Gordon McCabe's paper "Universe creation on a computer".

Comment by nil (eFish) on What are some potential coordination failures in our community? · 2021-01-05T21:06:35.822Z · EA · GW

There's a new free open-source alternative called Logseq ("inspired by Roam Research, Org Mode, Tiddlywiki, Workflowy and Cuekeeper").

Comment by nil (eFish) on edoarad's Shortform · 2020-12-28T20:48:22.249Z · EA · GW

For those who won't read the paper, the phenomenon is called pluralistic ignorance (Wikipedia):

... is a situation in which a majority of group members privately reject a norm, but go along with it because they assume, incorrectly, that most others accept it.

Comment by nil (eFish) on Ask Rethink Priorities Anything (AMA) · 2020-12-14T15:50:42.778Z · EA · GW
  • If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?
  • What new charities do you want to be created by EAs?
  • What are the biggest mistakes Rethink Priorities has made?

Thank you!

Comment by nil (eFish) on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T21:09:35.484Z · EA · GW

In the real world, maybe we're alone. The skies look empty. Cynics might point to the mess on Earth and echo C.S. Lewis: "Let's pray that the human race never escapes from Earth to spread its iniquity elsewhere." Yet our ethical responsibility is to discover whether other suffering sentients exist within our cosmological horizon; establish the theoretical upper bounds of rational agency; and assume responsible stewardship of our Hubble volume. Cosmic responsibility entails full-spectrum superintelligence: to be blissful but not "blissed out" - high-tech Jainism on a cosmological scale. We don't yet know whether the story of life has a happy ending.

-- David Pearce, "High-Tech Jainism"

Comment by nil (eFish) on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T21:05:14.723Z · EA · GW

There’s ongoing sickening cruelty: violent child pornography, chickens are boiled alive, and so on. We should help these victims and prevent such suffering, rather than focus on ensuring that many individuals come into existence in the future. When spending resources on increasing the number of beings instead of preventing extreme suffering, one is essentially saying to the victims: “I could have helped you, but I didn’t, because I think it’s more important that individuals are brought into existence. Sorry.”

-- Simon Knutsson, "The One-Paragraph Case for Suffering-Focused Ethics"

Comment by nil (eFish) on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T21:00:23.998Z · EA · GW

If humanity is to minimize suffering in the future, it must engage with the world, not opt out of it.

-- Magnus Vinding (2015), Anti-Natalism and the Future of Suffering: Why Negative Utilitarians Should Not Aim For Extinction

Comment by nil (eFish) on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T20:56:10.699Z · EA · GW

[T]rue hedonic engineering, as distinct from mindless hedonism or reckless personal experimentation, can be profoundly good for our character. Character-building technologies can benefit utilitarians and non-utilitarians alike. Potentially, we can use a convergence of biotech, nanorobotics and information technology to gain control over our emotions and become better (post-)human beings, to cultivate the virtues, strength of character, decency, to become kinder, friendlier, more compassionate: to become the type of (post)human beings that we might aspire to be, but aren't, and biologically couldn't be, with the neural machinery of unenriched minds. Given our Darwinian biology, too many forms of admirable behaviour simply aren't rewarding enough for us to practise them consistently: our second-order desires to live better lives as better people are often feeble echoes of our baser passions. Too many forms of cerebral activity are less immediately rewarding, and require a greater capacity for delayed gratification, than their lowbrow counterparts. Likewise, many forms of altruistic behaviour ... are less rewarding than personal consumption.

-- David Pearce, Can Biotechnology Abolish Suffering?, "Utopian Neuroscience"

Comment by nil (eFish) on some concerns with classical utilitarianism · 2020-11-17T19:39:09.985Z · EA · GW

Thanks for the specific examples. I hope some of 80,000 Hours' staff members and persons who took 80,000 Hours' passage on the asymmetry for granted will consider your criticism too.

Comment by nil (eFish) on some concerns with classical utilitarianism · 2020-11-17T17:52:38.020Z · EA · GW

As I say in the text, I understand the appeal of CU. But I'd be puzzled if we accepted CU without modifications (I give some in the text, like Mendola's "ordinal modification" and Wolf’s “Impure Consequentialist Theory of Obligation”, as well as a CU based on an arguably more sophisticated model of suffering and happiness than the one-dimensional linear model).

Worse than being counterintuitive, IMO, is giving a false representation of reality: e.g. talking about "great" aggregate happiness or suffering where no one experiences anything of significance, or holding the notion of "canceling out" suffering with happiness elsewhere. (I concur with arguably many EAs in thinking that a kind of sentiocentric consequentialism could be the most plausible ethics.)

BTW some prominent defenders of suffering-focused ethics, such as Mayerfeld and Wolf mentioned in the text, hold a pluralistic account of ethics (Vinding, 2020, 8.1), where things besides suffering and happiness have intrinsic value. (I personally still fail to understand in what sense such intrinsic values, not reducible to suffering or happiness, can obtain.)

Comment by nil (eFish) on some concerns with classical utilitarianism · 2020-11-17T15:59:42.464Z · EA · GW

I'd also add the Very Repugnant Conclusion as a case for which I haven't heard a satisfying CU defense.

A defense of accepting or rejecting the Very Repugnant Conclusion (VRC) [for those who don't know, here's a full text (PDF) which defines both Conclusions in the introduction]? Accepting VRC would be required by CU, in this hypothetical. So, assuming CU, rejecting VRC would need justification.

it's quite hard to reject the idea that between (a) 1 million people experiencing a form of pain just slightly weaker than the threshold of "extreme" suffering, and (b) 1 person experiencing pain just slightly stronger than that threshold, (b) is the lesser evil.

Perhaps so. On the other hand, as Vinding also writes (ibid., 5.6; 8.10), the qualitative difference between extreme suffering and suffering that would become extreme if we pushed a bit further may still be huge. So "slightly weaker" would not apply to the severity of the suffering.

Also, irrespective of whether the above point is true, one may argue (as Taurek did, as I mention in the text) that (a) is still less bad than (b), for no one in (a) suffers as much as the one in (b).

... in general I think aggregation in axiology is much more defensible than classical utilitarianism wholesale.

Here we might at least agree that some forms of aggregating are more plausible than others, at least in practice: e.g. intrapersonal vs interpersonal aggregating.

The utility monster as well seems asymmetric in how repugnant it is when you formulate it in terms of happiness versus suffering.

Vinding too brings up such a disutility monster in Suffering-Focused Ethics: Defense and Implications, 3.1, BTW:

... the converse scenario in which we have a _dis_utility monster whose suffering increases as more pleasure is experienced by beings who are already well-off, it seems quite plausible to say that the disutility monster, and others, are justified in preventing these well-off beings from having such non-essential, suffering-producing pleasures. In other words, while it does not seem permissible to impose suffering on others (against their will) to create happiness, it does seem justified to prevent beings who are well-off from experiencing pleasure (even against their will) if their pleasure causes suffering.

Comment by nil (eFish) on some concerns with classical utilitarianism · 2020-11-16T20:07:50.179Z · EA · GW

Thanks for the example!

I worry that even when our philosophical assumptions are stated (which is already a good place to be in), it is easy to miss their important implications and not to question whether those implications make sense (as opposed to jumping directly to cause selection). (This kind of rigor would arguably be over-demanding in most cases but could still be a healthy practice for EA materials.)

Comment by nil (eFish) on Physical theories of consciousness reduce to panpsychism · 2020-11-15T11:49:20.565Z · EA · GW

Thanks for the reply.

... my guess is that basically classical/non-quantum phenomena can be sufficient for consciousness, since the quantum stuff going on in our heads doesn't seem that critical and could be individually replaced with "classical" interactions while preserving everything else in the brain as well as our behaviour.

I'm not sure how to understand your "sufficient", since to our best knowledge the world is quantum, and classical physics is only an approximation. (Quoting Pearce: "Why expect a false theory of the world, i.e. classical physics, to yield a true account of consciousness?")

One reason Pearce needs quantum phenomena is the so-called binding problem of consciousness, for on Pearce's account "phenomenal binding is classically impossible." IIRC phenomenal binding is also what drove David Chalmers to dualism.

I would say substrate doesn't matter ...

Indeed, it doesn't matter on a physicalistic idealist account. But currently, as far as we know, only brains support phenomenal binding (as opposed to being mere "psychotic noise"), because of the huge evolutionary advantage it confers (on the replicating genes).

... non-materialist physicalism is also compatible with what many would recognize as panpsychism ...

Good point. Thanks :)

Comment by nil (eFish) on Physical theories of consciousness reduce to panpsychism · 2020-11-14T21:52:04.480Z · EA · GW

Thanks for writing the post!

Since you write:

... I’m not claiming panpsychism is true, although this significantly increases my credence in it ...

I'm curious what your relative credence is in non-materialist, "idealistic" physicalism, if you're familiar with it. One contemporary account I'm most familiar with is David Pearce's "physicalistic idealism" ("an experimentally testable conjecture" that "reality is fundamentally experiential and that the natural world is exhaustively described by the equations of physics and their solutions") (see also Pearce's popular explanation of his views in a Quora post). Donald Hoffman's "Conscious Realism" would be another example (I haven't looked deeply into his work).

One can argue that idealistic physicalism is more parsimonious (by being a monistic physicalism) and thus more likely to be true(r) than panpsychism (which assumes property dualism). Panpsychism, on the other hand, may be more intuitive and more familiar to researchers, which may explain why it's discussed more(?) than non-materialist physicalism these days.

Comment by nil (eFish) on Intro to Consciousness + QRI Reading List · 2020-04-08T16:12:18.516Z · EA · GW

Thanks for crossposting the list on the Forum.

The 1st recommendation (Consciousness Realism: The Non-Eliminativist Physicalist View of Consciousness by Magnus Vinding) touches on the limits of physical simulations:

More generally, there is no guarantee that a simulation of something, no matter how much information it includes of that something, will have the same properties as the thing being simulated. [...]


For those who may be interested in the topic, consider Gordon McCabe's Universe creation on a computer. The paper elaborates on the limits of (digital) simulations of physical systems, bringing, IMO, healthy skepticism about the simulation hypothesis (and thus about the possibility of "simulated" minds).

Comment by nil (eFish) on What are the key ongoing debates in EA? · 2020-04-02T15:53:00.646Z · EA · GW

One such debate is how (un)important doing "AI safety" now is. See, for example, Lukas Gloor’s Altruists Should Prioritize Artificial Intelligence, from the Center on Long-Term Risk (previously known as the Foundational Research Institute), and Magnus Vinding's "point-by-point critique" of Gloor's essay, Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique.

Comment by nil (eFish) on Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism · 2020-04-01T12:43:14.687Z · EA · GW

Good point. Thank you.

Even classical utilitarianism can fall under the umbrella term of suffering-focused ethics if its supporters agree that we should still focus on reducing suffering in practice (for its neglectedness, the relative ease of prevention, as a common ground with other ethical views, etc.).

Comment by nil (eFish) on Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism · 2020-03-31T16:48:40.000Z · EA · GW

Negative utilitarianism (NU) isn't mentioned anywhere on the website, AFAIS. This ethical view has quite a few supporters among thinkers, and unlike classical utilitarianism (CU), NU appears satiable ("maximize happiness" vs "minimize misery"). There are subtypes like weak NU (lexical NU and lexical-threshold NU), consent-based NU, and perhaps OPIS' "xNU+".

Are there reasons for the omission?

Comment by nil (eFish) on Tips for overcoming low back pain · 2020-03-30T22:08:03.508Z · EA · GW

What worked in my case (of lower-back pain that I had suffered from for many years, starting in high school) was standing for 90-95% of my working time (at an improvised standing desk, and at an adjustable one at my last job). (I exercised in both cases, though mostly push-ups, running, and, most helpfully, pull-ups.)

Comment by nil (eFish) on Scientists’ attitudes towards improving the welfare of animals in the wild: a qualitative study · 2020-03-22T20:36:16.967Z · EA · GW

The study can be downloaded here

The link to the full report is missing.

Comment by nil (eFish) on Surveying attitudes towards helping wild animals among scientists and students · 2020-03-22T20:32:52.736Z · EA · GW

It can be insightful to read the particular obstacles to the three interventions that scholars and students thought of (found in the Results section of the full report).

(Similarly for the previous, qualitative study: as a comment to that previous report says, "Those quotes capture elements of the interviewees' thinking that are difficult to summarize.")

Comment by nil (eFish) on What are some software development needs in EA causes? · 2020-03-07T13:41:40.996Z · EA · GW

What I saw recently is mobile development (in a co-founder role) for a sleep-aid app and web development (WP plugins, DB architecture, API integration), full-time, for the Social Science Prediction Platform.

Check also EA Work Club and the Effective Altruism Job Postings and Effective Altruism Volunteering FB groups.

Also, I would try asking Effective Thesis if they can connect you with the right persons. (They connect students with suitable thesis projects, but I imagine they could serve your case too if they know of relevant connections.)

Thanks for your initiative!

Comment by nil (eFish) on Genetic Enhancement as a Cause Area · 2019-12-27T14:37:02.670Z · EA · GW

[...] UBI as a cause I think may rank above genetic enhancement [...]

I would counter that genetic enhancement would be the only cause that could address the root problem: the biology of suffering itself. Environmental interventions, in contrast, are ultimately limited by the "hedonic treadmill" effect (which is not to say, of course, that the worst cases like factory farming and extreme poverty should not be solved ASAP).

Comment by nil (eFish) on Genetic Enhancement as a Cause Area · 2019-12-26T20:35:55.409Z · EA · GW

Thanks for bringing up the topic!

In the long term, I believe selecting embryos for favorable traits will happen anyway, regardless of ethical qualms, because once the technology has been demonstrated, countries unwilling to adopt it will risk falling far behind.

Another reason selecting embryos may become the norm is that, as the technology matures, parents will eventually have the choice of at least a slightly higher hedonic set-point for their children. Why would they choose not to have happier children? Presumably, more positive children are more fun to raise and can be expected to be more successful in life. So, over time, psychological pain may be genetically eliminated or reduced. See this line of argument in David Pearce’s The Reproductive Revolution.

Also, “short-term” improvements in well-being can be seen as a long-termist goal too, as WMD, which are expected to be much more available in the future, are arguably less likely to be used by “life lovers”.

Comment by nil (eFish) on Eight high-level uncertainties about global catastrophic and existential risk · 2019-12-03T23:07:09.075Z · EA · GW
  1. Expected value of the future

I just wanted to mention the possibility of so-called suffering risks, or s-risks, which IMO should loom large in any attempt to meaningfully assess the expected value of the future. (Although, even if the future is negative on some assessment, it may still be better to avert x-risks in order to preserve intelligence and promote compassion for intense suffering, in the expectation that the intelligence will guard against suffering that would re-emerge in its absence (the way it "emerged" in the past).)

Comment by nil (eFish) on Interview with Michael Tye about invertebrate consciousness · 2019-08-08T12:18:18.679Z · EA · GW

Thank you for doing this, Max (and the supporters). These are good questions that warrant their own book =)

I find this passage making a particularly good point, so I quote it below for those who skipped that part:

In the case of hermit crabs, we find the relevant behavioral pattern. So, we may infer that, like us, they feel pain. To be sure, they have many fewer neurons. But why should we think that makes a difference to the presence of pain? It didn’t make any difference with respect to the complex pattern of behavior the crabs display in response to noxious stimuli. Why should it make any difference with respect to the cause of that behavior? It might, of course. There is no question of proof here. But that isn’t enough to overturn the inference.


We need to look more closely at invertebrate behavior and see whether and how much it matches ours with respect to a range of experiences—bodily, perceptual and emotional.

Comparisons with humans, I suppose, should come with many caveats. Still, for ancient(?) feelings like fear and pain, the approach seems valid from my layman's perspective on the area.

Of course, if one endorsed a type identity theory for conscious mental states, according to which experiences are one and the same as specific physico-chemical brain states, that would give one a reason to deny that digital beings lacked consciousness. But why accept the type identity theory? Given the diversity of sentient organisms in nature, it is extremely implausible to hold that for each type of experience, there is a single type of brain state with which it is identical.

If (globally bound) consciousness is "implemented" on a lower level, then different physico-chemical brain states for the same qualia may still be relevantly identical on that lower level. I mention this because IMO there are good reasons to be skeptical about digital consciousness.

[...] it is is extremely implausible to hold that [...]

A typo