What should we actually do in response to moral uncertainty?

post by John Bridge · 2022-05-30T09:48:14.234Z · EA · GW · 5 comments

This is a question post.

I'm aware of Moral Uncertainty and the moral parliament model, as well as this (incomplete) sequence [? · GW] by MichaelA, but I'm not sure what concrete actions moral uncertainty entails.

What specific actions should someone take if they are highly uncertain about the validity of different ethical theories?

Answers

answer by freedomandutility · 2022-05-30T10:38:55.136Z · EA(p) · GW(p)

Avoid the actions that are endorsed by your favoured moral theory but which most severely violate other moral theories.
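This heuristic can be sketched as a simple veto rule: keep only actions that no other moral theory condemns too strongly. All theory names, actions, scores, and the threshold below are hypothetical illustrations on an assumed common scale, not canonical values.

```python
# Hypothetical sketch of the heuristic above: drop any action that scores
# very badly under some plausible moral theory, even if your favoured
# theory endorses it. All names and numbers are illustrative.

scores = {  # score of each action under each theory (assumed comparable scale)
    "lie_for_good_outcome": {"utilitarianism": 0.8, "deontology": -0.9},
    "tell_truth":           {"utilitarianism": 0.4, "deontology": 0.7},
}

VETO = -0.5  # threshold below which a theory "severely" objects

def acceptable(action):
    """An action is acceptable if no theory scores it below the veto line."""
    return all(s > VETO for s in scores[action].values())

permitted = [a for a in scores if acceptable(a)]
print(permitted)  # ['tell_truth']
```

Here lying is vetoed because deontology scores it at -0.9, even though utilitarianism rates it highly; the rule trades some expected value under your favoured theory for robustness across theories.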

comment by Yellow (Daryl) (Daryl D'Souza) · 2022-05-30T13:09:45.219Z · EA(p) · GW(p)

But for every action that is considered moral under one moral theory, there is an equal and opposite moral theory that says that action is not moral.


Maybe instead of just any other moral theory, it would have to be a 'significant moral theory' by some metric (like popularity?). But that has its flaws too.


I think I may check out that book and sequence to get a feel for what's already been thought about on this subject.

Replies from: Azure
comment by Azure · 2022-05-30T16:54:44.721Z · EA(p) · GW(p)

I think the idea is to assign credences to plausible theories, where plausible is taken to mean some subset of the following:

  • Has been argued for in good faith by professional philosophers
  • Has relevant and well-reasoned arguments in favour of it
  • Accords at least partially with moral intuitions
  • Is consistent, parsimonious, precise, not metaphysically untoward, etc. (the usual desiderata for explanations/theories)
  • Concerns the usual domain of moral theories (values, agents, decisions, etc.)

An equivalent way to proceed is to consider all possible theories, but assign the (completely) implausible ones a credence of 0, or sufficiently close to it.
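The credence-assignment idea above is often operationalised as "maximize expected choiceworthiness" (discussed in MacAskill, Bykvist and Ord's Moral Uncertainty). A minimal sketch, assuming intertheoretic comparability of scores; the theory names, credences, and choiceworthiness values below are hypothetical illustrations:

```python
# Hypothetical sketch: weight each theory's verdict by your credence in it,
# then pick the action with the highest credence-weighted score. Assumes
# scores are comparable across theories, which is itself contested.

credences = {"utilitarianism": 0.5, "deontology": 0.3, "virtue_ethics": 0.2}

choiceworthiness = {  # how good each action is, according to each theory
    "donate":     {"utilitarianism": 0.9, "deontology": 0.6, "virtue_ethics": 0.8},
    "do_nothing": {"utilitarianism": 0.1, "deontology": 0.5, "virtue_ethics": 0.3},
}

def expected_choiceworthiness(action):
    """Credence-weighted average of each theory's score for the action."""
    return sum(credences[t] * choiceworthiness[action][t] for t in credences)

best = max(choiceworthiness, key=expected_choiceworthiness)
print(best)  # donate
```

Implausible theories are handled automatically: giving a theory credence 0 removes its influence on the weighted sum, matching the "consider all theories but zero out the implausible ones" framing above.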

answer by ofer · 2022-05-30T16:55:39.876Z · EA(p) · GW(p)

Probably something like striving for a Long Reflection [? · GW] process. (Due to complex cluelessness [? · GW] more generally, not just moral uncertainty.)

comment by Sharmake · 2022-05-31T17:45:34.503Z · EA(p) · GW(p)

The real issue is that this requires unrealistic levels of coordination and assumes that moral objectivism is true. While that is an operating assumption needed to do anything in EA, that doesn't mean it's true.

5 comments

Comments sorted by top scores.