Competitive Ethics

post by mwcvitkovic · 2020-11-24T01:00:31.945Z · EA · GW · 7 comments


      If we're going to think hard about what's right, shouldn't we also think hard about what wins?
  How ethics compete
  Case studies
    Natalism and heritability
    Nihilism and motivation
    Ethnonationalism vs diversity
    AI alignment
  Extensions of competitive ethics

If antinatalists are right that having children is wrong, does it matter once the antinatalists go extinct?

If you build an "ethical" AI that keeps getting deleted by its "unethical" AI peers, have you accomplished your mission of building ethical AI?

Is religious tolerance a fatal flaw in liberal democracy if fecund, illiberal religions can always become a majority?

If we're going to think hard about what's right, shouldn't we also think hard about what wins?

Competitive ethics (I'd be happy to find a better term) is the study of ethics as strategies or phenotypes competing for mindshare rather than as statements about right and wrong.

Competitive ethics is to morality as FiveThirtyEight is to politics. FiveThirtyEight doesn't tell us which candidate's positions are correct, and we don't expect them to. We expect them to tell us who will win.

Unlike applied ethics ("How should I act in this specific situation?"), normative ethics ("What criteria should I use to do applied ethics?"), or meta-ethics ("How should I think about normative ethics?"), competitive ethics is amoral. Not immoral, amoral: it's not concerned with right and wrong, just with predictions and understanding.

No matter your normative ethical beliefs or your meta-ethics, competitive ethics matters to you in practical terms. Moral statements may or may not be true or meaningful, but people definitely act according to them.

How ethics compete

There are many lines of thinking relevant to this question, but I can't find any that address it directly.

The most relevant are cultural selection theory, memetics, and neoevolutionism, though these are all too tied up with biological evolutionary theory. ("Ethics" as I'm using the term encompasses things like religion, culture, norms, and values --- anything that guides people in how they say "yuck" or "yum.") The subfields of evolutionary ethics and game-theoretic ethics stick to normative or occasionally meta-ethical questions, and don't seem to have studied what happens when ethical systems go toe-to-toe.

An important distinction in thinking about how ethics compete is between the ethics people publicly espouse, the ethics they consciously believe, and the "revealed ethics" of what they actually do. All three are related, and all three can be distinct. Preference falsification, social contagion theory, and behavioral economics are the relevant disciplines here. Professed ethics are the fastest to change, a la preference falsification. It's an open question whether believed or revealed ethics are more mutable.

Another important issue is the fuzzy line between biologically determined preferences and ethics. The former clearly influence the latter in a single individual, and the latter influence the former across generations. Plus, the more technology lets us intervene on biology, the fuzzier the line gets. Wibren van der Burg's Dynamic Ethics is the closest work to addressing this, though it's a work of normative ethics, e.g. when he says, "Our dynamic society requires a dynamic morality and thus a form of ethical reflection which can be responsive to change." A few others have touched this question, but not many.

Case studies

Natalism and heritability

The most straightforward way ethical systems compete is by the degree of natalism and heritability they entail: how many offspring do they lead to in their believers, and how effectively are they passed from parents to children?

The best recent work on this topic is from demographers like Eric Kaufmann. In his book Shall the Religious Inherit the Earth?, Kaufmann lays out the remarkable growth trends of religious fundamentalist groups in the modern world. Fundamentalist religious groups with ethics encouraging high fertility and strict adherence to the religion are contrasted with modern Western cultures with ethics that deride fertility (e.g. certain environmentalist ethics) and encourage freedom of thought.

Most fundamentalist groups rely on the generosity of the surrounding society to flourish as they do (e.g. the ultra-Orthodox in Israel, who generally don't have jobs), so it's not clear when these trends will hit a breaking point. Nevertheless, they raise questions about the viability of non-natalist ethics. From my probably-biased perspective, I suspect ethics of free thought are more attractive than fundamentalist ethics; I hear more about people leaving fundamentalist religions than joining them. But ethics of free thought combined with low fertility may not be sustainable.
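To make the mechanism concrete, here is a minimal toy simulation of two ethics competing purely through fertility and parent-to-child retention. All group names, rates, and numbers below are hypothetical illustrations, not drawn from Kaufmann's data:

```python
# Toy two-group model of ethics competing via fertility and
# parent-to-child transmission. Every number here is hypothetical.

def simulate(pops, fertility, retention, generations=10):
    """Non-overlapping generations: each member has fertility[g]
    children; a fraction retention[g] of those children keep the
    parents' ethic, and the rest defect to the other group (a crude
    stand-in for conversion). Works for exactly two groups."""
    names = list(pops)
    assert len(names) == 2
    for _ in range(generations):
        new = dict.fromkeys(names, 0.0)
        for i, name in enumerate(names):
            other = names[1 - i]
            children = pops[name] * fertility[name]
            new[name] += children * retention[name]
            new[other] += children * (1 - retention[name])
        pops = new
    return pops

# A high-fertility, high-retention ethic starting as a small minority
# vs. a sub-replacement-fertility ethic with slightly lower retention.
final = simulate(
    pops={"fundamentalist": 100.0, "free_thought": 900.0},
    fertility={"fundamentalist": 1.5, "free_thought": 0.9},
    retention={"fundamentalist": 0.85, "free_thought": 0.8},
)
share = final["fundamentalist"] / sum(final.values())
print(f"fundamentalist share after 10 generations: {share:.0%}")
```

Even starting at a 9:1 disadvantage, the compounding of fertility times retention lets the minority ethic overtake the majority within a few generations. The point is only that these two parameters dominate long-run share, not that the specific numbers are realistic.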

Nihilism and motivation

I know of no work studying the comparative effects of ethical belief systems on motivation. In fact, I don't know whether it's even demonstrable that motivated individuals are more successful. But assuming they are, and assuming ethics like moral nihilism demotivate people (or at least fail to motivate them), the long-term viability of such ethical systems is questionable. Going further, it may be that selfish ethical systems (e.g. Ayn Rand's, Gordon Gekko's) are more associated with motivation and success than egalitarian ethical systems.

Causality and correlation are hard to tease apart here, but doing so isn't necessary: an ethical system can win either by granting success to its holders or by being adopted by successful individuals.

Ethnonationalism vs diversity

Crudely put: it feels better to be in an exclusive group, but inclusive groups are bigger. Which matters more?

AI alignment

Eliezer Yudkowsky is purported to have said "You are personally responsible for becoming more ethical than the society you grew up in." This quotation is interesting in that (1) it's a normative claim about normative claims, and (2) it assumes that ethics has a direction.

While I like the sentiment, it's reminiscent of laypeople saying things like "humans are more evolved than snails," which makes biologists cringe because evolution has no partial ordering by which one species can be more or less evolved than another. From the competitive ethics perspective, neither do ethics.

Most people who work in AI alignment treat human values [LW · GW] the way engineers treat nature: there is an underlying true human ethics, and while we can't articulate it, we can still try to hew to it. But if you build an “ethical” AI that keeps getting deleted by its “unethical” AI peers, have you accomplished your mission of building ethical AI?

I'm not able to join the AI alignment discussion until AI alignment researchers start putting competitive ethical questions more front and center.

Extensions of competitive ethics

Competitive ethics on its own is amoral. But it can be a building block for other ideas.

Consider a meta-ethics --- call it ethical consistentism maybe --- where the probability of a moral statement being correct is proportional to its survival. To be clear: this isn't a creepy social Darwinism or might-makes-right idea since it's a meta-ethics, not a normative claim. Or one could propose a weaker version of this: an ethical system shouldn't directly or indirectly lead to itself not being believed. This is analogous to logical consistency in mathematics. Of course, if we're going to treat ethical systems as competitive phenotypes, it seems only fair to treat meta-ethical systems (ethical consistentism included) as phenotypes too. So the recursion begins...

Competitive ethics is also, in a sense, nihilism 2.0. Of course right and wrong are ridiculous concepts; so what? That's the start of the conversation, not the end.

Despite searching quite a bit, I can't find any content on the EA Forum related to these ideas. But I'm sure there is some. Please let me know if you know of any!


Comments sorted by top scores.

comment by Cienna · 2020-11-28T04:24:16.866Z · EA(p) · GW(p)

Thanks for the fascinating post! This inspired me to arrange a discussion with some philosophical Meetup friends who have had similar thoughts in this direction. Anyone interested is welcome to join the conversation! It will be Sunday, November 29th at 7 PM ET.

Call link:

Meetup description:

comment by Cienna · 2020-11-30T02:14:55.724Z · EA(p) · GW(p)

The call has now concluded. 8 participants, 2 hours, one great topic. Thank you again, mwcvitkovic [EA · GW]! 

comment by Mason · 2020-11-29T20:14:17.608Z · EA(p) · GW(p)

Interesting post! You've inspired me to reflect a bit on my own internal competitive ethicist (I think to some degree we all have one; mine is named Jeff and he's very not fun at parties):

Despite all the big important topical connections, I found myself thinking back to being in high school, a freshly minted vegan, trying to back-of-the-envelope the amount of animal suffering indirectly inflicted by (hypothetically!) choosing to be a Preachy Vegan vs. approaching things with a more laid-back, live-and-let-live attitude.

One way to frame that exercise is to say that I was considering adopting two competing ethics: (1) logging on to sign up for the Meat is Murder mailing list, complete with swag, a list of cool edgy bands to get into, and the warm fuzzy feeling of fighting for a cause, vs. (2) sitting quietly in the corner and finishing my salad. Alternatively, you could say that I had one ethic, reducing animal suffering to the extent that I could, taking into account the dynamics of my social network and my (comically minuscule but theoretically existent) capacity to influence it, and that I was just working out the details. The reality, I think, was somewhere in between.

[Note to readers: For the next 3 paragraphs I'll be spending 3 of my 10 insufferable dork points allotted monthly by the cap-and-trade system.]

For the sake of this reply, I'll define an "ethic" to be a single memetic unit of incentivisation or disincentivisation, e.g., "be disgusted at those who do not love their country", or "be pleased with yourself when you feel you've shown intellectual humility". Using this definition, you can think of an ethic as a distributed controller, in the control theoretic sense, exerting influence on human behavior.

As I see it, functioning communities depend on a fault-tolerant array of often-redundant ethics. The Torah contains 613 commandments. Ethical common sense in the United States is a region-dependent cocktail with ingredients ranging from Judeo-Christianity to American patriotism to liberal universalism to local customs. Shared moral goals ("reduce suffering") act as robust high-level controllers, but may be inefficient tools for navigating day-to-day interactions. Practical ethics ("don't eat shellfish") may be easier to follow and/or spread, but they don't tend to transfer as well to new environments. Mid-level ethics ("conceptualize the existence of something called 'rights', and then respect them") serve as a bridge between ideals and habits.

From this perspective, individual ethics compete, but less for dominance than to occupy one of many complementary niches. Ethics packages, however, especially those from the big global brands, often include some mechanism for vendor lock-in, generally marketed as "antivirus software" on the tin (blasphemy laws, etc.).

A few observations based on this framing:

1) The fitness of an ethic depends on the culture in which it is deployed. (Normatively) good ethics or ethics packages should engage with human impulses to be compassionate, judicious, well respected, and purposeful, but I can have an amoral preference for these impulses to be engaged with without being exploited, and that preference can be culturally reinforced. An "addictive" ethical system can be off-putting in the same way the idea of using "the world's most addictive meditation app" is off-putting... so long as you can spot it.

2) One way for an ethic to "win" is to be high-compatibility, traveling alongside the dominant ethical systems like a remora. For example, the ethic of psychological health: In my city, with a bit of googling, you can find Christian psychologists, non-denominational spiritual psychologists, and secular psychologists all counseling clients towards similar goals. And while the definition of psychological health can vary from clinician to clinician, the concepts of introspection, bad-habit-breaking, setting developmentally appropriate expectations for children, boundary-formation, the cultivation of healthy relationships, and self-actualization are widely shared.

3) The ethic of conscientious objection plays a special role in the marketplace of ethics. The extent to which you can repeal a low-level ethic by appealing to a higher-level ethic should affect the extent to which the stock prices of practical ethics (ew; I'm showering after this) are tied to those of the high-level principles that support them. This can be both good—e.g., leveraging Christian principles for the civil rights movement—and bad—e.g., rejecting the value of liberalism because you're in favor of a law that only seems to get passed in authoritarian countries. Weirdly, you might expect that the more open a group is to polishing up its moral code, the less credible its claims that a given moral idea is "dangerous" will seem, since the project of avoiding hypocrisy makes the fates of all the ethics in the ecosystem more interdependent.

comment by wuschel · 2020-11-24T14:54:28.653Z · EA(p) · GW(p)

Interesting idea, although I fear we might not like what we find...

comment by mwcvitkovic · 2020-11-24T17:43:55.666Z · EA(p) · GW(p)

All the more reason to look? Unless there's an information hazard here.

comment by wuschel · 2020-11-24T14:52:06.722Z · EA(p) · GW(p)

I do find that interesting.  Research projects that come to mind:

Gene-sequence 200 philosophy grad students: 100 consequentialists, 100 deontologists. See if you find any trends.

Then take 100 14-year-olds. Gene-sequence them, and try to predict utilitarian vs. deontologist leanings.

Then confront these 14-year-olds with arguments for and against consequentialism and deontology.

See whether your predictions were significantly better than chance.