Common ground for longtermists

post by Tobias_Baumann · 2020-07-29T10:26:50.727Z · score: 50 (24 votes) · EA · GW · 7 comments

Contents

  Improving values
  Compromise rather than conflict
  Foresight and prudence
  Improving our political system
  Conclusion

(Crossposted from the Center for Reducing Suffering.)

Many in the longtermist [EA · GW] effective altruism community focus on achieving a flourishing future for humanity by reducing the risk of early extinction or civilisational collapse (existential risks). Others, inspired by suffering-focused ethics, prioritise the prevention of worst-case outcomes with astronomical amounts of suffering. This has sometimes led to tensions between effective altruists, especially over the question of how valuable pure extinction risk reduction is.[1]

Despite these differences, longtermist EAs still have a lot in common. This common ground lies in our shared interest in improving the long-term future (assuming civilisation does not go extinct).[2] In this post, I'll argue that we should focus (more) on this common ground rather than emphasising our differences.

This is to counteract the common tendency to discuss only points of disagreement, and thereby lose sight of possible win-wins. In-fighting is a common failure mode of movements: it is all too human to let differences divide us and pull us into tribal dynamics.

Of course, others have already made the case for cooperation with other value systems (see e.g. 1, 2) and discussed the idea of focusing on improving the long-term future conditional on non-extinction (see e.g. 1 [LW · GW], 2 [EA · GW]). The contribution of this post is to give a (non-exhaustive) overview of priority areas that (almost) all longtermists can agree on.

Improving values

Long-term outcomes are in large part determined by the values that future actors will hold. Therefore, better values prima facie translate into better futures.[3] While there isn’t full agreement on what counts as “better”, and I can’t speak for everyone, I still think that longtermist EAs can largely agree on the following key points:

While one can take these points for granted when immersed in the EA bubble, they represent a lot of common ground, considering how uncommon such views are in wider society. Efforts to convince more people to broadly share this outlook seem valuable for all longtermists. (I'm bracketing questions of tractability and other issues around moral advocacy - see e.g. here for more details.)

Compromise rather than conflict

The emergence of destructive large-scale conflicts, such as (but not limited to) great power wars [? · GW], is a serious danger from any plausible longtermist perspective. Conflict is a key risk factor for s-risks, but also increases the risk of extinction or civilisational collapse, and would generally lead to worse long-term outcomes.

Longtermists therefore have a shared interest in avoiding severe conflicts, and more broadly in improving our ability to solve coordination problems. We would like to move towards a future that fosters cooperation or compromise between competing actors (whether on the level of individuals, nations, or other entities). If this is successful, it will be possible to achieve win-wins, especially with advanced future technology; for instance, cultured meat would allow us to avoid animal suffering without having to change dietary habits.

Foresight and prudence

Another shared goal of longtermist EAs is for careful moral reflection to guide the future to the greatest extent possible. That is, we would like to collectively deliberate (cf. differential intellectual progress) on what human civilisation should do, rather than letting blind economic forces or Darwinian competition rule the day.

In particular, we would like to carefully examine the risks associated with powerful future technologies and to take precautionary measures to prevent any such risks - rather than rushing to develop any feasible technology as fast as possible. A prime example is work on the safety and governance of transformative artificial intelligence. Another example may be technologies that enable (imprudent) space colonisation, which, according to some, could increase extinction risks and s-risks.

To influence the future for the better, we also need to better understand which scenarios are plausible - especially which AI scenarios - and how we can have a lasting, positive impact on the trajectory of human civilisation (see e.g. 1, 2). Longtermists therefore have another common goal in research on cause prioritisation and futurism.

Improving our political system

Another example of common ground is to ensure that our political system is working as well as possible, and to avoid harmful political and social dynamics. This is clearly valuable from both a suffering-focused and an “upside-focused” perspective, although it is not clear how tractable such efforts are. (For more details on possible interventions, see here.)

For instance, a plausible worry is that harmful individuals and ideologies will become dominant, resulting in a permanent lock-in of a totalitarian power structure. Historical totalitarian regimes were temporary and localised, but a stable global dictatorship may become possible in the future.

This is particularly worrisome in combination with malevolent personality traits in leaders [EA · GW] (although those can also cause significant harm in non-totalitarian contexts). Efforts to reduce malevolence or prevent a lock-in of a totalitarian regime therefore also seem valuable from many perspectives.

Conclusion

There are significant differences between those who primarily want to reduce suffering and those who primarily want a flourishing future for humanity. Nevertheless, I think there is a lot of common ground in terms of the shared goal of improving the long-term future. While I do not want to discourage thoughtful discussion of the remaining points of disagreement, I think we should be aware of this common ground, and focus on working towards a future that is good from many moral perspectives.


  1. Actual efforts to avert extinction (e.g., preventing nuclear war or biosecurity) may have effects beyond preventing extinction (e.g., they might improve global political stability), which are plausibly also valuable from a suffering-focused perspective. Reducing extinction risk can also be positive even from a purely suffering-focused perspective if we think space will counterfactually be colonised by an alien civilisation with worse values than humans.
  2. However, preventing extinction is also a shared interest of many value systems - just not necessarily of (all) suffering-focused views, which is the subject of this post. So I do not mean to imply that efforts to avert extinction are in any way “uncooperative”. (One may also hold a pluralistic or non-consequentialist view that values preserving humanity while still giving foremost priority to suffering reduction.)
  3. Of course, values are not the only relevant factor. For instance, the degree of rationality or intelligence of actors and technological / physical / economic constraints also matter.

7 comments

Comments sorted by top scores.

comment by MichaelA · 2020-07-30T01:24:59.678Z · score: 7 (2 votes) · EA(p) · GW(p)

I think this post makes important points, and makes them well. If I were to distill my own thoughts on this sort of topic into just two key points, it'd be:

  • People with and without suffering-focused ethics will agree on many aspects of how the long-term future should be. In particular, this is because many existential catastrophes will also be suffering catastrophes, and vice versa. (See also Venn diagrams of existential, global, and suffering catastrophes [EA · GW].)
    • E.g., "a permanent lock-in of a totalitarian power structure" sounds awful to pretty much everyone.
  • People with and without suffering-focused ethics will agree on what to do in the present even more than would be expected from the above point alone. In particular, this is because many actions aimed at changing the long-term future in ways primarily valued by one of those groups of people will also happen to (in expectation) change the long-term future in other ways which the other group values.
    • E.g., improving our values and political system seems like it could both (a) reduce extinction risks and (b) reduce the expected amount of suffering in futures that are overall good from a non-suffering-focused perspective.

(Also, btw, questions and links to resources relevant to many of the topics you mentioned can be found in my recent post Crucial questions for longtermists [EA · GW].)

comment by Tobias_Baumann · 2020-07-30T08:47:20.813Z · score: 3 (2 votes) · EA(p) · GW(p)

Thanks for the comment! I fully agree with your points.

People with and without suffering-focused ethics will agree on what to do in the present even more than would be expected from the above point alone. In particular, this is because many actions aimed at changing the long-term future in ways primarily valued by one of those groups of people will also happen to (in expectation) change the long-term future in other ways, which the other group values.

That's a good point. A key question is how fine-grained our influence over the long-term future is - that is, to what extent are there actions that only benefit specific values? For instance, if we think that there will not be a lock-in or transformative technology soon, it might be that the best lever over the long-term future is to try and nudge society in broadly positive directions, because trying to affect the long-term future is simply too "chaotic" for more specific attempts. (However, overall I think it's unclear if / to what extent that is true.)

comment by sbehmer · 2020-07-30T13:05:52.831Z · score: 6 (4 votes) · EA(p) · GW(p)

Thanks for the post. One question on the background: is there any data (from the EA survey or elsewhere) about the percentage of EAs who lean towards suffering-focused ethics?

comment by Jonas Vollmer · 2020-08-03T08:26:28.276Z · score: 7 (3 votes) · EA(p) · GW(p)

I think David Moss has data on this (can you tag people in EA Forum posts?). I've sent him a PM with a link to this comment as an FYI, though I'm not sure he has time to respond.

comment by MichaelA · 2020-07-30T01:27:30.068Z · score: 3 (2 votes) · EA(p) · GW(p)

focus on working towards a future that is good from many moral perspectives.

I might disagree slightly here, or might frame things differently. I do take moral uncertainty [EA · GW], moral trade, cooperation, etc. quite seriously, and do think that those things push in favour of working towards a future that's good from many moral perspectives. But I think we'd need more detailed analysis to say whether or not a given person, or EA as a whole, should focus on that goal.

It may even be that the best way to cooperate and maximise everyone's values (in expectation) is to take a sort of portfolio approach across different value systems. That is, we might want many people to focus on working towards a future that's excellent from a handful of moral perspectives and either ok or just slightly bad from other moral perspectives, but with this collectively representing a huge range of moral perspectives. This might be better due to specialisation.

E.g., some people might focus primarily on extinction risk reduction and some might focus primarily on fail-safe AI. Perhaps this results in a halving of both sets of risks, and perhaps that seems better to both sets of values than all of those people working on reducing risks of totalitarianism would. (I'm not saying this is the case; I see it merely as a plausible illustrative example.)

Note that this isn't the same as "Just do what seems best to your own values" - it might be that a suffering-focused person works on extinction risk reduction while a non-suffering-focused person works on fail-safe AI, as a sort of moral trade. This arrangement could be best for both of their values if it suits their comparative advantages.

Did you mean "focus on working towards a future that is good from many moral perspectives" to be inclusive of taking that sort of portfolio approach, in which individual people might still focus on doing things that are primarily good based on one (set of) moral perspectives?

comment by Tobias_Baumann · 2020-07-30T08:33:33.174Z · score: 3 (2 votes) · EA(p) · GW(p)

Yeah, I meant it to be inclusive of this "portfolio approach". I agree that specialisation and comparative advantages (and perhaps also sheer motivation) can justify focusing on things that are primarily good based on one (set of) moral perspectives.

comment by MichaelA · 2020-07-30T08:52:37.010Z · score: 5 (3 votes) · EA(p) · GW(p)

In that case, take my comment above as just long-winded agreement!

I think we could probably consider motivation (and thus "fit with one's values") as one component of/factor in comparative advantage, because it will tend to make a person better at something, likely to work harder at it, less likely to burn out, etc. Though motivation could sometimes be outweighed by other components of/factors in comparative advantage (e.g., a person's current skills, credentials, and networks).