Common ground for longtermists
Post by Tobias_Baumann · 2020-07-29T10:26:50.727Z · EA · GW · 8 comments
(Crossposted from the Center for Reducing Suffering.)
Many in the longtermist effective altruism community focus on achieving a flourishing future for humanity by reducing the risk of early extinction or civilisational collapse (existential risks). Others, inspired by suffering-focused ethics, prioritise the prevention of worst-case outcomes with astronomical amounts of suffering. This has sometimes led to tensions between effective altruists, especially over the question of how valuable pure extinction risk reduction is.
Despite these differences, longtermist EAs still have a lot in common. This common ground lies in our shared interest in improving the long-term future (assuming civilisation does not go extinct). In this post, I'll argue that we should focus (more) on this common ground rather than emphasising our differences.
This is meant to counteract the common tendency to discuss only points of disagreement, and thereby lose sight of possible win-wins. In-fighting is a frequent failure mode of movements: it is all too human to let differences divide us and pull us into tribal dynamics.
Of course, others have already made the case for cooperation with other value systems (see e.g. 1, 2) and discussed the idea of focusing on improving the long-term future conditional on non-extinction (see e.g. 1, 2). The contribution of this post is to give a (non-exhaustive) overview of priority areas that (almost) all longtermists can agree on.
Improving values
Long-term outcomes are in large part determined by the values that future actors will hold. Therefore, better values prima facie translate into better futures. While there isn’t full agreement on what counts as “better”, and I can’t speak for everyone, I still think that longtermist EAs can largely agree on the following key points:
- We should strive to be impartially altruistic.
- The well-being of all sentient beings matters. This includes non-human animals and possibly even non-biological beings (although there is disagreement about whether such entities will be sentient).
- We should consider how our actions impact not just those existing now, but also those existing in the future. In particular, many would endorse the moral view that future individuals matter just as much as present ones.
While these points may seem obvious from within the EA bubble, they constitute a lot of common ground, considering how uncommon such views are in wider society. Efforts to convince more people to broadly share this outlook therefore seem valuable for all longtermists. (I’m bracketing questions about the tractability of moral advocacy - see e.g. here for more details.)
Compromise rather than conflict
The emergence of destructive large-scale conflicts, such as (but not limited to) great power wars, is a serious danger from any plausible longtermist perspective. Conflict is a key risk factor for s-risks, but also increases the risk of extinction or civilisational collapse, and would generally lead to worse long-term outcomes.
Longtermists therefore have a shared interest in avoiding severe conflicts, and more broadly in improving our ability to solve coordination problems. We would like to move towards a future that fosters cooperation or compromise between competing actors (whether on the level of individuals, nations, or other entities). If this is successful, it will be possible to achieve win-wins, especially with advanced future technology; for instance, cultured meat would allow us to avoid animal suffering without having to change dietary habits.
Foresight and prudence
Longtermist EAs also share the goal of having careful moral reflection guide the future to the greatest extent possible. That is, we would like to collectively deliberate (cf. differential intellectual progress) on what human civilisation should do, rather than letting blind economic forces or Darwinian competition rule the day.
In particular, we would like to carefully examine the risks associated with powerful future technologies and to take precautionary measures to prevent any such risks - rather than rushing to develop any feasible technology as fast as possible. A prime example is work on the safety and governance of transformative artificial intelligence. Another example may be technologies that enable (imprudent) space colonisation, which, according to some, could increase extinction risks and s-risks.
To be able to influence the future for the better, we also need to better understand which scenarios are plausible - especially in terms of AI scenarios - and how we can have a lasting and positive impact on the trajectory of human civilisation (see e.g. 1, 2). Longtermists therefore have another common goal in research on cause prioritisation and futurism.
Improving our political system
Another example of common ground is to ensure that our political system is working as well as possible, and to avoid harmful political and social dynamics. This is clearly valuable from both a suffering-focused and an “upside-focused” perspective, although it is not clear how tractable such efforts are. (For more details on possible interventions, see here.)
For instance, a plausible worry is that harmful individuals and ideologies will become dominant, resulting in a permanent lock-in of a totalitarian power structure. Historical totalitarian regimes were temporary and localised, but a stable global dictatorship may become possible in the future.
This is particularly worrisome in combination with malevolent personality traits in leaders (although those can also cause significant harm in non-totalitarian contexts). Efforts to reduce malevolence or prevent a lock-in of a totalitarian regime therefore also seem valuable from many perspectives.
Conclusion
There are significant differences between those who primarily want to reduce suffering and those who primarily want a flourishing future for humanity. Nevertheless, I think there is a lot of common ground in terms of the shared goal of improving the long-term future. While I do not want to discourage thoughtful discussion of the remaining points of disagreement, I think we should be aware of this common ground, and focus on working towards a future that is good from many moral perspectives.
1. Actual efforts to avert extinction (e.g., preventing nuclear war or biosecurity) may have effects beyond preventing extinction (e.g., they might improve global political stability), which are plausibly also valuable from a suffering-focused perspective. Reducing extinction risk can also be positive even from a purely suffering-focused perspective if we think space will counterfactually be colonised by an alien civilisation with worse values than humans.
2. However, preventing extinction is also a shared interest of many value systems - just not necessarily of (all) suffering-focused views, which is the subject of this post. So I do not mean to imply that efforts to avert extinction are in any way “uncooperative”. (One may also hold a pluralistic or non-consequentialist view that values preserving humanity while still giving foremost priority to suffering reduction.)
3. Of course, values are not the only relevant factor. For instance, the degree of rationality or intelligence of actors and technological / physical / economic constraints also matter.