Objectives of longtermist policy making

post by Henrik Øberg Myhre, Andreas_Massey, philiphand, Jakob, Sanna Baug Warholm · 2021-02-10T18:26:30.881Z · EA · GW · 7 comments

Contents

  0.0 Introduction
  1.0 Further our understanding of longtermism and adjacent scientific fields
    What does a good society look like?
    How do we create a good society?
  2.0 Shape policy making institutions for future generations
    Develop epistemic capabilities for long-term policy making
    Motivate policymakers to prioritize future generations
    Remove institutional barriers to longtermist policy making
    Proposed mechanisms
      Designated stakeholders
      Information interventions
      Voting mechanisms
      Liability mechanisms
      Reallocation of resources
  3.0 Directly influence the future trajectory of human civilization
    Mitigate catastrophic risk and build resiliency to tail events and unknown unknowns
      Reduce the probability of specific risks
      Improve risk management frameworks
      Increase resilience of critical systems
    Build inclusive progress through long-lasting and well-functioning institutions
      There is still much we don’t know about progress
      General ideas for how to increase progress
      Specific proposals for how to increase inclusive progress
    What about sustainability?
  4.0 Summary

Estimated reading time: 20-30 minutes

We would like to thank the following for their excellent feedback and guidance throughout this article, in no particular order: Tyler M. John, Max Stauffer, Aksel Braanen Sterri, Eirik Mofoss, Samuel Hilton, Konrad Seifert, Tildy Stokes, Erik Aunvåg Matsen and Marcel Grewal Sommerfelt.

0.0 Introduction

This article is co-authored by five members of Effective Altruism Norway as a pilot project to test if we can contribute in a valuable way to the emerging field of longtermism and policy making.

In the article we summarize some of the work that is being done in the emerging field of longtermism, using a new structure to classify the different interventions (see Figure 1: Three objectives of longtermist policy making). Then, for each objective we describe related challenges and potential solutions, and give some examples of current ongoing work.

We hope that the new structure can help improve coordination in this emerging field, and enable improved prioritization of interventions. If this structure resonates well with established experts in the field, we are happy to write up a shorter version of this article that could serve as an introduction to longtermist policy making for non-experts. Already, at 17 pages this article is one fourth of the length of the GPI research agenda, which covers many of the same topics. 

Finally, we have emphasized some aspects of longtermist policy making that we believe have been underemphasized in the effective altruism and longtermism communities in the past. Examples include scenario planning, robust decision making and red teaming, which we describe together with forecasting in section 2.1 as essential epistemic capabilities for long-term governance. These tools are complementary to the forecasting-based epistemic capabilities that the EA/longtermist communities already promote, and we hope they will receive increased attention going forward.

We hope to produce 1-3 further articles on similar topics through 2021, and welcome any experts who have capacity to provide feedback on our work.

--------------------------------------------------------------------

In 2019 William MacAskill proposed a definition of the term longtermism [EA · GW] as the view that those who live at future times matter just as much, morally, as those who live today. There are many reasons to believe that actions today can have a substantial impact on the future. For instance, the economic growth of the past two centuries has lifted billions out of poverty. In addition, the long-term consequences of human-caused climate change could decrease the quality of life of several generations to come. Ours is also one of the first generations with the technological potential to destroy civilization, e.g. through nuclear weapons, thereby eliminating humanity's entire future. This means that actions we take today can improve the course of history for hundreds of generations to come.

Interest in the welfare of future generations precedes MacAskill's definition of longtermism. In 2005 the Future of Humanity Institute was established at Oxford University. In 2009, the Centre for Strategic Futures (CSF) was established by the Singaporean Government as a futures think tank. In 2017 William MacAskill started using the word “longtermism” as a term for the cluster of views involving concern about ensuring the long-run future goes as well as possible. Since then, many have contributed [? · GW] to the development of the philosophical field. The Global Priorities Institute (GPI) in Oxford was established in 2018 with the mission to conduct and promote world-class, foundational academic research on how most effectively to do good. In 2020 GPI published a new research agenda, with one of its two sections dedicated to longtermism. These are just some of several milestones in the short history of longtermism.

If we believe that the future is what matters most and that we can influence it through our policy making, then it follows that the long-run outcomes of enacted policies should be one of the key considerations of the policy making process. However, most political systems do not prioritize long-term planning sufficiently compared to the potential benefits just for existing generations - never mind the moral importance of future generations.

There are examples of different institutions and policy makers that are putting longtermism on the agenda today, but the time frame they consider long-term differs. Time horizons of longtermist organizations that frequently interact with policy makers (e.g. APPG and Alpenglow) are constrained by the norms in the current policy making process. Although academics talking about "longtermism" can look thousands of years ahead, actors seeking to practically influence policy organisations, including ourselves, are typically considering shorter time horizons, e.g. 20-30 years in the future. 

This article will explore three categories of objectives for longtermist policy making and might serve as a guide towards shaping longtermist policy suggestions. These objectives are summarized in figure 1.

Figure 1: Representation of the three objectives longtermist policies should focus on. Objectives 1 and 2 serve as foundations for the more direct objective(s) above them.

At the top of the pyramid is the objective of directly benefiting future generations - i.e. ensuring that there is a future for human civilization, and that it is as positive as possible. This objective builds on the condition that policy making institutions are able to develop such policies, which brings us to the second part of the pyramid. This part describes three essential conditions for successful behaviour change interventions: capability, motivation and opportunity, reflecting the COM-B system for institutional reform (Michie et al. 2011). The two upper pieces of the pyramid both rest upon the fundamental part, which concerns the objective of understanding longtermism. Interventions focused on this objective have a more indirect impact mechanism.

A policy intervention should optimize for one or several of these objectives in order to qualify as a "longtermist policy proposal".

Note that the proposals in figure 1 are synergistic - if we improve our performance on one of the objectives, it may become easier to also improve on others. In general, objective one works as an enabler of objective two, and both objective one and two are enablers of the third objective. For instance, if a policy making institution is able to agree on a set of KPIs to measure the long-term quality of a society (as a partial solution to objective 1 in figure 1), then they can set up a forecasting infrastructure for these KPIs (developing capabilities to govern for the long term, as described in objective 2). With this forecasting infrastructure in place, long-term effects of proposed policies will be more visible to the electorate, creating stronger incentives for politicians to optimize for long-term outcomes (solving another part of objective 2; motivations). This will for instance make it easier to prioritize catastrophic risk mitigation (enabling investment in efforts focused on objective 3), etc.

Several of the ideas in each category of objectives will be familiar to experienced effective altruists due to the natural synergies between longtermism and effective altruism. However, even experienced effective altruists may not have encountered all of the topics in this article.

While the objectives are relevant for policy makers in a broad range of governance models and in countries with different levels of democratic development, the examples in this article are primarily focused on policy making on national levels in industrialized, democratic countries. 

1.0 Further our understanding of longtermism and adjacent scientific fields

In the emerging field of strategic considerations related to longtermist policy making, there is a need for agreement on the meaning of the term. The bottom piece of the pyramid in figure 1 concerns our understanding of longtermism. William MacAskill proposes [EA · GW] three premises that make up what he calls the minimum definition of longtermism: (1) those who live at future times matter just as much, morally, as those who live today; (2) society currently privileges those who live today above those who live in the future; and (3) we should take action to rectify that, and help ensure the long-run future goes well. Based on these premises, MacAskill and others have proposed political measures like future assemblies or a Ministry of the Future (see section 2.4 for further elaboration). Organizations like the Global Priorities Institute (GPI) and the Future of Humanity Institute (FHI) are currently working on establishing longtermism as a scientific field of inquiry. 

1.1 What does a good society look like?

Two important constraints on our current ability to positively influence the future are (i) uncertainty about what a good society looks like, i.e. moral cluelessness, and (ii) uncertainty about how best to create one, i.e. strategic cluelessness. Different scientific and philosophical fields have attempted to investigate the first question in different ways. One example of moral cluelessness is the repugnant conclusion: the argument that, for any large population with very high welfare, there is some much larger population whose lives are barely worth living that a total view of welfare would rank as better. However, we aren't completely clueless: here are some metrics that are commonly used to describe more or less positive aspects of a society. 

Economists frequently use KPIs (Key Performance Indicators) to try to measure different facets of a successful society. GDP and GDP growth are perhaps the most common, while metrics like the Gini coefficient, average lifespan, GHG emissions, and the Human Development Index are used to describe inequality, health, sustainability and economic development, respectively.

While none of these metrics alone covers all that matters in a society, a combination of such KPIs may capture most of the aspects we care about. The “Portugal we want” project is an example of a collaborative effort to converge on a set of KPIs to use in governance for the long term. Other examples similarly attempt to chart the course for the future of a country, e.g. the “Wales we want” project and the Japanese work on “Future Design”. 

Another, more academically oriented example of a project that attempts to compile partial descriptions of a good society into a more complete description is the GPI research agenda. It lists several other partial approaches to measuring broader social welfare through a set of KPIs, including informal discussions by Bostrom and Shulman.
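To make the composite-KPI idea concrete, here is a minimal sketch of how a handful of indicators could be normalized and combined into a single index. All indicator values, directions, and weights below are invented for illustration, not taken from any of the projects mentioned above:

```python
# Toy composite index over societal KPIs.
# All raw values, "worst/best" anchors, and weights are illustrative assumptions.

def normalize(value, worst, best):
    """Scale a raw indicator to [0, 1], where 1 is the 'best' end."""
    return (value - worst) / (best - worst)

# indicator: (raw value, worst plausible, best plausible, weight)
indicators = {
    "life_expectancy": (82.0, 50.0, 90.0, 0.4),
    "gini":            (0.27, 0.60, 0.20, 0.3),  # lower is better, so worst > best
    "hdi":             (0.95, 0.40, 1.00, 0.3),
}

def composite_index(indicators):
    total_weight = sum(w for (_, _, _, w) in indicators.values())
    return sum(
        w * normalize(v, worst, best)
        for (v, worst, best, w) in indicators.values()
    ) / total_weight

print(round(composite_index(indicators), 3))
```

The interesting design choice is handled in the `gini` row: indicators where lower is better are normalized by swapping the anchors, so every component points in the same direction before weighting.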

1.2 How do we create a good society?

When planning for a good society in the future, we need to prioritize. This can be very important for the long-run trajectory of society, as some efforts to improve society are much more effective than others. Cause prioritization is a philosophical field concerned with evaluating and comparing different cause areas by their effectiveness. Some of the organizations working on cause prioritization are 80,000 Hours, the Open Philanthropy Project, and the Center for Reducing Suffering. The latter proposes that starting out with a cause-neutral attitude to longtermist policy making is crucial to succeeding at cause prioritization. To achieve this, effective institutions and organizations need to: 

  1. Build a broad movement for longtermist policy change so that these efforts don’t get stuck in a specific cause area.
  2. Explicitly work on prioritization research so that cause areas can be accurately compared, as well as induce attitude change in political and societal institutions (see the middle piece of the pyramid: shape policy making institutions for future generations).

One important concept in cause prioritization is the notion of crucial considerations - strategic questions that can significantly change the optimal strategy once they are taken into account. The crucial considerations of longtermist policy making include, but are not limited to, our evaluation of the hinge of history hypothesis [EA · GW] (HoH), as well as other considerations discussed in the Global Priorities Institute’s new research agenda. The HoH holds that this century, or perhaps especially the coming decades, is the most influential period in all of human history. Our evaluation of HoH’s likelihood is therefore one of the determinants of how we should influence policy makers and how we distribute the resources we have available today. If we believe that the coming century is merely as influential as a typical century, then we - like patient longtermists [EA · GW] - will probably spend less of our philanthropic resources now, and save more to spend later. However, if we believe that this period is the most “hingey” period of all of human history - e.g. because our current values could be locked in for generations to come (i.e. the value lock-in view), or because we are living in a time of perils - then we should rather spend more of our philanthropic resources now to ensure the most impact. These considerations apply to our spending of any type of philanthropic capital - money, political influence, or other resources of value. If we don’t live at the HoH, it seems most logical to spend the next decades building political influence, rather than spending political capital to influence specific decisions in the near future. 
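The spend-now-versus-later tradeoff above can be sketched as a toy model: capital saved compounds over time, but the influence obtainable per unit of capital may be lower at a less "hingey" time. The growth rate and influence multipliers below are invented purely for illustration:

```python
# Toy model of the "spend now vs. spend later" tradeoff.
# Growth rate and influence multipliers are illustrative assumptions.

def impact_spend_now(capital, influence_now):
    return capital * influence_now

def impact_spend_later(capital, growth_rate, years, influence_later):
    # Capital compounds, but influence per unit may differ at the later time.
    return capital * (1 + growth_rate) ** years * influence_later

capital = 1.0

# If this century is an ordinary one, later periods are about as influential,
# so compounding favours the patient longtermist:
now = impact_spend_now(capital, influence_now=1.0)
later = impact_spend_later(capital, growth_rate=0.05, years=50, influence_later=1.0)
print(later > now)  # → True

# If now is far more "hingey" (e.g. a value lock-in period), spending now can win:
hingey_now = impact_spend_now(capital, influence_now=20.0)
print(hingey_now > later)  # → True
```

The model is deliberately crude: everything hinges on the ratio of "influence now" to "influence later", which is exactly what the HoH debate is about.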

2.0 Shape policy making institutions for future generations

So far, we have considered the problem of longtermism on a general level; in this part we describe different measures and obstacles connected to developing and motivating longtermist policy making in institutions. This section reflects the second piece of the pyramid in figure 1, and further elaborates on the COM-B system for successful behavioural change interventions. We will first consider epistemic determinants and how we can develop epistemic capabilities like forecasting and scenario planning, as well as red teaming and robust decision making. Then we will look at how we can motivate policy makers to prioritize future generations. Finally, we will consider important institutional barriers to such policy making, and how to remove them in order to create opportunities for longtermist policy making. This section is largely a summary of the work by John & MacAskill, so readers who've studied their work can skip it.

2.1 Develop epistemic capabilities for long-term policy making

Lack of knowledge about the future is likely one of the main sources of political short-termism; such knowledge gaps are referred to as epistemic determinants in Longtermist Institutional Reform by Tyler John and William MacAskill. These determinants lead to discounting of the value of long-term beneficial policies, making them less likely to be enacted. Some discounting is rational simply because there is a lot of uncertainty about the benefits of long-term policies. Irrational discounting is another source of short-termism, caused by cognitive biases and attentional asymmetries between the future and the nearby past. Vividness effects can make people react more strongly to vivid sources of information like news, videos and graphics than to scientific research. People are also often over-confident in their ability to control and eliminate risks under uncertainty. See Thinking, Fast and Slow (2011) by Daniel Kahneman for further details. Although these shortcomings limit politicians' effectiveness, philosopher Christian Tarsney has also cast doubt on the possibility of predicting the future at all.

Politicians work under constraints of time and influence, which can lead to attentional asymmetries: when determining the effectiveness of policies, they tend to focus too much on recent events rather than on future projections. The result of this asymmetry can be that politicians work with less accurate predictions. Furthermore, because of these constraints, politicians are forced to rely on mental shortcuts, leaving them vulnerable to the planning fallacy, availability bias and the law of small numbers when tackling current and future issues. However, we have also seen that the long term can be prioritized politically, as with the Paris Agreement, carbon taxes (e.g. in Norway in 1991), or the Danish council on climate change.

To deal with these problems, politicians need effective means of forecasting with different sources - e.g. using teams of superforecasters and domain experts, or market-based approaches like prediction markets - to obtain high-quality information about the future. This needs to be implemented to overcome the information barrier (knowledge about the future) and the attention barrier (making changes in future outcomes more salient) so that politicians can make informed decisions about the future. 

To maximize the utility gained from this information, decision makers also need to invest in institutions and organizations that can develop epistemic capabilities beyond forecasting, e.g. scenario planning, robust decision making, and red teaming. In scenario planning exercises, policy makers define a set of scenarios that jointly cover the possible futures likely enough to be considered, that differ along factors of high uncertainty, and that have significant implications for the optimal policy choice. Policies are then evaluated for how they perform across the range of scenarios. Depending on their risk preferences, policy makers should choose a robust policy that both has a high expected value across scenarios and fails as gracefully as possible in the worst ones. Scenario planning can also be supplemented with robust decision making, which especially emphasizes strategies that do well in worst-case scenarios. Additionally, red teaming can provide a solid method of stress-testing plans for the future by taking an adversarial approach. 
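The scenario-evaluation step described above can be sketched as a small payoff matrix. The policies, scenarios, probabilities and payoffs below are all invented for illustration; the point is only the mechanics of comparing expected value against worst-case performance:

```python
# Sketch of scenario-based policy evaluation with a robustness criterion.
# Policies, scenario probabilities and payoffs are illustrative assumptions.

payoffs = {
    # policy: payoff in each scenario (stable, disruption, catastrophe)
    "status_quo":    [10, 2, -50],
    "hedged_policy": [7, 5, -5],
    "aggressive":    [15, -10, -80],
}
probabilities = [0.6, 0.3, 0.1]  # assumed likelihood of each scenario

def expected_value(payoff_row):
    return sum(p * v for p, v in zip(probabilities, payoff_row))

def worst_case(payoff_row):
    return min(payoff_row)

for policy, row in payoffs.items():
    print(policy, round(expected_value(row), 2), worst_case(row))

# A robust choice fails as gracefully as possible, breaking ties on expectation:
robust = max(payoffs, key=lambda p: (worst_case(payoffs[p]), expected_value(payoffs[p])))
print(robust)  # → hedged_policy
```

Note how the criterion differs from plain expected-value maximization: the "aggressive" policy looks best in the stable scenario, but its worst case rules it out under the robustness lens, which is the essence of the robust-decision-making emphasis mentioned above.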

Several researchers within the EA movement are working on these issues, e.g. Neil Dullaghan, Michael MacKenzie, and Eva Vivalt. Dullaghan proposes [EA · GW] deliberation as a means of reaching better cooperation across party lines and long-term thinking. He also claims that there may be a link between deliberation and long-term thinking, specifically in areas like climate change and the environment. Furthermore, MacKenzie argues that deliberation can help us overcome our cognitive biases, for instance by appealing to the idea of “saving future children” to encourage longtermist thinking. To gather such findings within forecasting, Vivalt, a researcher at the Australian National University and University of Toronto, proposes [EA · GW] a platform to coordinate research forecasting and track each researcher's forecasting ability. These are only some examples among the many researchers working to improve institutional decision making. Indeed, it is one of the top career paths recommended by 80,000 Hours, as “Improving the quality of decision-making in important institutions could improve our ability to solve almost all other problems”.

2.2 Motivate policymakers to prioritize future generations

Even if there are policymakers who have the necessary capabilities to improve the welfare of future generations, there are still several factors that discourage them from doing so. These factors are referred to as motivational determinants in Longtermist Institutional Reform by Tyler John and William MacAskill, on which the following three sections are heavily based.

People tend to have a high time preference for the present, leading to greater discounting of the value of long-term benefits, which makes policies more short-termist. This problem affects both voters and people in power, although its severity is unclear.
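To see how strongly time preference works against long-term policy, consider a toy present-value calculation. The discount rates below are illustrative, not empirical estimates of anyone's actual time preference:

```python
# How pure time preference discounts benefits accruing a century from now.
# Discount rates are illustrative assumptions.

def present_value(benefit, annual_rate, years):
    return benefit / (1 + annual_rate) ** years

benefit = 1_000_000  # benefit realized 100 years from now, arbitrary units
for rate in (0.0, 0.01, 0.03, 0.05):
    pv = present_value(benefit, rate, years=100)
    print(f"rate {rate:.0%}: present value ≈ {pv:,.0f}")
# Even at 5% per year, a century-out benefit is valued at under 1% of its
# face value - which is why discounting pushes policy toward the short term.
```

The compounding is the key point: small-looking annual rates translate into enormous devaluation over policy-relevant timescales, so any irrational component in the discount rate heavily penalizes future generations.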

Self-interest and relational favouritism are another source of short-termism, as many people care more about themselves and their relatives than about future generations. Self-beneficial policies are generally short-termist, as policymakers and their relatives will only live for a short time compared to the potential lifespan of humanity.

Cognitive biases may also affect people’s political decisions; two well-known examples are the identifiable victim effect and procrastination. The identifiable victim effect is the tendency to prioritize identifiable, visible individuals over statistical or theoretical ones. As future generations are invisible and haven’t been born yet, this naturally leads to short-termism. 

Procrastination drives people to delay difficult problems until they become urgent and demand action. The further a long-term beneficial action is delayed, the less beneficial it is likely to be for future generations. Longtermism is especially prone to procrastination due to its extremely long timeframe.

Politicians are often even more short-termist than these factors would suggest, and they may frequently make extremely short-term decisions that have minimal benefits and significant costs within a few years, due to the various institutional factors discussed below. 

2.3 Remove institutional barriers to longtermist policy making

Even policymakers who have the expertise and motivation to improve the welfare of future generations can be held back by institutional barriers that prevent them from effectively advocating for longtermist policies. Many of these barriers stem from the way today’s governmental institutions are designed; other sources include politicians’ economic dependencies and the media.

Most governments have short election cycles that incentivize short-term policy. Elected representatives naturally want to be re-elected, and one way to gain the favour of potential voters is to provide evidence that their previous time in office brought positive and immediate effects, which is predominantly achieved by initiating short-term policies.

Along with short election cycles, most performance measures mainly evaluate the short-term effects of policies, further discouraging policymakers from advocating for long-term policy.

Time inconsistency is also a problem in governmental institutions because subsequent policymakers can repeal previously enacted future-beneficial policies, as well as redirect investments that were originally intended for future generations. Most governments lack strong institutions dedicated to protecting the interests of future generations, which could help combat the problem of time inconsistency.

The media, which is largely focused on current events, demands immediate reactions from policymakers. This pressures policymakers to focus on short-term issues in order to build their reputation, as abstaining from doing so might lower their odds of re-election.

2.4 Proposed mechanisms

To deal with the problems mentioned above (lacking capabilities, disincentivized policymakers and institutional barriers), there is a dire need for institutional reform. There are many different ways to go about this, and there is still a lot of uncertainty about what might be the best solutions. What follows is a list of various longtermist policy proposals chosen with help from Tyler John. The proposals are divided into five main categories, with examples below. A more comprehensive list can be found here [EA · GW].

Designated stakeholders

Key decision-makers or their advisors are appointed as responsible for protecting the interests of future people. Some examples of these are:

Information interventions

Affects how information about the impact of future policies is gained or made publicly available. Some examples of these are:

Voting mechanisms

Democratic election mechanisms and policy voting rules are redesigned to promote candidates that are expected to benefit future people. Some examples of these are:

Liability mechanisms

Mechanisms that hold current decision-makers liable if their decisions lead to poor outcomes in the future, including formal rights for future people. Some examples of these are:

Reallocation of resources

Control of current resources is deferred to future people. Some examples of these are:

For more in-depth analysis of the various proposals, see “Longtermist Institutional Design Literature Review” by Tyler John.

In addition to the five categories above, another way to encourage long-term policy could be to influence society to be more long-term friendly. An example of this is Roman Krznaric’s writings where he establishes terms and concepts that could enable more longtermist thinking. 

3.0 Directly influence the future trajectory of human civilization

The top layer of the pyramid in figure 1 considers how one can influence the future of humanity in a more direct way than the objectives in layers 1 and 2 do. There are several methods to directly improve the future and positively shift the trajectory of civilization. One approach is to avoid the bad scenarios (as exemplified by the red scenarios in Figure 2), such as extinction and major catastrophes. Another approach is to boost the good scenarios (exemplified by the green scenarios in Figure 2) by increasing the rate of inclusive progress - either by increasing economic growth, by making progress more inclusive, or by increasing our ability to convert economic wealth into wellbeing. 

Figure 2: Illustration of positive and negative trajectories of civilization.

3.1 Mitigate catastrophic risk and build resiliency to tail events and unknown unknowns

In the effective altruism movement, one commonly recognized way to positively influence the future is to make sure that it actually exists and avoid scenarios of extreme suffering, i.e. by avoiding existential risks. By developing longtermist policy and institutions, we can better prepare for the future by building resiliency to both known and unknown existential risks.

Figure 3: Examples of risks based on a figure by Nick Bostrom

Let us start with some definitions. Bostrom explains the difference between existential risk and catastrophic risk in Existential Risk Prevention as Global Priority. Existential risks are both pan-generational and crushing: they drastically reduce quality of life, or cause deaths, in a way humanity cannot recover from. By comparison, risks that are merely globally catastrophic do not individually threaten the survival of humanity. Assuming that existence is preferable to non-existence, existential risks are considered significantly worse than global catastrophic risks because they affect all future generations. 

However, global catastrophes may drastically weaken critical systems and our ability to tackle a second catastrophe. This argument is presented by the Global Catastrophic Risk Institute in a paper about double catastrophes with a case study on how geoengineering may be severely affected by other catastrophes. Moreover, many of the practices that can help us avoid globally catastrophic risks are also useful to prevent existential risks. We have titled this section “mitigate catastrophic risk” to ensure that we cover as many of the risks that may significantly impact the long-term future of humanity as possible.

The list of already known existential risks includes both natural and anthropogenic risks. Today’s technological advancements have created more anthropogenic risks, and there are good reasons to believe that they will continue to do so. Bostrom argues in The Fragile World Hypothesis that continued technological development will increase systemic fragility, which can be a source of catastrophic or existential risk. In The Precipice, Toby Ord estimates the chance of existential catastrophe within the next 100 years at one in six. We have already been dangerously close to global catastrophe, e.g. in 1983, when Stanislav Petrov potentially single-handedly averted a global nuclear war by not launching missiles in response to a warning system falsely reporting a US missile launch. To prevent such close calls in the future, we need to gain knowledge about both known and unknown risks and their solutions. 

In The Precipice, Ord proposes that reaching existential security is the first of three steps to optimize the future of human civilization. Reaching existential security includes eliminating immediate dangers, addressing potential future risks, and establishing long-lasting safeguards. For example, switching to renewable energy sources, electric or hydrogen-based fuel, and clean meat are ways to safeguard against catastrophic climate change, one of the risks that 80,000 Hours includes in its view of the world’s most pressing problems. The 80,000 Hours list also includes positively shaping the development of artificial intelligence, which can be influenced by investing in technical research and improving governmental strategy. Another priority area is nuclear security, which includes shrinking nuclear stockpiles and improving systems and communication so that we do not have to depend on people acting like Petrov in the case of false warnings. A further priority catastrophic risk area in the EA movement is biorisk and pandemic preparedness, one of the focus areas of the Open Philanthropy Project. In addition to protecting against already known risks, humanity should research potential future risks and use forecasting principles to prepare for them. 

When we have reached existential security, Ord proposes that the next steps should be 

  1. a long reflection where we determine what kind of future we want to create and how to do so, and
  2. achieving our full potential.

Thus, Ord argues that existential security should take priority over other objectives described in this article, as it is more urgent.

There is a wide range of actions that can be taken to mitigate catastrophic and existential risks. As mentioned, these actions mainly include eliminating immediate dangers and establishing long-lasting safeguards. The lists below are partially based on the work by Global Catastrophic Risk Policy.

Reduce the probability of specific risks

The most direct course of action to avoid catastrophe is to reduce the probability of specific catastrophic or existential risks. Some examples of such risks and ways to reduce them are:

Improve risk management frameworks

Another approach is to improve risk management frameworks so that we are better prepared for, and able to react to, future risks. Some examples are:

Increase resilience of critical systems

We can also limit the potential harm done by catastrophic risks, or mitigate the risks themselves, by increasing the resilience of critical systems. Some examples of how to increase critical system resilience are:

3.2 Build inclusive progress through long-lasting and well-functioning institutions

Another approach to positively shift the trajectory of civilization is to increase the rate of progress, and to make progress more inclusive. Continuous progress can improve human quality of life and create a flourishing future for people of diverse backgrounds. Collison and Cowen define progress as economic, technological, scientific, cultural or organizational advancement that transforms our lives and raises our standard of living. This definition is broader than the typical economic one, which uses GDP growth as a proxy for progress. In particular, it includes the opportunity to increase progress by improving our ability to convert economic wealth into wellbeing. For this reason, we will use the term “economic progress” when referring to GDP growth, while “progress” alone will refer to the broader definition. Moreover, “wellbeing”, “welfare” and “happiness” are used interchangeably, and are assumed to be closer to a true measure of progress (in the broader sense) than purely economic metrics.

There is still much we don’t know about progress

There is an ongoing debate about whether there are fundamental limits to economic progress (and indeed whether there are upper limits to progress overall): whether, at some point in the future, GDP growth must slow down and approach zero. If there are such limits, then increasing the rate of economic progress will only hasten the arrival of a zero-growth world of abundance. This could severely limit the potential value of increasing the rate of economic progress.

If there is no immediate limit to economic progress, there are good reasons to believe it could continue indefinitely, improving human welfare in the process. Human quality of life has improved dramatically since the Industrial Revolution, and the strong correlation between GDP growth and improved quality of life has been well documented by e.g. Gapminder. For example, the share of people living in extreme poverty fell from about 90% in 1820 to 10% in 2015. It has also been argued that a stagnation in growth is risky with regard to existential risks. GDP growth is far from the only factor that influences progress, however; other factors include better economic distribution, sustainable development, and more effective conversion of economic growth into human welfare.

There are also ongoing discussions about how to best measure (a broader definition of) progress, if progress is slowing down or accelerating, and how existential risk is affected by the rate of economic progress. This is briefly covered in the GPI research agenda, and somewhat more extensively in sources therein.

To improve our understanding of how progress occurs, Collison and Cowen have proposed developing “Progress Studies” as a field of research. According to Collison and Cowen, progress studies investigates successful institutions, people, organizations and cultures to find common factors linked to progress. If we can identify what Ancient Greece, the Industrial Revolution and Silicon Valley have in common, we can act accordingly to increase progress. Given the immaturity of progress studies, such common factors have yet to be found. However, scientific reform and the interventions described above seem very promising.

General ideas for how to increase progress

There are three main paths to increasing inclusive progress: increasing economic growth, making progress more inclusive, and converting economic wealth into welfare. The first path has been promoted by e.g. Tyler Cowen, who argues that it is among the most powerful tools to improve the future because economic growth compounds over time.
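The compounding argument can be illustrated with a small calculation. The 1% and 2% growth rates below are hypothetical, chosen only to show how modest differences in annual growth compound dramatically over a century:

```python
# Illustration of how economic growth compounds over long horizons.
# The growth rates are hypothetical, not figures from the article.
def growth_factor(annual_rate: float, years: int) -> float:
    """Total expansion of an economy growing at a constant annual rate."""
    return (1 + annual_rate) ** years

print(growth_factor(0.01, 100))  # ~2.7x over a century at 1% annual growth
print(growth_factor(0.02, 100))  # ~7.2x over a century at 2% annual growth
```

Doubling the growth rate thus roughly triples the economy's size after 100 years, which is why small, durable changes to the growth rate can matter so much on longtermist views.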

Making progress more inclusive by redistributing resources or social status can increase total human happiness. According to 80,000 Hours, happiness increases roughly logarithmically with income, which means it is far more cost-effective to increase the wealth of the poor. Redistributing the gains of progress is therefore also important for effectively and positively shifting the trajectory of humanity.
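Under a logarithmic model, the wellbeing gained from an extra dollar is inversely proportional to income, so transferring wealth from rich to poor can raise total happiness. A minimal sketch under the simplified assumption that wellbeing equals log income (the incomes are hypothetical):

```python
import math

def marginal_wellbeing(income: float, transfer: float = 1.0) -> float:
    """Wellbeing gain from receiving `transfer`, assuming utility = log(income)."""
    return math.log(income + transfer) - math.log(income)

gain_poor = marginal_wellbeing(1_000)    # person earning $1,000/yr
gain_rich = marginal_wellbeing(100_000)  # person earning $100,000/yr

# With log utility, the same dollar is worth about 100x more
# to the person with 100x lower income.
print(gain_poor / gain_rich)
```

This is the standard diminishing-marginal-utility argument behind the cost-effectiveness claim: the value of a marginal dollar scales with the ratio of incomes.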

While there is a strong correlation between economic wealth and wellbeing, wealth is not all that matters. Some countries have higher levels of happiness than others despite being poorer - for instance, self-reported happiness levels in Costa Rica are higher than in Luxembourg, while GDP per capita is 6x lower. It is plausible that we can find ways to make happiness cheaper, so that a similar level of economic wealth can be translated into more welfare.

It is hard to know the counterfactual impact of interventions focused on any of these paths. While catastrophic risk mitigation focuses on changing the outcomes of forks in the path of civilization, interventions for progress rely to a larger degree on shifting long-term trends that are hard to reason about empirically. So far, hypotheses for effective interventions have been generated using heuristics such as:

Specific proposals for how to increase inclusive progress

It is commonly argued that the scientific revolution has been one of the key drivers of progress in recent centuries, but many scholars criticize modern academic institutions for being sub-optimal. For this reason, interventions aiming to improve academic research may be one promising category for increasing the rate of progress. Examples of such interventions include Replication Markets, arXiv, Semantic Scholar and Ought. Replication Markets uses forecasting to estimate a research claim’s chance of replication; arXiv and Semantic Scholar make scientific papers easier to access and search; and Ought investigates which questions humans can delegate to artificial intelligence. Additionally, “scientific research” is one of the top cause areas of the Open Philanthropy Project.

All of the abovementioned interventions aim to improve academic progress, but there are also non-academic interventions that may increase progress. Some examples from the US Policy focus area of the Open Philanthropy Project (Open Phil) include:

3.3 What about sustainability?

Outside of the effective altruism movement, sustainability is one of the most common cause areas for people concerned about the welfare of future generations. Significant resources are invested in ensuring that our greenhouse gas (GHG) emissions are brought down, that our depletion of natural resources and destruction of species habitats are slowed, and that state budgets are fiscally balanced across generations. It may therefore seem strange that sustainability has played such a small role in this article.

Our argument, borrowed from Bostrom and others in the EA movement, is that unsustainabilities are bad if they exacerbate catastrophic risk or slow the rate of inclusive progress. Research by the McKinsey Global Institute shows that unmitigated climate change can be harmful in both of these ways. Further research by the McKinsey Global Institute demonstrates that the social contract is eroding across developed economies, and that economic outcomes for individuals are worsening as a consequence. In cases like these, where the unsustainabilities are expected to create large amounts of human suffering, we should work hard to become more sustainable.

4.0 Summary

There are several objectives of longtermist policy making. We have presented three categories of objectives, where the objectives in the lower layers are potential enablers of those above them. All of them are relevant to the necessary prioritization of future generations, given that longtermism is plausible.

Each of the objectives and their sub-objectives are well covered in existing literature, but to our knowledge they have not been presented in this structure before. In this article we have summarized some of the relevant parts of the literature, in the hope of providing an accessible introduction to the field. Furthermore, we hope that some points in this article can serve as coordination points for more experienced longtermists - e.g. when referring to which parts of longtermist policy making they are attempting to improve, and why.

7 comments

Comments sorted by top scores.

comment by kbog · 2021-02-14T05:15:27.605Z · EA(p) · GW(p)

I'm skeptical of this framework because in reality part 2 seems optional - we don't need to reshape the political system to be more longtermist in order to make progress. For instance, those Open Phil recommendations like land use reform can be promoted through conventional forms of lobbying and coalition building.

In fact, a vibrant and policy-engaged EA community that focuses on understandable short and medium term problems can itself become a fairly effective long-run institution, thus reducing the needs in part 1.

Additionally, while substantively defining a good society for the future may be difficult, we also have the option of defining it procedurally. The simplest example is that we can promote things like democracy or other mechanisms which tend to produce good outcomes. Or we can increase levels of compassion and rationality so that the architects of future societies will act better. This is sort of what you describe in part 2, but I'd emphasize that we can make political institutions which are generically better rather than specifically making them more longtermist.

This is not to say that anything in this post is a bad idea, just that there are more options for meeting longtermist goals.

Replies from: Andreas_Massey
comment by Andreas_Massey · 2021-02-16T12:38:28.508Z · EA(p) · GW(p)

Thank you for your feedback kbog.

First, we certainly agree that there are other options that have a limited influence on the future, however, for this article we wanted to only cover areas with a potential for outsized impact on the future. That is the reason we have confined ourselves to so few categories. 

Second, there may be categories of interventions that are not addressed in our framework that are as important for improving the future as the interventions we list. If so, we welcome discussion on this topic, and hope that the framework can encourage productive discussion to identify such “intervention X”’s. 

Third, I'm a bit confused about how we would focus on “processes that produce good outcomes” without first defining what we mean with good outcomes, and how to measure them?

Fourth, your point on taking the “individual more in focus” by emphasizing rationality and altruism improvement is a great suggestion. Admittedly, this may indeed be a potential lever to improve the future that we haven't sufficiently covered in our post as we were mostly concerned with improving institutions. 

Lastly, as for improving political institutions more broadly, see our part on progress.

Replies from: kbog
comment by kbog · 2021-02-21T03:51:58.089Z · EA(p) · GW(p)

I think it's really not clear that reforming institutions to be more longtermist has an outsized long run impact compared to many other axes of institutional reform.

We know what constitutes good outcomes in the short run, so if we can design institutions to produce better short run outcomes, that will be beneficial in the long run insofar as those institutions endure into the long run. Institutional changes are inherently long-run.

Replies from: Andreas_Massey
comment by Andreas_Massey · 2021-03-02T12:30:39.679Z · EA(p) · GW(p)

The part of the article that you are referring to is in part inspired by John and MacAskill's paper “Longtermist Institutional Reform”, where they propose reforms built to tackle political short-termism. The case for this relies on two assumptions:

1. Long-term consequences have an outsized moral importance, despite the uncertainty of long-term effects.
2. Because of this, political decision making should be designed to optimize for long-term outcomes.

Greaves and MacAskill have written a paper arguing for assumption 1: "Because of the vast number of expected people in the future, it is quite plausible that for options that are appropriately chosen from a sufficiently large choice set, effects on the very long future dominate ex ante evaluations, even after taking into account the fact that further-future effects tend to be the most uncertain…“. We seem to agree on this assumption, but disagree on assumption 2. If I understand your argument against assumption 2 correctly, it assumes that there are no tradeoffs between optimizing for short-run outcomes and long-run outcomes. This assumption seems clearly false to us, and is implied to be false in “Longtermist Institutional Reform”. Consider fiscal policy, for example: in the short run it could be beneficial to take all the savings in pension funds and spend them to boost the economy, but in the long run this is predictably harmful because many people will not be able to afford to retire.

Replies from: kbog
comment by kbog · 2021-03-04T00:49:47.045Z · EA(p) · GW(p)

No, I agree on 2! I'm just saying that even from a longtermist perspective, it may not be as important and tractable as improving institutions in orthogonal ways.

comment by Flodorner · 2021-02-12T16:59:58.944Z · EA(p) · GW(p)

Interesting writeup!

Depending on your intended audience, it might make sense to add more details for some of the proposals. For example, why is scenario planning a good idea compared to other methods of decision making? Is there a compelling story, or strong empirical evidence for its efficacy? 

Some small nitpicks: 

There seems to be a mistake here: 

"Bostrom argues in The Fragile World Hypothesis that continuous technological development will increase systemic fragility, which can be a source of catastrophic or existential risk. In the Precipice, he estimates the chances of existential catastrophe within the next 100 years at one in six."

I also find this passage a bit odd: 

"One example of moral cluelessness is the repugnant conclusion, which assumes that by adding more people to the world, and proportionally staying above a given average in happiness, one can reach a state of minimal happiness for an infinitely large population."

The repugnant conclusion might motivate someone to think about cluelessness, but it does not really seem to be an example of cluelessness (the question whether we should accept it might or might not be). 

Replies from: Andreas_Massey
comment by Andreas_Massey · 2021-02-16T14:14:53.299Z · EA(p) · GW(p)

Thank you for your feedback, Flodorner! 

First, we certainly agree that a more detailed description could be productive for some of the topics in this piece, including your example on scenario planning and other decision making methods. At more than 6000 words this is already a long piece, so we were aiming to limit the level of detail to what we felt was necessary to explain the proposed framework, without necessarily justifying all nuances. Depending on what the community believes is most useful, we are happy to write follow-up pieces with either a higher level of detail for a selected few topics of particular interest (for a more technical discussion on e.g. decision making methods), or a summary piece covering all topics with a lower level of detail (to explain the same framework to non-experts). 

As for your second issue, you are completely correct; it has been corrected.

Regarding your last point, we also agree that the repugnant conclusion is not an example of cluelessness in itself. However, the lack of consensus about how to resolve the repugnant conclusion is one example of how we still have things to figure out in population ethics (i.e. we are morally clueless in this area).