The Long-Term Future: An Attitude Survey

post by Aron_Vallinder · 2019-09-16T15:57:15.662Z · score: 48 (23 votes) · EA · GW · 14 comments



As part of my research for Will MacAskill’s planned book on longtermism, I carried out an attitude survey to find out how people react to some related arguments. Participants were asked to read two passages of text and indicate their level of agreement with various statements after each passage. The first passage argues that future generations matter just as much as the present one, that they are currently neglected, and that there are things we can do to help them. The second, more speculative passage argues that a bigger population is better than a smaller one (all else equal), and that in the long run, we should spread beyond the solar system. We also collected some demographic data. In total, we recruited 403 college-educated US respondents via Positly. If you want to have a look for yourself, the full data set is available here.

Survey Text

Below are the two passages that survey respondents read.

First Passage

If all goes well, humanity has a long and flourishing future ahead. But whether it does depends in part on the decisions we make today. Distant future generations are no less important than the current generation. Their joys and sorrows are just as real as ours. As an analogy, consider people who live on the other side of the planet. Surely, they don’t matter any less than people who are closer to you in space. Their well-being counts for just as much. In the same way, future generations don’t matter any less simply because they are far away in time.

Yet we rarely pause to think about just how many people there will be in the future. At the moment, there are around seven billion people in the world. The Earth will remain habitable for another 500 million years. If there are another seven billion people for each century until Earth becomes uninhabitable, then the future will contain five million times as many people as are alive today. Because there will be so many people in the future, anything we could do today to improve their lives would be of tremendous importance. The stakes are simply astronomical.

However, future generations are neglected in today’s society. In part, this is due to the short-term incentives we face. For example, politicians get rewarded or punished based on how they perform over the course of an election cycle. As a result, they don’t have much reason to think carefully about how the decisions they make today will affect future generations in centuries to come. Because future generations are so neglected, we should aspire to create a society that does more to help them.

You might think that the future is just so inherently difficult to predict that we can’t really know how to benefit the future. But there are in fact many things we can do to help future generations. For example, we can implement policies that increase the rate of sustainable growth. Economic growth has been one of the main forces behind the increase in quality of life that we’ve seen over the course of history. We’re 50 times richer today than we were prior to the Industrial Revolution. That wealth means we work fewer hours, live longer, healthier lives, and can engage in a much wider range of leisure activities. Further economic growth may bring comparable benefits to future generations.

Secondly, we can set up institutions for the political representation of future generations. The policies we adopt today are rarely evaluated for their long-term consequences, even though such consequences can often be very significant. By having, for example, an official representative for future generations, we can make sure that these long-term consequences are properly accounted for, so that we don’t choose policies that negatively affect future generations.

Thirdly, we can help future generations by taking action to reduce the risk of human extinction. If humanity goes extinct, our potential for a great future will be lost forever. Today, that future is threatened by climate change and the risk of nuclear war. Moreover, some technological developments that are just around the corner, such as biotechnology, may also bring risks of extinction. Therefore, anything we do to reduce these and other risks will greatly benefit future generations.

Second Passage

If we play our cards right, we can create a wonderful future. Through further technological development, we can create even larger improvements in quality of life than we’ve seen over the past few centuries. Through further medical advances, we can eliminate the many illnesses that plague us today, including cancer and cardiovascular disease. But it is not only through technological and scientific advances that we can create a better future. Through political changes, we can create a much more just world. Although we may not create a utopia, we should expect that quality of life is much higher in the future than it is today.

Because lives in the future could be so wonderful, we should create as many of them as we can, without sacrificing quality of life for those who are already alive. When life is good, being born is a tremendous benefit. In addition to the benefits to the individuals being born, a greater population also means greater opportunity for scientific discoveries, technological advances, cultural expression and many other things we value.

In the very long-run, this means that we must eventually spread beyond the solar system. In principle, there is no reason why we shouldn’t be able to spread to other solar systems. In our galaxy alone, there may be as many as 40 billion habitable planets. These are planets that could support communities of flourishing humans. Given the astronomical stakes, and to ensure that humanity reaches its full potential, it is therefore morally imperative that we ensure that civilization survives long enough that we can spread through the galaxy.

Main Findings

Respondents were asked to indicate their level of agreement, from 1 = “Strongly disagree” to 7 = “Strongly agree”.
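Each item below reports the mean (M) and standard deviation (SD) of the 1–7 ratings. As a minimal sketch of how these summary statistics are computed (the ratings here are made up for illustration; the real responses are in the published data set):

```python
# Compute the mean (M) and sample standard deviation (SD) of a 7-point
# Likert item. The ratings below are hypothetical, not from the survey.
import statistics

responses = [7, 6, 5, 6, 4, 7, 5, 6]  # made-up 1-7 agreement ratings

mean = statistics.mean(responses)
# Survey write-ups typically report the sample (n - 1) standard deviation:
sd = statistics.stdev(responses)

print(f"M = {mean:.2f}, SD = {sd:.2f}")
```

Whether the post used the sample or population standard deviation isn't stated; the sample version is the usual convention and is what `statistics.stdev` computes.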

1. To what extent do you agree with the argument in the first text?
(M = 5.50, SD = 1.22)

2. People on the other side of the planet matter just as much as those who are near to you.
(M = 6.10, SD = 1.26)

3. People in the distant future matter just as much as those alive today.
(M = 5.79, SD = 1.35)

4. Humanity will still exist in a thousand years.
(M = 5.36, SD = 1.42)

5. Humanity will still exist in a million years.
(M = 4.19, SD = 1.59)

6. We should do more to help future generations.
(M = 6.03, SD = 1.05)

7. I would be willing to accept 5 percentage point higher taxes that will be used to benefit future generations, but which won't at all benefit the current generation.
(M = 4.25, SD = 1.86)

8. There are meaningful ways of affecting things a thousand years from now.
(M = 5.44, SD = 1.41)

9. There are meaningful ways of affecting things a million years from now.
(M = 4.34, SD = 1.79)

10. To what extent do you agree with the argument in the second text?
(M = 4.28, SD = 1.77)

11. For the average person, life will be better in a thousand years than it is today.
(M = 4.49, SD = 1.39)

12. For the average person, life will be better in a million years than it is today.
(M = 3.97, SD = 1.35)

13. Consider two civilizations. Both of them last for a million years. In the first civilization, there are ten billion people in every generation. In the second civilization, there are one billion people in every generation. Other than their population size, the two civilizations are identical. Their members are equally happy, and there are no issues with resource depletion, environmental degradation, or overpopulation.

If only one civilization could come into existence, which would you prefer?
(M = 3.56, SD = 1.81, where 1 = 'Strongly prefer 1B civilization' and 7 = 'Strongly prefer 10B civilization')

For this question, I also looked at the qualitative answers of those who expressed a strong preference for either the 1B or the 10B civilization. I’ve tried to categorize their reasons below (noting that some respondents gave more than one reason):

There were 74 respondents who strongly preferred the 1B civilization.

  1. More resources (25)
  2. Less crowded (22)
  3. Overpopulation (10)
  4. Environmental issues (8)
  5. Humans are overrated (5)
  6. Other (12)

There were 34 respondents who strongly preferred the 10B civilization.

  1. More happy people (23)
  2. More ideas & scientific/technological advances (10)
  3. More diversity/creativity/culture (4)
  4. Increases the chance of human survival and an even better future (3)
  5. Other (2)

14. I hope that in the future, humanity will spread to other solar systems.
(M = 4.71, SD = 1.77)

I also looked at qualitative answers to this question.

75 respondents were strongly in favour of space settlement, and gave the following reasons:

  1. Awesome/Amazing/Unlock potential/Discovery: 35
  2. Needed for survival: 28
  3. More room to address overcrowding/limited resources: 14
  4. Find out if there’s intelligent life: 3
  5. More people get to exist: 1

24 respondents were strongly against space settlement, and gave the following reasons:

  1. This is unlikely/impossible/we won’t survive that long: 11
  2. Humanity is a disaster for other solar systems: 8
  3. Against colonization: 2
  4. Man is stupid: 1
  5. I don’t want to leave Earth: 1
  6. Would just be more of the same problems: 1
  7. Humans don’t belong on other planets: 1

Correlations and Comparisons
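Correlations like the one between valuing spatially distant people (item 2) and valuing temporally distant people (item 3), quoted in the comments as r = 0.63, can be recomputed from the published data set. A minimal sketch, using made-up ratings and a hand-rolled Pearson correlation rather than the actual data:

```python
# Pearson correlation between two Likert items answered by the same
# respondents. The two rating lists below are hypothetical.
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 1-7 ratings for items 2 and 3 from eight respondents:
spatial = [7, 6, 7, 5, 6, 4, 7, 3]
temporal = [6, 6, 7, 4, 5, 4, 6, 3]

print(f"r = {pearson_r(spatial, temporal):.2f}")
```

With the full data set one would also want a p-value (e.g. via `scipy.stats.pearsonr`); the toy sample here is far too small for the reported p < 3e-16 to be meaningful.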


In terms of making a convincing case for longtermism, what do these findings imply? Here are some tentative takeaways, though no doubt there are further lessons.

  1. Valuing the future. One striking thing is how strongly people agree that future generations matter just as much as the present one (M = 5.79, SD = 1.35), and that we should do more to help them (M = 6.03, SD = 1.05). Of course, when helping future generations is presented as involving a personal cost (in the form of a tax increase) there is less agreement (M = 4.25, SD = 1.86), so it’s not clear how these attitudes would translate into action. Nevertheless, it does suggest that people generally view some of the core ideas of longtermism in a favorable light.
  2. Population ethics. Another striking thing is how little people think that a larger population is better (M = 3.56, SD = 1.81, where 1 = ‘Strongly prefer 1B civilization’ and 7 = ‘Strongly prefer 10B civilization’). However, we also collected qualitative responses to this question, and found that many of the people who preferred the smaller civilization over the bigger one were unwilling to accept the stipulations. Among the 74 respondents who strongly preferred the smaller civilization, the most commonly given reasons were more resources (25), less crowded (22), overpopulation (10), and environmental issues (8), in spite of the explicit claim that “there are no issues with resource depletion, environmental degradation, or overpopulation.” Nevertheless, 125 of the 403 respondents reported being indifferent between the two civilizations, so an unwillingness to accept the stipulation cannot be everything that’s going on here.
  3. Climate change. One striking, but perhaps not very surprising, finding is just how strongly people associate talk of influencing and benefiting the future with climate change and sustainability. For example, when motivating their answers to the question about whether there are meaningful things we can do to affect the world in a thousand years, over half of the respondents mentioned climate change and environmental issues. This suggests that a crucial aspect of communicating longtermism concerns how to position the view with respect to climate change.
  4. Weaker beliefs about the distant future. Many of the statements that concerned a million years into the future received responses that peaked around 4 (‘Neither agree nor disagree’). One hypothesis is that for the very distant future, people typically don’t have beliefs in any strong sense. Perhaps relatedly, as we expected, there was stronger agreement with the first text than the second (M = 5.50 vs. M = 4.28). This suggests that communicating the ‘weirder’ aspects of longtermism presents more of a challenge.

(Thanks to Will MacAskill and Lucius Caviola for discussion.)


Comments sorted by top scores.

comment by Larks · 2019-09-17T01:56:11.034Z · score: 44 (16 votes) · EA · GW

Thanks for doing this work, and making it public. Similar to Max, I basically believe in the Total View, and am sympathetic to Temporal Cosmopolitanism, so consider this somewhat good news.

However, I am a little skeptical about some of the questions. To the extent you are trying to get at what people 'really' think (if they have real views on such a topic...), I worry that some of the questions were phrased in a somewhat biased manner - particularly the ones asking for agreement with the text.

When doing political polling, people generally don't ask questions like this:

Do you agree the government should spend more on law and order?

... because people's level of agreement will be exaggerated. Instead, it's often considered better practice to phrase it more like:

Which statement do you agree with more?
1) The government should spend more on law and order, even if it means higher taxes.
2) The government should lower taxes, even if it means less spending on law and order.

comment by David_Moss · 2019-09-17T07:51:16.310Z · score: 36 (13 votes) · EA · GW

Agreed. As I mentioned in this comment, people will tend to be inclined to agree with any generally positive-sounding platitude, due to acquiescence bias and plausibly social desirability bias. On the whole, I would expect people to be extremely reluctant to explicitly deny that some people "matter just as much as" others if the affirmative is put to them. This may especially be a problem when the issues in question are ones people haven't really thought about before and so don't have clear attitudes; this will be particularly likely to elicit just superficial agreement.

I think one of the best approaches to ameliorate this is to use reversed statements, i.e. to ask people whether they agree with an item expressing the opposite attitude (i.e. that people who are alive here and now matter more). Sanjay should soon be posting a report of the results from when we did this. Quite often you will find that people agree both with a statement expressing an attitude and with one designed to capture the exact opposite view, and you then need to work to find a set of items that together actually capture the attitude of interest.

comment by Aron_Vallinder · 2019-09-18T05:21:46.727Z · score: 5 (4 votes) · EA · GW

Thanks for the suggestion to use reversed statements. As I said in my response to Larks, I share this concern, so if we run further iterations of the survey, I'll include something along these lines.

I look forward to seeing Sanjay's report!

comment by Aron_Vallinder · 2019-09-18T05:19:05.493Z · score: 3 (3 votes) · EA · GW

Thanks for this. I basically share the concern that you and David express, and it would be good to revise the statements accordingly if we run further versions of the survey. But even if the extent of agreement is inflated, it seems reasonable to think that the ordinal ranking should remain the same (so that people agree more strongly with the first text than the second, and believe more strongly that people on the other side of the planet matter just as much as nearby people than that people in the distant future do).

comment by Max_Daniel · 2019-09-16T22:38:36.238Z · score: 15 (7 votes) · EA · GW

Interesting, thanks for publishing!

Just a quick note: Your finding about population ethics - i.e. that many prefer smaller populations - is consistent with findings reported in the following paper.

Spears, D. (2017). Making people happy or making happy people? Questionnaire-experimental studies of population ethics and policy. Social Choice and Welfare, 49(1), 145–169. doi:10.1007/s00355-017-1055-7

Among other things, Spears also finds that women prefer smaller populations more strongly than men.

I learned all of this from an unpublished literature review by David Althaus, which might be interesting for your purposes if you haven't seen it. [ETA: Actually David has publicly linked to the lit review - see subsection "An experimental study of population ethics and policy", pp. 23ff., for a summary of Spears (2017) - in this EA Forum post.]

(I'm not mentioning this to argue for any view. I'm very sympathetic to the total view in population ethics, and I agree with your interpretation that many subjects simply failed to understand the scenario in the intended way.)

comment by Aron_Vallinder · 2019-09-18T05:22:35.940Z · score: 2 (2 votes) · EA · GW

Thanks for the pointer!

comment by Jamie_Harris · 2019-09-22T12:35:51.775Z · score: 8 (3 votes) · EA · GW

Wow, some fascinating and surprising answers, e.g. that there was more support for the statement "I hope that in the future, humanity will spread to other solar systems" than support for a population of 10bn rather than a population of 1bn. I was also interested in the finding that "Valuing spatially distant people was correlated with valuing temporally distant people (r = 0.63, p < 3e-16)."

Beyond some of the discussion around the question wording raised by others, I'm also wondering why you chose to present people with these two articles, rather than just running the survey without any accompanying information? I'm not sure what was gained by providing people with this information and I think it makes the answers less representative and useful. For example, you state that the results "suggest that people generally view some of the core ideas of longtermism in a favorable light." I would more cautiously claim that the results "suggest that people who have just read two articles that are favorable to longtermism generally accept some of the core ideas of longtermism, at least temporarily."

I think I would have found this more useful either as a nationally representative survey (e.g. using Ipsos), to explore what people currently think, while attempting to minimise the effects of the survey design on the answers, or as an RCT, where a control group (no article) is compared to 1+ intervention group(s), testing for the effectiveness of possible pro-longtermist messaging on people's attitudes.

(But, to clarify, as a quick survey via Positly, which is cheaper than using Ipsos, I do think that these findings are still useful and interesting.)

comment by WilliamKiely · 2019-09-19T00:15:02.113Z · score: 5 (3 votes) · EA · GW

However, we also collected qualitative responses to this question, and found that many of the people who preferred the smaller civilization over the bigger were unwilling to accept the stipulations.

Perhaps a way to avoid this problem would be to use numbers that are both significantly less than the current population, such as 2B vs 3B rather than 1B vs 10B.

comment by antimonyanthony · 2019-09-17T21:57:35.925Z · score: 3 (2 votes) · EA · GW

Question 13 seems under-specified to me, specifically this part: "Their members are equally happy." Does this mean their level of welfare is the same, but it could be at any level for the purposes of this question? Does the use of "happy" in particular mean the question assumes this constant level of welfare is net positive? Could the magnitudes of happiness and suffering differ between people as long as the "net welfare" is positive, assuming it's possible to make that aggregation?

I think these questions matter because they influence whether you interpret the answers as a result of population-ethics factors or of other things, like the respondents' beliefs about the moral weight of happiness vs suffering. Someone could coherently accept totalism yet consider the smaller world better if, for instance, they think the higher number of cases of the extreme tails of suffering in the larger population (just because there are more people that things could go very wrong for) makes it worse.

A priori I expect suffering focused intuitions to be in the minority, but in any case it's not obvious that the answers to #13 reveal non-totalist or irrational population ethics among the respondents.

comment by Max_Daniel · 2019-09-17T22:41:46.583Z · score: 6 (2 votes) · EA · GW

(I think "if the contributive axiological value of people is negative, then preferring smaller populations is consistent with - indeed, implied by - totalism in population ethics" is a valid point, and obviously so. It is also mentioned by Spears in the paper I cite above. I therefore find it quite irritating that the parent comment was apparently strongly downvoted. Curious if I'm missing a reason for this?

NB I also think the point is trivial and has an implausible premise, but IMO it is the hallmark of good philosophy that each individual statement seems trivial - e.g., Reasons and Persons features an ample amount of such claims that might strike some readers as trivial or pedantic.)

comment by Stefan_Schubert · 2019-09-18T08:17:04.579Z · score: 3 (2 votes) · EA · GW

Cf. Russell:

The point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it.

comment by antimonyanthony · 2019-09-17T23:54:05.849Z · score: 0 (0 votes) · EA · GW

"Trivial, but in a Derek Parfit way" is honestly the highest compliment I could ever receive.

comment by Aron_Vallinder · 2019-09-18T05:25:52.730Z · score: 1 (1 votes) · EA · GW

Thanks, this is a good point. From looking at the qualitative answers that people provided in response to this question, it doesn't appear to have been much of an issue in practice, however.

comment by antimonyanthony · 2019-09-18T15:08:52.431Z · score: 0 (0 votes) · EA · GW

I see, thank you - wasn't sure what might have been hidden in "Other." :)