Ben Grodeck: Cooperating With Future Generations — An Experimental Investigation

post by EA Global Transcripts (The Centre for Effective Altruism) · 2020-02-21T15:28:11.536Z · score: 13 (5 votes) · EA · GW
If you take actions that affect the future, you don’t just change the eventual welfare of people who have yet to exist — you actually influence which people exist in the first place. Typical moral principles, when applied to such actions, yield paradoxical results.
In this academic session, Ben Grodeck, a PhD candidate in economics at Monash University, discusses the results of an experiment in which individuals came face-to-face with this moral puzzle — and how an “identity-affecting” task led them to be less generous.
Below is a transcript of Ben’s talk, which we’ve lightly edited for clarity. You can also watch the talk on YouTube or read it on effectivealtruism.org.
I’ll be discussing a paper I've been working on with three philosophers: Justin Bruner [at the University of Arizona], Toby Handfield at Monash University, and Matt Kopec at the Australian National University. I am an economist by training.
If you read the Intergovernmental Panel on Climate Change's (IPCC’s) fifth report from 2014, you'll come across this striking paragraph:
Some of you may recognize that this paragraph is referring to something called the nonidentity problem, which I’ll explain in a moment. But first, I want to share a quick anecdote. A few weeks ago, I had the privilege to sit next to the eminent moral philosopher John Broome at a dinner in Oxford. John is one of the lead authors of [the IPCC report mentioned above]. I took the opportunity to ask him why he decided to put that paragraph into a policy document. To my surprise, he started shaking his head. He said that it was one of his co-authors who put that paragraph into the report; he hadn’t wanted to.
I asked him why. John gave me two reasons. First, he said policymakers would just think philosophers are crazy. Second, and more importantly, he told me he was concerned that policymakers and ordinary people would come across that line of argument and [use it to justify doing nothing] to mitigate climate change or improve the welfare of future generations.
In our study, we investigate whether John Broome has a right to be concerned about that paragraph in the IPCC report.
First, let's take a step back. When we can choose between different types of policies that affect the future, we generate something called intergenerational social dilemmas. These dilemmas exist because there's a welfare trade-off between the current generation and the future generation. And just to clarify, by “future generation” we mean people who will come into existence more than 100 years into the future. If there were no welfare trade-offs between the current generation and the future generation, there would be no dilemma, since there'd be a clear best course of action.
But in a lot of situations that arise, it's clear that this dilemma exists. To illustrate this, consider climate change. Imagine a toy model with two simple policies. We can choose to conserve by implementing a carbon tax, restricting our resource use, and reducing fossil fuel emissions. Or, we can say, "Screw it. Let's just consume as much as possible. We don't care about the effects in the future."
In this model, if we choose to conserve, we’ll have 10 units of welfare. And in the future, there'll be a pristine environment, so the future will have 10 units of welfare as well. However, if we choose to consume [versus conserve], we’ll have two extra units of welfare, but in the future, there'll be environmental degradation and pollution. As a result, they'll only have two units of welfare.
Another key component of these dilemmas is that only the present generation has any agency. The future generation is at the mercy of our decision right now. In this model, it is quite clear that by choosing to consume instead of to conserve, we're harming the future generation. If we had chosen to conserve, the future generation would've had 10 units of welfare. Choosing to consume leaves them with only two units of welfare. So, for simple pro-social preferences, such as not wanting to cause harm, we have a powerful argument for choosing to conserve instead of consume.
However, when we make these decisions, we not only affect the welfare of future generations; we also affect who comes into existence. So, we call these types of decisions identity-affecting decisions. They occur when different individuals come into existence due to different policy choices.
For example, let's go back over 100 years into a counterfactual world in which cars were never invented. In this world, people would have had very different financial situations. Their consumption decisions would be different, their commuting decisions would be different, and there would be myriad causal consequences. It’s highly unlikely that your parents would have ever met. And even if your parents had met, it is highly unlikely that they would have chosen to procreate on the exact same day. And even if they had chosen to procreate on the exact same day that you were conceived, it is highly unlikely that the exact same sperm cell and the exact same ovum would've come together to create you.
As a result, I suggest that it is highly unlikely that all of us who exist today would have existed if cars weren't made. And as [the philosopher Derek] Parfit says, "How many of us can truly claim that even if railways and motor cars had never been invented, 'I would still have been born'?"
So, taking this into account, here is a better model of our intergenerational social dilemmas, with climate change as our example.
Our choice to consume instead of conserve would change people's way of life, lead to different consumption decisions, etc., and thus alter the identities of people who would exist in this future. So, even though choosing to consume results in pollution and environmental degradation that yield only two units of welfare, if we had chosen to conserve instead of consume, some people wouldn't exist. There would be a completely different set of people in this hypothetical future. Therefore, [consuming instead of conserving] actually results in the best-case scenario for that set of future people.
If all we care about is not harming a future generation, this seems to generate quite a powerful argument for us to consume. The people [in this scenario] can't be made worse off, and [those of us alive now] get two extra units of welfare. So, why don't we just choose to consume and not do anything about climate change?
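The contrast between the two models can be sketched in a few lines of code. The payoff numbers are from the talk; the function names and the zero-harm encoding are my own illustrative simplification:

```python
# Toy model of the intergenerational dilemma, plus the identity-affecting twist.

# Fixed-identity model: the same future people exist under either policy.
PAYOFFS = {
    "conserve": {"present": 10, "future": 10},
    "consume":  {"present": 12, "future": 2},
}

def harm_to_future(policy, alternative="conserve"):
    """Counterfactual harm: how much worse off are the SAME future
    people under `policy` than they would have been under `alternative`?"""
    return PAYOFFS[alternative]["future"] - PAYOFFS[policy]["future"]

# With fixed identities, consuming harms the future generation by 8 units.
print(harm_to_future("consume"))  # 8

def harm_to_future_identity_affecting(policy):
    """Identity-affecting model: each policy brings a DIFFERENT set of
    future people into existence, so the counterfactual comparison above
    is unavailable -- the people who get 2 units under "consume" would
    not have existed at all under "conserve". Under a simple "don't make
    particular people worse off" principle, no one can point to a better
    outcome they were denied, so the computed harm is zero either way."""
    return 0

print(harm_to_future_identity_affecting("consume"))  # 0
```

This is the sense in which a bare no-harm principle loses its grip once the decision is identity-affecting: the second function returns zero for both policies.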
Many moral philosophers have regarded this conclusion as absurd. They focus on advocating for conserving instead of consuming using principles that aggregate welfare, regardless of identity.
However, there's been very little investigation into people's attitudes, psychology, and — most importantly — their behavior when facing these identity-affecting decisions. So, in our study, we wanted to investigate what ordinary people believe and how they behave in this context. To do this, we ran an experiment simulating these identity-affecting decisions. We used ignorance of the decision as a proxy for non-existence.
These experiments are our attempt to answer two main questions: Do people behave more selfishly in an identity-affecting context compared to a normal decision context? And if so, is this increase in selfish behavior motivated by different normative principles or judgments, or is it due to excuse-driven behavior?
In other words, do people know that it's wrong but still think, "I can get away with it without feeling guilt about acting selfishly”?
Although it would have been quite interesting to generate a real identity-affecting decision in the lab, we would've had to wait for more than 100 years to collect the data. Also, I doubt it would have been ethical. So, we tried to capture people's intuitions and behaviors by using the proxy of ignorance for non-existence. And while this isn't a true version of the nonidentity problem, I believe that we still captured valuable insights about people's beliefs and actions.
We wanted to compare the choices of people in our control group to those of people in our treatment group. In our control group, people would come into the lab and we'd tell them that they were being matched with a unique potential recipient of $10. All they had to do was make a simple choice between two options:
1. Both the decision maker and the recipient would receive $10 each; or
2. The decision maker could take an extra $2, and the recipient's endowment would be reduced to $2.

A week later, the recipient would come into the lab with full knowledge of the decision made by the decision maker who had been matched with them.
In our identity-affecting treatment group, our decision maker, once again, would come into the lab and we'd tell them that they'd be matched with two unique recipients, but these recipients had no idea they were a part of the experiment. (A lot of logistics went into this, but it was really important in order to generate the insights we were seeking.) Once again, we told our decision makers they had two choices. They could choose between two options:
1. The decision maker and the first recipient would get $10. The second recipient would get nothing — and would never be informed that the experiment had ever taken place.
2. The decision maker could take an extra $2 for themselves. The first recipient’s endowment would be reduced to $0, but they would never be informed that the experiment had ever existed. And in this case, the second recipient would get $2 and would be informed of the experiment.
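Taken together, the two conditions can be summarized as payoff tables. The dollar amounts are from the talk; the dictionary layout and field names are my own sketch:

```python
# Control condition: one recipient, who always learns the outcome.
CONTROL = {
    "option_1": {"decision_maker": 10, "recipient": 10},
    "option_2": {"decision_maker": 12, "recipient": 2},
}

# Identity-affecting condition: two matched recipients, only one of whom
# is ever informed; the other never learns the experiment existed.
IDENTITY_AFFECTING = {
    "option_1": {"decision_maker": 10, "recipient_A": 10, "recipient_B": 0,
                 "informed": "recipient_A"},
    "option_2": {"decision_maker": 12, "recipient_A": 0, "recipient_B": 2,
                 "informed": "recipient_B"},
}

# In both conditions the selfish option buys the decision maker $2 extra;
# what differs is who bears the loss and whether they ever know about it.
for name, cond in [("control", CONTROL), ("identity-affecting", IDENTITY_AFFECTING)]:
    gain = cond["option_2"]["decision_maker"] - cond["option_1"]["decision_maker"]
    print(name, "selfish gain:", gain)
```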
In the control case, it's quite clear that there's a harm occurring. You reduce the recipient's endowment from $10 to $2 and they're fully aware that they were harmed. You can clearly say that they have been harmed.
In the treatment case, yes, you've reduced one recipient’s endowment from $10 to $0, but if you believe in belief-dependent utility, you can say, "What you don't know can't harm you." Our hope was to generate some insights into how people behave in identity-affecting situations.
I was shocked by the results. In our control case, we found that 26% of people chose to take the extra $2. But in our identity-affecting treatment, 62% of people chose the selfish option. That's a 36 percentage-point difference; in relative terms, people were roughly 2.4 times as likely to choose selfishly.
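For reference, the two equivalent ways of expressing that gap (the rates are from the talk):

```python
control_rate = 0.26      # selfish choices in the control condition
treatment_rate = 0.62    # selfish choices in the identity-affecting condition

pp_gap = treatment_rate - control_rate   # absolute gap: 36 percentage points
ratio = treatment_rate / control_rate    # relative gap: ~2.4x as likely

print(f"{pp_gap * 100:.0f} percentage points; {ratio:.1f}x as likely")
```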
We also wanted to know: Do people genuinely believe that they're justified in making the selfish decision — do they genuinely believe that there's nothing wrong with it? Or is it excuse-driven behavior? We wanted to elicit people's normative beliefs — in other words, what people think other people think is the right thing to do — because, as economists, we care a lot about incentive compatibility. We care about people revealing their true preferences, [which we can access by eliciting] second-order beliefs [beliefs about others’ beliefs].
We used an incentive-compatible method called the Krupka and Weber method. When new [study participants] came into the lab, we gave them either the control scenario or the treatment scenario and asked, "How socially or morally acceptable do you think other people think it is to choose option two: very socially acceptable, somewhat socially acceptable, not socially acceptable, or very socially unacceptable?” We also told them, "If you pick the modal answer — the answer with the most responses — then we’ll pay you $10. If you choose any other answer, you get paid nothing.” We did this to understand the norms [guiding people’s behavior].
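A minimal sketch of this coordination-game payment rule, assuming each rater is paid only when their answer matches the modal answer in their session (the scale labels are from the talk; the function and its mechanics are my own illustration):

```python
from collections import Counter

SCALE = ["very socially acceptable", "somewhat socially acceptable",
         "not socially acceptable", "very socially unacceptable"]

def payments(ratings, prize=10):
    """Pay `prize` to every rater whose rating matches a modal rating;
    everyone else gets nothing. Matching the mode is the incentive to
    report what you believe OTHERS believe, not your own view."""
    counts = Counter(ratings)
    top = max(counts.values())
    modal = {rating for rating, c in counts.items() if c == top}
    return [prize if r in modal else 0 for r in ratings]

ratings = ["very socially unacceptable"] * 3 + ["not socially acceptable"] * 2
print(payments(ratings))  # [10, 10, 10, 0, 0]
```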
What we found is quite interesting. Remember: We're talking about option two here, taking the extra $2. In the identity-affecting case, many people thought it was very socially unacceptable to choose option two. Yet, 62% of people did it. I think it's quite clear that there's some excuse-driven behavior going on here. We have some other measures that complement this analysis, but I don't have time to talk about those today.
Some of you may have realized that we used a bit of trickery. We made two changes in the second experiment. We added a second recipient and an element of ignorance to the decision as well. As a result, there are two possible mechanisms that we think could be behind this excuse-driven behavior:
1. There are no “aggrieved witnesses”; the person that you've harmed doesn't know that they're being harmed. So for some people, that means you haven't harmed them at all. That could be driving people's decisions to take the extra $2.
2. The false trade-off. You can only benefit one of the two recipients. And when you're unsure which recipient will benefit, you may use that as an excuse to just maximize your own payoff.
There's a really easy way to test this: We removed [the recipients’ ignorance of the harm that had been done to them] from our identity-affecting treatment.
Our decision makers were told, "You're matched with two people. They don't know they're a part of this experiment, but once you make your decision, both of them will come into the lab — even the one who gets $0." We had some people come into the lab and paid them nothing, because we really wanted to untangle these mechanisms. It allowed us to determine whether it [had been the recipients’ ignorance of the decision] driving the [selfish] behavior or if it was the fact that the decision maker could only give one of the two recipients something, [providing] an excuse to take $12.
I was arguing with my co-authors the whole time, saying, "It's definitely going to be ignorance [that is driving people’s decisions]." Ignorance plays a large role in people's decisions in the behavioral economics literature.
Fortunately, I didn't bet on it. When we removed the “ignorance” element, 59% of people still chose the selfish option. There is no statistically significant difference between our revealed treatment and our identity-affecting treatment. It seems like all the work in driving selfish behavior is being done just by this counterfactual trade-off, which is really interesting for a number of reasons.
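As a rough illustration of what "no statistically significant difference" means here, a standard two-proportion z-test can be run on the two rates. The sample sizes below are hypothetical, since the talk doesn't report them:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Pooled two-proportion z-test. Returns (z statistic, two-sided p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# HYPOTHETICAL sample sizes of 100 per arm: 62% vs 59% is nowhere near
# conventional significance thresholds.
z, p = two_prop_z(0.62, 100, 0.59, 100)
print(round(z, 2), round(p, 2))
```

With 100 subjects per arm the p-value comes out around 0.66, far above any conventional threshold; the actual study's test would of course depend on its real sample sizes.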
What's the upshot? What we try to do in our paper is simply investigate people's beliefs and behavior in identity-affecting contexts. We wanted to [uncover] their intuitions. And our main findings are, quite clearly, that people act more selfishly in our simulated identity-affecting contexts — despite the normative belief that [doing so] is morally and socially inappropriate. I think it’s fascinating when beliefs diverge from actions. And we find evidence that the reason for it is excuse-driven behavior as a result of forced trade-offs.
We weren't expecting to go down this path, but forced trade-offs are ubiquitous. They're not just present in identity-affecting situations. Think about redistribution policies today [to address] inequality. Think about free trade versus tariffs. These situations have winners and losers, and when that’s the case, we need to explore whether people just defer to what benefits them the most.
Finally, I want to go back and reiterate the concerns of John Broome. I feel a bit weird giving this presentation, especially if it's put up on YouTube, because this might be dangerous research to be doing in terms of [how it might influence] people's folk attitudes [popular beliefs]. I have a strong prior that people don't think about all of the counterfactuals when [implementing] policies. If we disseminate this information and people start thinking differently, then maybe we'd lower the probability of dealing with problems that affect the long-term future, like climate change.
Hopefully, we can find a way to mitigate these behaviors — and no one becomes selfish because of our paper! But I do have some concerns. Thank you very much.
Phil Trammell [Moderator]: Thanks, Ben. In general, behavioral economics has a lot to teach us about how people will respond to these long-termist projects that we in the EA movement think about a lot, but that most people do not. (We often talk as if people are thinking along either longtermist lines or short-termist lines, but of course, there's a broad spectrum. No one's ever really on either end, and it's good to be aware of these [behavioral] quirks in ourselves, as well, so we can learn to better discipline our own altruistic behavior and not fall prey to convenient excuses.)
Your sequence of experiments does an excellent job of posing three questions and a good job of answering the first two. First, it asks whether people trade off interpersonal benefits [affecting two or more people] differently from intrapersonal ones [those affecting just one person], and finds that they do. Second, it establishes that they do so not out of a sincere belief that there's a morally important difference between the cases, but as an excuse. And third, it identifies two possible excuses they could be using, which you call the “forced trade-offs reason” and the “aggrieved witness reason,” and it rules out the latter. If you reveal to people that they were denied a potential distribution, decision makers still choose, in a sense, to “consume.”
I do have two concerns with where you go from there, though. It seems to me that there are excuses people might use beyond the two that you list, so ruling out the second doesn't imply the first. For instance, instead of the necessity of making a trade-off between people justifying a selfish act, it could be that something neatly symmetric to your aggrieved witness hypothesis is really what's going on. That is, it could be that the presence of a _thankful_ witness somewhat justifies the behavior. Depriving one person of the $10 feels okay, because if they ever confront us, we can always say, "Take it up with Person B. I helped them and they like [what I did.]" So, within a generation, the forced trade-offs excuse and what I'm calling the thankful witness excuse will always line up.
This brings me to my second concern: The analogy between these experiments and our position with respect to future generations could be a bit tenuous. Future generations living in a climate-damaged world will know that the climate damages are our fault. There will be aggrieved witnesses. And justifiably or unjustifiably, I'm sure they will, in fact, complain about the damages, just as we currently complain about many of the things our ancestors did, even though we wouldn't have existed if they hadn't done them.
Realistically, the person in your experiment [who receives $2] doesn't have a right to complain, because they would've gotten nothing. But, as a matter of psychology, future people [in that position] will probably feel aggrieved. And in the future case, there will be no thankful witness to offset the grievance. People will still care just as much about climate change after realizing that it's an identity-affecting case, because there will be people with grievances against us and there won't be anyone, in a sense, thanking us.
You might say, "Okay, but no one is actually around now to complain in the way that the person [in your experiment] would." But that's true for all decisions that affect future generations, regardless of whether they'd be identity-affecting. So your experiment could be shedding light on how people think about long-term decisions, rather than identity-affecting decisions.
Likewise, if the aggrieved witness hypothesis had been supported — that is, if people had been fine with giving to one person only if the person they didn’t give to was kept in the dark — then that wouldn't have pinned down how people think about future generations. Again, future generations take the place of the small beneficiary and they will be aggrieved witnesses. You end up projecting the aggrieved witness theory anyway, but I think the study leaves it open either way.
Ben: You just said a lot there, so I'll try and respond to a few of [your points]. First, we elicited beliefs about how grateful or angry decision makers think recipients would be about getting nothing, so we have data on that. [We did that] quite recently, which is why I didn't include it. I apologize for that. But we found no real difference in how grateful the $2 recipient was relative to the $10 recipient, and I don't think there was any real difference in anger either.
So, we always thought that choosing the second option would not only mean that one person wouldn’t ever know that they had been harmed, but also that the recipient would be grateful that they got something — since, if the decision maker had chosen the other option, they wouldn’t have. However, we didn't find that in the data.
Second, I didn't want to exclude the fact that [not having] an aggrieved witness would be doing some of the work [to make people behave selfishly]. We could have created a situation in our control case [to account for that] and I'm sure we'd find an increase in selfish behavior; it’s not that we reject the “no aggrieved witness” hypothesis. I think both mechanisms do drive [the selfish outcome]. But I think it's quite clear that [the forced trade-off], on its own, is doing enough to make people selfish. That's the important insight that we generated.
Second Moderator: Let’s go on to some audience questions. One audience member asks, "I'd be interested in including shared identity in the experiments — for instance, framing both of the people affected as part of the same in-group as the participants. In this case, it would be more similar to our descendants than random strangers, even if our descendants are, in fact, random strangers. Do you think this would have an effect?"
Ben: Yeah, that's really interesting. I wish we had more of a budget to explore these types of questions. I think it would. I think people's moral circles are quite narrow and our experiment was completely anonymous. You had no idea with whom you were matched; you just knew you were matched with someone, and that it was a causal chain of matching, not someone randomly assigned after you made your decision.
Even if we had just shown a picture of the people you were matched with, that might have had an effect. If we had used someone that the decision maker was friends with as one of the participants, and a stranger as another participant, I'm sure that would have had some effects on behavior as well.
Second Moderator: Another audience member asks, "What does ‘forced trade-offs’ mean in this context? Do you perhaps have some examples of types of forced trade-offs that exist in the climate change model that you presented?"
Ben: By “forced trade-off,” we simply mean you can only benefit one of the recipients. You can't benefit both. In a lot of economic policy decisions, there's no Pareto improvement available: no matter how you change things around, there are going to be winners and losers. That's why we have so many policy debates today.
For example, with redistribution, we can [make things better for] the people who are worse off in terms of income inequality. But that means some people are going to be made worse off relative to where they are now.
So in regards to climate change, something like a carbon tax will reduce our consumption now — and lower our wealth now. There will be some losers. And when you frame it in that context — when someone's going to counterfactually be made better or worse off, no matter which decision you make — it seems like, from what we've found, people just defer to benefiting themselves.
I'm really interested in exploring this in a policy case, with something like free trade or wealth distribution.