On future people, looking back at 21st century longtermism

post by Joe_Carlsmith · 2021-03-22T08:21:04.205Z · EA · GW · 13 comments

Contents

  I. Holy sh** the future
  II. Holy sh** the past
  III. Shared reality
  IV. Narratives and mistakes

(Cross-posted from Hands and Cities)

“Who knows, for all the distance, but I am as good as looking at you now, for all you cannot see me?”

– Whitman, Crossing Brooklyn Ferry

Roughly stated, longtermism is the thesis that what happens in the long-term future is profoundly important; that we in the 21st century are in a position to have a foreseeably positive and long-lasting influence on this future (for example, by lowering the risk of human extinction and other comparable catastrophes); and that doing so should be among the key moral priorities of our time.

This post explores the possibility of considering this thesis — and in particular, a certain kind of “holy sh**” reaction to its basic empirical narrative — from the perspective of future people looking back on the present day. I find a certain way of doing this a helpful intuition pump.

I. Holy sh** the future

“I announce natural persons to arise,
I announce justice triumphant,
I announce uncompromising liberty and equality,
I announce the justification of candor and the justification of pride…

O thicker and faster—(So long!)
O crowding too close upon me,
I foresee too much, it means more than I thought…”

– Whitman, So Long!

I think of many precise, sober, and action-guiding forms of longtermism — especially forms focused on existential risk in particular — as driven in substantial part by a more basic kind of “holy sh**” reaction, which I’ll characterize as follows:

  1. Holy sh** there could be a lot of sentient life and other important stuff happening in the future.
  2. And it could be so amazing, and shaped by people so much wiser and more capable and more aware than we are.
  3. Wow. That’s so crazy. That’s so much potential.
  4. Wait, so if we mess up and go extinct, or something comparable, all that potential is destroyed? The whole thing is riding on us? On this single fragile planet, with our nukes and bioweapons and Donald Trumps and ~1.5 centuries of experience with serious technology?
  5. Do other choices we make influence how that entire future goes?
  6. This is wild. This is extremely important. This is a crazy time to be alive.

This sort of “holy sh**” reaction responds to an underlying empirical narrative — one in which the potential size and quality of humanity’s future is (a) staggering, and (b) foreseeably at stake in our actions today.

Conservative versions of this narrative appeal to the spans of time that we might live on earth, and the number of people who might live during that time. Thus, if earth will be habitable for hundreds of millions of years, and can support some ten billion humans per century, some 10^16 humans might someday live on earth — ~a million times more than are alive today.
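The conservative arithmetic here is simple enough to check directly. A minimal sketch in Python — the 10^8-year habitable window (the lower end of "hundreds of millions of years") and the current population of ~8 billion are illustrative assumptions, not figures from the post:

```python
# Rough sanity check of the conservative earth-bound estimate.
# Assumptions (illustrative): earth habitable for ~10^8 more years,
# supporting ~10^10 humans per century.
habitable_years = 10**8          # lower end of "hundreds of millions of years"
humans_per_century = 10**10      # ~ten billion humans per century
centuries = habitable_years / 100
future_humans = humans_per_century * centuries
print(f"{future_humans:.0e}")    # -> 1e+16 future humans

alive_today = 8 * 10**9          # rough current world population
ratio = future_humans / alive_today
print(f"{ratio:.0e}")            # -> 1e+06, i.e. ~a million times more
```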

I’m especially interested here, though, in a less conservative version: in which our descendants eventually take to the stars, and spread out across our own galaxy, and perhaps across billions of other galaxies — with billions or even trillions of years to do, build, create, and discover what they see as worth doing, building, creating, and discovering (see Ord (2020), Chapter 8, for discussion).

Sometimes, a lower bound on the value at stake in this sort of possibility is articulated in terms of human lives (see e.g. Bostrom (2003)). And as I wrote about last week, I think that other things equal, creating wonderful human lives is a deeply worthwhile thing to do. But I also think that talking about the value of the future in terms of such lives should just be seen as a gesture — an attempt to point, using notions of value we’re at least somewhat familiar with, at the possibility of something profoundly good occurring on cosmic scales, but which we are currently in an extremely poor position to understand or anticipate (see the section on “sublime Utopias” here).

Indeed, I think that breezy talk about what future people might do, especially amongst utilitarian-types, often invokes (whether intentionally or no) a vision of a future that is somehow uniform, cold, metallic, voracious, regimented — a vision, for all its posited “goodness” and “optimality” and “efficiency,” that many feel intuitively repelled by (cf. the idea of “tiling” the universe with something, or of something-tronium — computronium, hedonium, etc).

This need not be the vision. Anticipating what future people will actually do is unrealistic, but I think it’s worth remembering that for any particular cosmic future you don’t like, future people can just make a better one. That is, the question isn’t whether some paper-thin, present-day idea of the cosmic future is personally appealing; or whether one goes in, more generally, for the kind of sci-fi aesthetic associated with thinking about space travel, brain emulations, and so forth. The question is whether future people, much wiser than ourselves, would be able to do something profoundly good on cosmic scales, if given the chance. I think they would. Extrapolating from the best that our current world has to offer provides the merest glimpse of what’s ultimately possible. For me, though, it’s more than enough.

But if we consider futures on cosmic scales — and we assume that the universe is not inhabited, at the relevant scales, by other intelligent life (see here for some discussion) — then the numbers at stake quickly get wildly, ridiculously, crazily large. Using lives as a flawed lower-bound metric, for example, Bostrom (2003) estimates that if the Virgo Supercluster contains, say, ten thousand billion stars, and each star can support at least ten billion biological humans, the Virgo Supercluster could support more than 10^23 humans at any one time. If roughly this sort of population could be sustained for, say, a hundred billion years, then at ~100 years per life, this would be some 10^32 human lives. And if we imagine forms of digital sentience instead of biological life, the numbers balloon even more ludicrously: Bostrom (2014, Chapter 6), for example, estimates 10^58 life-equivalents for the accessible universe as a whole. That is, 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000.
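The arithmetic behind these quoted figures is easy to verify. A quick sketch, using only the estimates from Bostrom as stated above:

```python
# Sanity check of the Bostrom-style figures quoted in the text.
stars = 10**13             # "ten thousand billion stars" in the Virgo Supercluster
humans_per_star = 10**10   # at least ten billion biological humans per star
concurrent = stars * humans_per_star
print(f"{concurrent:.0e}")   # -> 1e+23 humans at any one time

years = 10**11             # sustained for ~a hundred billion years
years_per_life = 100       # ~100 years per life
total_lives = concurrent * (years // years_per_life)
print(f"{total_lives:.0e}")  # -> 1e+32 human lives

# Bostrom (2014, Ch. 6) digital-sentience estimate, written out in full:
digital = 10**58
assert digital == int("1" + "0" * 58)
```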

Once we start talking about numbers like this, lots of people bounce off entirely — not just because the numbers are difficult to really grok, or because of the aesthetic reactions just discussed, but because the numbers are so alien and overwhelming that one suspects that any quantitative (and indeed, qualitative) ethical reasoning that takes them as inputs will end up distorted, or totalizing, or inhuman.

I think hesitations of this kind are very reasonable. And importantly, the case for working to improve the long-term future, or to reduce existential risk, need not depend on appeals to astronomical numbers. Indeed, as Ord (2020) discusses, existential risk seems like an important issue from a variety of perspectives. Nor need we countenance any sort of totalizing or inhuman response to the possibility of a future on cosmic scales.

But I also don’t think we should ignore or dismiss this possibility, just because the numbers in question are so unthinkably large. To the contrary: I think that the possibility of a future on cosmic scales is a very big deal.

Of course, the possibly immense value at stake in the long-term future is not, in itself, enough to get various practically-relevant forms of longtermism off the ground. Such a future also needs to be adequately large in expectation (e.g., once one accounts for ongoing risk of events like extinction), and it needs to be possible for us to have a foreseeably positive and sufficiently long-lasting influence on it. There are lots of open questions about this, which I won’t attempt to address here.

Rather, following Ord (2020), I’m mostly going to focus on an empirical picture in which the future is very large and positive in expectation, and in which we live during a period of danger to it unprecedented in the ~200,000-year history of our species — a period in which we are starting to develop technologies powerful enough to destroy all our potential, but where we have not yet reached the level of maturity necessary to handle such technologies responsibly (Ord calls this “The Precipice”). And I’ll assume, following Ord, that intentional action on the part of present-day humans can make a meaningful difference to the level of risk at stake.

Granting myself this more detailed empirical picture is granting a lot. Perhaps some will say: “well, obviously if I thought that humanity’s future was immense and amazingly good in expectation, and that there’s a substantive chance it gets permanently destroyed this century, and that we can lower the risk of this in foreseeable and non-trivial ways, I would be on board with longtermism. It’s just that I’m skeptical of those empirical premises for XYZ reason.” And indeed, even if we aren’t actively skeptical for particular, easily-articulable reasons, our intuitive hesitations might encode various forms of empirical uncertainty regardless. (See, e.g., epistemic learned helplessness for an example of where heuristics like this might come from. Basically, the idea is: “arguments, they can convince you of any old thing, just don’t go in for them roughly ever.”)

Skepticism of that kind isn’t the type I’m aiming to respond to here. Rather, the audience I have in mind is someone who looks at this empirical picture, believes it, and says: “meh.” My view is that we should not say “meh.” My view is that if such an empirical picture is even roughly right, some sort of “holy sh**” reaction, in the vein of 1-6 above, is appropriate, and important to remain in contact with — even as one moves cautiously in learning more, and thinking about how to respond practically.

What’s more, I think that imagining this empirical picture from the perspective of the future people in question can help make this sort of reaction intuitively accessible.

II. Holy sh** the past

“I have sung the body and the soul, war and peace have I sung, and the songs of life and death,
And the songs of birth, and shown that there are many births.”

– Whitman, So Long!

To get at this, let’s imagine that humans and their descendants do, in fact, go on to spread throughout the stars, and to do profoundly good things on cosmic scales, lasting hundreds of billions of years. Let’s say, for concreteness, that these good things look something like “building complex civilizations filled with wonderful forms of conscious life” — though this sort of image may well mislead.

And let’s imagine, too, that looking back, our descendants can see that there were in fact serious existential risks back in the 21st century — risks that irresponsible humans could exacerbate, and responsible humans foreseeably reduce; and that had humanity succumbed to such a risk, no other species, from earth or elsewhere, would ever have built a future of remotely comparable scale or value. What would these descendants think of the 21st century?

When I imagine this, I imagine them having a “holy sh**” reaction akin to the one I think of 21st-century longtermists as having. That is, I imagine them looking backwards through the aeons, and seeing the immensity of life and value and consciousness throughout the cosmos rewind and un-bloom, shrinking, across breathtaking spans of space and time, to an almost infinitesimal point — a single planet, a fleck of dust, where it all started. What Yudkowsky (2015) calls “ancient earth.”

Sometimes I imagine this as akin to playing backwards the time-lapse growth of an enormous tree, twisting and branching through time and space on cosmic scales — a tree whose leaves fill the firmament with something lush and vast and shining; a tree billions of years old, yet strong and intensely alive; a tree which grew, entirely, from one tiny, fragile seed.

And I imagine them zooming in on that seed, back to the very early history of the species that brought the cosmos to life, to the period just after their industrial revolution, when their science and technology really started to take off. A time of deep ignorance and folly and suffering, and a time, as well, of extreme danger to the entire future; but also a time in which life began to improve dramatically, and people began to see more clearly what was possible.

What would they think? Here I think of Carl Sagan’s words:

“They will marvel at how vulnerable the repository of all our potential once was, how perilous our infancy, how humble our beginnings, how many rivers we had to cross, before we found our way.”

Or, more informally, I imagine them going: “Whoa. Basically all of history, the whole thing, all of everything, almost didn’t happen.” I imagine them thinking about everything they see around them, and everything they know to have happened, across billions of years and galaxies — things somewhat akin, perhaps, to discoveries, adventures, love affairs, friendships, communities, dances, bonfires, ecstasies, epiphanies, beginnings, renewals. They think about the weight of things akin, perhaps, to history books, memorials, funerals, songs. They think of everything they love, and know; everything they and their ancestors have felt and seen and been a part of; everything they hope for from the rest of the future, until the stars burn out, until the story truly ends.

All of it started there, on earth. All of it was at stake in the mess and immaturity and pain and myopia of the 21st century. That tiny set of some ten billion humans held the whole thing in their hands. And they barely noticed.

III. Shared reality

“What is it then between us?
What is the count of the scores or hundreds of years between us?”

– Whitman, Crossing Brooklyn Ferry

There is a certain type of feeling one can get from engaging with someone from the past, who is writing about — or indeed, writing to — people in the future like yourself, in a manner that reflects a basic picture of things that you, too, share. I’ll call this feeling “shared reality” (apparently there is some sort of psychological literature that uses this term, and it’s used in practices like Circling as well, but I don’t necessarily have the meaning it has in those contexts in mind here).

I get this feeling a bit, for example, when I read this quote from Seneca, writing almost 2,000 years ago (quote from Ord (2020), Chapter 2):

“The time will come when diligent research over long periods will bring to light things which now lie hidden. A single lifetime, even though entirely devoted to the sky, would not be enough for the investigation of so vast a subject… And so this knowledge will be unfolded only through long successive ages.”

Reading this, I feel a bit like saying to Seneca: “Yep. You got the basic picture right.” That is, it seems to me like Seneca had his eye on the ball — at least in this case. He knew how much he didn’t know. He knew how much lay ahead.

I feel something similar, though less epistemic, and more interpersonal, with Whitman, who writes constantly about, and to, future people (thanks to Caroline Carlsmith for discussion and poem suggestions, and for inspiring this example; see also her work in response to Whitman, here). See, e.g., here:

“Full of life, sweet-blooded, compact, visible,
I forty years old the Eighty-third Year of The States,
To one a century hence, or any number of centuries hence,
To you, yet unborn, these, seeking you.

When you read these, I, that was visible, am become invisible;
Now it is you, compact, visible, realizing my poems, seeking me,
Fancying how happy you were, if I could be with you, and become your lover;
Be it as if I were with you. Be not too certain but I am now with you.”

And here:

“Others will enter the gates of the ferry and cross from shore to shore,
Others will watch the run of the flood-tide,
Others will see the shipping of Manhattan north and west, and the heights of Brooklyn to the south and east,
Others will see the islands large and small;
Fifty years hence, others will see them as they cross, the sun half an hour high,
A hundred years hence, or ever so many hundred years hence, others will see them,
Will enjoy the sunset, the pouring-in of the flood-tide, the falling-back to the sea of the ebb-tide…

It avails not, time nor place—distance avails not,
I am with you, you men and women of a generation, or ever so many generations hence,
Just as you feel when you look on the river and sky, so I felt,
Just as any of you is one of a living crowd, I was one of a crowd,
Just as you are refresh’d by the gladness of the river and the bright flow, I was refresh’d,
Just as you stand and lean on the rail, yet hurry with the swift current, I stood yet was hurried…

What thought you have of me now, I had as much of you—I laid in my stores in advance,
I consider’d long and seriously of you before you were born.”

That is, it feels like Whitman is living, and writing, with future people — including, in some sense, myself — very directly in mind. He’s saying to his readers: I was alive. You too are alive. We are alive together, with mere time as the distance. I am speaking to you. You are listening to me. I am looking at you. You are looking at me.

If the basic longtermist empirical narrative sketched above is correct, and our descendants go on to do profoundly good things on cosmic scales, I have some hope they might feel something like this sense of “shared reality” with longtermists in the centuries following the industrial revolution — as well as with many others, in different ways, throughout human history, who looked to the entire future, and thought of what might be possible.

In particular, I imagine our descendants looking back at those few centuries, and seeing some set of humans, amidst much else calling for attention, lifting their gaze, crunching a few numbers, and recognizing the outlines of something truly strange and extraordinary — that somehow, they live at the very beginning, in the most ancient past; that something immense and incomprehensible and profoundly important is possible, and just starting, and in need of protection.

I imagine our descendants saying: “Yes. You can see it. Don’t look away. Don’t forget. Don’t mess up. The pieces are all there. Go slow. Be careful. It’s really possible.” I imagine them looking back through time at their distant ancestors, and seeing some of those ancestors, looking forward through time, at them. I imagine eyes meeting.

IV. Narratives and mistakes

It appears to me I am dying.   

Hasten throat and sound your last,   
Salute me—salute the days once more. Peal the old cry once more.  

– Whitman, So Long!

To be clear: this is some mix between thought experiment and fantasy. It’s not a forecast, or an argument. In particular, the empirical picture I assumed above may just be wrong in various key ways. And even if it isn’t, future people need not think in our terms, or share our narratives. What’s salient to them may be entirely different from what’s salient to us. And regardless of the sympathy they feel towards post-industrial revolution longtermists, they will be in a position to see, too, our follies and mistakes, our biases and failures; what, in all of it, was just a game, something social, fanciful, self-serving — but never, really, real.

Indeed, even if longtermists are right about the big picture, and act reasonably in expectation, much if not all of what we try to do in service of the future will be wasted effort — attempts, for example, to avert catastrophes that were never going to happen, via plans that were never going to succeed. Future people would see this, too. And they would see the costs. They’d see what was left undone — what mistakes, and waste, and bad luck meant. And they would see everything else that mattered about our time, too.

Indeed, in bad cases, they might see our grand hopes for the future as naive, sad, silly, tragic — a product of a time before it all went so much more deeply wrong, when hope was still possible. Or they won’t exist to see us at all.

I’m mostly offering this image of future people looking back as a way of restating the “holy sh**” reaction I described in section I, through a certain lens. I’m not sure if it will land for anyone who didn’t have that reaction in the first place. But I find that it makes a difference for me.

13 comments


comment by JP Addison (jpaddison) · 2021-03-25T08:08:32.032Z · EA(p) · GW(p)

I love this, thanks for writing it. And I’ve generally been enjoying your posts, thanks for writing them. Only tangentially related to the core of the post, but:

[...] the numbers are so alien and overwhelming that one suspects that any quantitative (and indeed, qualitative) ethical reasoning that takes them as inputs will end up distorted, or totalizing, or inhuman.

This does seem like an important crux for longtermism and I’d be interested in more attempts to explore it.

comment by Erich_Grunewald · 2021-03-22T19:43:10.388Z · EA(p) · GW(p)

This got me thinking a bit about non-human animals. If it's true that (1) speciesism is irrational & there's no reason to favour one species over another just because you belong to that species; (2) the human species is or could very well be at a very early stage of its lifespan; & (3) we should work very hard to reduce prospects of a future human extinction, then shouldn't we also work very hard to reduce prospects of animal extinction right now? After all, many non-human animals are at much higher risk of going extinct than humans today.

You suggest that we humans could – if things go well – survive for billions or even trillions of years; since we only diverged from the last common ancestor with chimpanzees some four to 13 million years ago, that would put us at a very young age relatively. But if those are the timescales we consider, how about the potential in all the other species? It only took us humans some millions of years to go from apes to what we are today, after all. Who knows where the western black rhinoceros would be in a billion years if we hadn't killed all of them? Maybe we should worry about orangutan extinction at least half as much as we worry about human extinction?

Put differently, it's my impression – but I could well be wrong – that EAs focus on animal suffering & human extinction quite a bit, but not so much on non-human extinction. Is there merit to that question? If so, why? Has it been discussed anywhere? (A cursory search brought up very little, but I didn't try too hard.)

comment by Erich_Grunewald · 2021-03-22T19:44:46.665Z · EA(p) · GW(p)

I did find this post [EA · GW] which sort of touches on the same question.

comment by jackmalde · 2021-03-22T21:59:53.329Z · EA(p) · GW(p)

Thank you for raising non-human animals. I believe that longtermists don't talk about non-human animals enough. That is one reason I wrote that post that you have linked to.

In the post I actually argue that non-human animal extinction would be good. This is because it isn't at all clear that non-human animals live good lives. Even if some or many of them do live good lives, if they go extinct we can simply replace them with more humans which seems preferable because humans probably have higher capacity for welfare and are less prone to being exploited (I'm assuming here that there is no/little value of having species diversity). There are realistic possibilities of terrible animal suffering occurring in the future, and possibly even getting locked-in to some extent, so I think non-human animal extinction would be a good thing.

Similarly (from a longtermist point of view) who really cares if orangutans go extinct? The space they inhabit could just be taken over by a different species. The reason why longtermists really care if humans go extinct is not down to speciesism, but because humans really do have the potential to make an amazing future. We could spread to the stars. We could enhance ourselves to experience amazing lives beyond what we can now imagine. We may be able to solve wild animal suffering. Also, to return to my original point, we tend to have good lives (at least this is what most people think). These arguments don't necessarily hold for other species that are far less intelligent than humans and so are, in my humble opinion, mainly a liability from a longtermist's point of view.

comment by Erich_Grunewald · 2021-03-22T22:58:01.411Z · EA(p) · GW(p)

In the post I actually argue that non-human animal extinction would be good. This is because it isn't at all clear that non-human animals live good lives.

Good for whom? Obviously humans' lives seem good to humans, but it could well be that orangutans' lives are just as good & important to orangutans as our lives are to us. Pigs apparently love to root around in straw; that doesn't seem too enticing to me, but it is probably orgasmic fun for pigs!

(This is where I ought to point out that I'm not a utilitarian or even a consequentialist, so if we disagree, that's probably why.)

Obviously animals' lives in factory farms are brutal & may not be worth living, but that is not a natural or necessary condition -- it's that way only because we make it so. It seems unfair to make a group's existence miserable & then to make them go extinct because they are so miserable!

Even if some or many of them do live good lives, if they go extinct we can simply replace them with more humans which seems preferable because humans probably have higher capacity for welfare and are less prone to being exploited (I'm assuming here that there is no/little value of having species diversity). There are realistic possibilities of terrible animal suffering occurring in the future, and possibly even getting locked-in to some extent, so I think non-human animal extinction would be a good thing.

That humans have a higher capacity of welfare seems questionable to me, but I guess we'd have to define well-being before proceeding. Why do you think so? Is it because we are more intelligent & therefore have access to "higher" pleasures?

Similarly (from a longtermist point of view) who really cares if orangutans go extinct?

I guess it's important here to distinguish between orangutans as in the orangutan species & orangutans as in the members of that species. I'm not sure we should care about species per se. But we should care about individual orangutans, & it seems plausible to me that they care whether they go extinct. Large parts of their lives are after all centered around finding mates & producing offspring. So to the extent that anything is important to them (& I would argue that things can be just as important to them as they can be to us), surely the continuation of their species/bloodline is.

The space they inhabit could just be taken over by a different species. The reason why longtermists really care if humans go extinct is not because of speciesism, but because humans really do have the potential to make an amazing future. We could spread to the stars. We could enhance ourselves to experience amazing lives beyond what we can now imagine. We may be able to solve wild animal suffering. Also, to return to my original point, we tend to have good lives (at least this is what most people think). These arguments don't necessarily hold for other species that are far less intelligent than humans and so are, in my humble opinion, mainly a liability from a longtermist's point of view.

Most of that sounds like a great future for humans. Of course if you optimise for the kind of future that is good for humans, you'll find that human extinction seems much worse than extinction of other species. But maybe there's an equally great future that we can imagine for orangutans (equally great for the orangutans, that is, although I don't think they are actually commensurable), full of juicy fruit & sturdy branches. If so, shouldn't we try to bring that about, too?

We may be able to solve wild animal suffering & that'd be great. I could see an "argument from stewardship" where humans are the species most likely to be able to realise the good for all species. (Though I'll note that up until now we seem rather to have made life quite miserable for many of the other animals.)

comment by jackmalde · 2021-03-22T23:40:23.760Z · EA(p) · GW(p)

This is where I ought to point out that I'm not a utilitarian or even a consequentialist, so if we disagree, that's probably why.

Yes I would say that I am a consequentialist and, more specifically a utilitarian, so that may be doing a lot of work in determining where we disagree. 

That humans have a higher capacity of welfare seems questionable to me, but I guess we'd have to define well-being before proceeding. Why do you think so? Is it because we are more intelligent & therefore have access to "higher" pleasures?

I do have a strong intuition that humans are simply more capable of having wonderful lives than other species, and this is probably down to higher intelligence. Therefore, given that I see no intrinsic value and little instrumental value in species diversity, if I could play god I would just make loads of humans (assuming total utilitarianism is true). I could possibly be wrong that humans are more capable of wonderful lives though.

It seems unfair to make a group's existence miserable & then to make them go extinct because they are so miserable!

Life is not fair. The simple point is that non-human animals are very prone to exploitation (factory farming is case in point). There are risks of astronomical suffering that could be locked in in the future. I just don't think it's worth the risk so, as a utilitarian, it just makes sense to me to have humans over chickens. You could argue getting rid of all humans gets rid of exploitation too, but ultimately I do think maximising welfare just means having loads of humans so I lean towards being averse to human extinction.

But we should care about individual orangutans, & it seems plausible to me that they care whether they go extinct.

Absolutely I care about orangutans and the death of orangutans that are living good lives is a bad thing. I was just making the point that if one puts their longtermist hat on these deaths are very insignificant compared to other issues (in reality I have some moral uncertainty and so would wear my short-termist cap too, making me want to save an orangutan if it was easy to do so).

Most of that sounds like a great future for humans.

Yes indeed. My utilitarian philosophy doesn't care that we would have loads of humans and no non-human animals. Again, this is justified due to lower risks of exploitation for humans and (possibly) greater capacities for welfare. I just want to maximise welfare and I don't care who or what holds that welfare.

comment by Erich_Grunewald · 2021-03-23T19:46:28.646Z · EA(p) · GW(p)

I do have a strong intuition that humans are simply more capable of having wonderful lives than other species, and this is probably down to higher intelligence. Therefore, given that I see no intrinsic value and little instrumental value in species diversity, if I could play god I would just make loads of humans (assuming total utilitarianism is true). I could possibly be wrong that humans are more capable of wonderful lives though.

I'd be skeptical of that for a few reasons: (1) I think different things are good for different species due to their different natures/capacities (the good here being whatever it is that wonderful lives have a lot of), e.g. contemplation is good for humans but not pigs & rooting around in straw is good for pigs but not humans; (2) I think it doesn't make sense to compare these goods across species, because it means different species have different standards for goodness; & (3) I think it is almost nonsensical to ask, say, whether it would be better for a pig to be a human, or for a human to be a dog. But I recognise that these arguments aren't particularly tractable for a utilitarian!

Life is not fair. The simple point is that non-human animals are very prone to exploitation (factory farming is a case in point). There are risks of astronomical suffering that could be locked in in the future. I just don't think it's worth the risk, so, as a utilitarian, it just makes sense to me to have humans over chickens. You could argue that getting rid of all humans gets rid of exploitation too, but ultimately I do think maximising welfare just means having loads of humans, so I lean towards being averse to human extinction.

That life is not fair in the sense that different people (or animals) are dealt different cards, so to speak, is true -- the cosmos is indifferent. But moral agents can be fair (in the sense of just), & in this case it's not Life making those groups' existence miserable, it's moral agents who are doing that.

I think I would agree with you on the prone-to-exploitation argument if I were a utility maximiser, with the possible objection that, if humans reach the level of wisdom & technology needed to humanely euthanise a species in order to reduce suffering, possibly they would also be wise & capable enough to implement safeguards against future exploitation of that species instead. But that objection is still not good enough if one believes that humans have a higher capacity as receptacles of utility. If I were a utilitarian who believed that, then I think I would agree with you (without having thought about it too much).

Absolutely I care about orangutans, and the death of orangutans that are living good lives is a bad thing. I was just making the point that if one puts on their longtermist hat, these deaths are very insignificant compared to other issues (in reality I have some moral uncertainty and so would wear my short-termist cap too, making me want to save an orangutan if it were easy to do so).

Got it. I guess my original uncertainty (& this is not something I've thought a lot about at all, so bear with me here) was whether longtermist considerations shouldn't cause us to worry about orangutan extinction risks, too, given that orangutans are not so dissimilar from what we were a few million years ago. So that in a very distant future they might have the potential to be something like humans, or more? That depends a bit on how rare a thing human evolution was, which I don't know.

Yes indeed. My utilitarian philosophy doesn't care that we would have loads of humans and no non-human animals. Again, this is justified due to lower risks of exploitation for humans and (possibly) greater capacities for welfare. I just want to maximise welfare and I don't care who or what holds that welfare.

By the way, I should mention that I think your argument for species extinction is reasonable & I'm glad there's someone out there making it (especially given that I expect many people to react negatively towards it, just on an emotional level). If I thought that goodness was not necessarily tethered to beings for whom things can be good or bad, but on the contrary that it was something that just resides in sentient beings but can be independently observed, compared & summed up, well, then I might even agree with it.

comment by MichaelA · 2021-04-19T18:39:44.782Z · EA(p) · GW(p)

I've only skimmed this thread, but I think you and Jack Malde both might find the following Forum wiki entries and some of the associated tagged posts interesting:

To state my own stance very briefly and with insufficient arguments and caveats:

  • I think it makes sense to focus on humans for many specific purposes, due to us currently being the only real "actors" or "moral agents" in play
  • I think it makes sense to think quite seriously about long-term effects on non-humans (including but not limited to nonhuman animals)
  • I think it might be the case that the best way to optimise those effects is to shepherd humans towards a long reflection
  • I think Jack is a bit overconfident about (a) the idea that the lives of nonhuman animals are currently net negative and (b) the idea that, if that's the case or substantially likely to be the case, that would mean the extinction of nonhuman animals would be a good thing
    • I say more about this in comments on the post of Jack's that you linked to
    • But I'm not sure this has major implications, since I think in any case the near-term effects we should care about most probably centre on human actions, human values, etc. (partly in order to have good long-term effects on non-humans)
comment by Erich_Grunewald · 2021-04-19T19:12:51.391Z · EA(p) · GW(p)

Thanks Michael!

comment by jackmalde · 2021-03-23T20:08:39.635Z · EA(p) · GW(p)

I think it is almost nonsensical to ask, say, whether it would be better for a pig to be a human, or for a human to be a dog

To clarify, I'm not asking that question. I class myself as a hedonistic utilitarian, which just means that I want to maximise the balance of positive over negative experiences. So I'm not saying that it would be better for a pig to be a human, just that if we were to replace a pig with a human we may increase total welfare (if the human has a greater capacity for welfare than the pig). I agree that determining whether humans have greater capacity for welfare than pigs isn't particularly tractable though -- I too haven't really read up much on this.

whether longtermist considerations shouldn't cause us to worry about orangutan extinction risks, too, given that orangutans are not so dissimilar from what we were some few millions of years ago. So that in a very distant future they might have the potential to be something like human, or more?

That's an interesting possibility! I don't know enough biology to comment on the likelihood. 

I should mention that I think your argument for species extinction is reasonable & I'm glad there's someone out there making it

To be honest I'm actually quite unsure if we should be trying to make all non-human animals go extinct. I don't know how tractable that is or what the indirect effects would be. I'm saying, putting those considerations aside, that it would probably be good from a longtermist point of view.

The exception is of course factory-farmed animals. I do hope they go extinct and I support tangible efforts to achieve this, e.g. plant-based and clean meat.

comment by antimonyanthony · 2021-03-23T23:58:19.317Z · EA(p) · GW(p)

But we should care about individual orangutans, & it seems plausible to me that they care whether they go extinct. Large parts of their lives are after all centered around finding mates & producing offspring. So to the extent that anything is important to them (& I would argue that things can be just as important to them as they can be to us), surely the continuation of their species/bloodline is.

I'm pretty skeptical of this claim. It's not evolutionarily surprising that orangutans (or humans!) would do stuff that decreases their probability of extinction, but this doesn't mean the individuals "care" about the continuation of their species per se. It seems we only have sufficient evidence to say they care about doing the sorts of things that tend to promote their own survival and reproductive success (and their relatives', in proportion to strength of relatedness), no?

comment by niklas · 2021-03-25T10:47:28.217Z · EA(p) · GW(p)

really beautifully written, thank you

comment by MichaelA · 2021-04-19T18:31:28.560Z · EA(p) · GW(p)

Thanks, I thought this was really interesting and well-written. (Well, weirdly enough, I got nothing out of the quoted poems, but I found the rest of it quite poetic and moving!)

I especially liked this passage:

I imagine our descendants looking back at those few centuries, and seeing some set of humans, amidst much else calling for attention, lifting their gaze, crunching a few numbers, and recognizing the outlines of something truly strange and extraordinary — that somehow, they live at the very beginning, in the most ancient past; that something immense and incomprehensible and profoundly important is possible, and just starting, and in need of protection.

I imagine our descendants saying: “*Yes*. You can see it. Don’t look away. Don’t forget. Don’t mess up. The pieces are all there. Go slow. Be careful. It’s really possible.” I imagine them looking back through time at their distant ancestors, and seeing some of those ancestors, looking forward through time, at them. I imagine eyes meeting.

I immediately went and quoted that in a comment on the post "What quotes do you find most inspire you to use your resources (effectively) to help others?", as I expect I'll find that passage inspiring in future and that other people might do so as well.