Not getting carried away with reducing extinction risk?

post by jackmalde · 2019-06-01T16:42:19.509Z · score: 9 (14 votes) · EA · GW · 20 comments

This is a question post.


I get the sense that some in the EA community would solely focus on reducing extinction risk if they could have it their way. But is there a danger with such an extreme focus on reducing extinction risk that we end up successfully prolonging a world that may not even be desirable?

It seems at least slightly plausible that the immense suffering of wild animals could mean that the sum of utilities in the world is negative (please let me know if you find this to be a ludicrous claim).

If this is true, and if hypothetically things were to stay this way, it may not be the case that reducing extinction risk is doing the most good, even under a 'total utilitarian' population axiology.

Whilst I would like to see us flourish into the far future, I think we may have to focus on the 'flourish' part as well as the 'far future' part. It seems to me that reducing extinction risk may only be a worthwhile endeavour if it is done alongside other things such as eradicating wild animal suffering.

What do you think? Can solely focusing on extinction risk be doing the most good or do we need to do it in tandem with other things that actually make the world worth prolonging?

Answers

answer by Moses · 2019-06-01T17:51:33.672Z · score: 9 (9 votes) · EA · GW

If humanity wipes itself out, those wild animals are going to continue suffering forever.

If we only partially destroy civilization, we're going to set back the solution to problems like wild animal suffering until (and if) we rebuild civilization. (And in the meantime, we will suffer as our ancestors suffered).

If we nuke the entire planet down to bedrock or turn the universe into paperclips, that might be a better scenario than the first one in terms of suffering, but then all of the anthropic measure is confined to the past, where it suffers, and we're forgoing the creation of an immeasurably larger measure of extremely positive experiences to balance things out.

On the other hand, if we just manage to pass through the imminent bottleneck of potential destruction and emerge victorious on the other side—where we have solved coordination and AI—we will have the capacity to solve problems like wild animal suffering, global poverty, or climate change with a snap of our fingers, so to speak.

That is to say, problems like wild animal suffering will either be solved with trivial effort a few decades from now, or we will have much, much bigger problems. Either way (this is my personal view, not necessarily that of other "long-termists"), current work on these issues will be mostly in vain.

comment by SiebeRozendal · 2019-06-04T10:45:30.921Z · score: 5 (3 votes) · EA · GW
"If humanity wipes itself out, those wild animals are going to continue suffering forever."

Not forever. Only until the planet becomes too hot to support complex life (<1 billion years from now). Given that the universe can support life for 1-100 trillion years, this is a relatively short amount of suffering compared to what could be.

And only on our planet! That is much more restricted than the suffering that could spread if humanity remains alive. (Although, as I write in my own answer, I don't think humanity would spread wild animals beyond the solar system.)

comment by jackmalde · 2019-06-04T09:37:05.401Z · score: 1 (1 votes) · EA · GW

Thanks for this. I do wonder about the prospect of 'solving' extinction risk. Do you think EAs who are proponents of reducing extinction risk now actually expect these risks to become sufficiently small that moving focus onto something like animal suffering would ever be justified? I'm not convinced they do, as extinction in their eyes is so catastrophically bad that any small reduction in probability would likely dominate other actions in terms of expected value. Do you think this is an incorrect characterisation?

comment by Moses · 2019-06-04T17:58:32.593Z · score: 1 (1 votes) · EA · GW

I'm going to speak for myself again:

I view our current situation as a fork in the road. Either very bad outcomes or very good ones. There is no slowing down. There is no scenario where we linger before the fork for decades or centuries.

As far as very bad outcomes go, I'm not worried about extinction that much; dead people cannot suffer, at least. What I'm most concerned about is locking ourselves into a state of perpetual hell (e.g. undefeatable totalitarianism, or something like Christiano's first tale of doom) and then spreading that hell across the universe.

The very good outcomes would mean that we're recognizably beyond the point where bad things could happen; we've built a superintelligence, it's well-aligned, and it's clear to everyone that there are no risks anymore. The superintelligence will prevent wars, pandemics, asteroids, supervolcanoes, disease, death, poverty, suffering, you name it. There will be no such thing as "existential risk".

Of course, I'm keeping an eye on the developments and I'm ready to reconsider this position at any time; but right now this is the way I see the world.

answer by avacyn · 2019-06-04T20:50:31.303Z · score: 7 (3 votes) · EA · GW

Since most of the responders here are defending x-risk reduction, I wanted to chime in and say that I think your argument is far from ludicrous and is in fact why I don't prioritize x-risk reduction, even as a total utilitarian.

The main reason it's difficult for me to get on board with pro-x-risk-reduction arguments is that they rely heavily on projections about what might happen in the future, which seem very prone to missing important considerations. For example, saying that wild animal suffering will be trivially easy to solve once we have an aligned AI, or saying that the future is more likely to be optimized for value rather than disvalue, both seem overconfident and speculative (even if you can give some plausible-sounding arguments).

If I were more comfortable with projections about what will happen in the far future, I'm still not sure I would end up favoring x-risk reduction. Take AI x-risk: it's possible that we have a truly aligned AI, or that we have a paperclip maximizer, but it's also possible that we have a powerful general AI whose values are not as badly misaligned as a paperclip maximizer's, but that are somehow dependent on the values of its creators. In this scenario, it seems crucially important to speed up the improvement of humanity's values.

I agree with Moses in that I much prefer a scenario where everything in our light cone is turned into paperclips to one where, for example, humans are wiped out by some deadly pathogen but other life continues to exist here and elsewhere in the universe. This doesn't necessarily mean that I favor biorisk reduction over AI risk reduction, since AI risk reduction also has the favorable effect of making a remarkable outcome (aligned AI) more likely. I don't know which one I'd favor more, all things considered.

comment by jackmalde · 2019-06-05T16:48:34.246Z · score: 1 (1 votes) · EA · GW

Thanks :)

answer by RavenclawPrefect · 2019-06-01T18:45:32.466Z · score: 5 (3 votes) · EA · GW

If one doesn't have strong time discounting in favor of the present, the vast majority of the value that can be theoretically realized exists in the far future.

As a toy model, suppose the world is habitable for a billion years, but there is an extinction risk in 100 years which requires substantial effort to avert.

If resources are dedicated entirely to mitigating extinction risks, there is net -1 utility each year for 100 years but a 90% chance that the world can be at +5 utility every year afterwards once these resources are freed up for direct work. (In the extinction case, there is no more utility to be had by anyone.)

If resources are split between extinction risk and improving current subjective experience, there is net +2 utility each year for 100 years, and a 50% chance that the world survives to the positive long-term future state above. It's not hard to see that the former case has massively higher total utility, and remains so under almost any numbers in the model so long as we can expect billions of years of potential future good.
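To make the comparison concrete, here is a minimal sketch of the toy model above. The habitable horizon, per-year utilities, and survival probabilities are exactly those stated in the answer; everything else (variable names, the helper function) is just illustrative.

```python
# Toy expected-utility comparison for the model in the answer above.
# Assumptions taken from the answer: the world is habitable for 1e9 years,
# an extinction risk must be handled during the first 100 years, and utility
# per year afterwards is +5 if we survive (0 if we go extinct).

HABITABLE_YEARS = 1_000_000_000
RISK_YEARS = 100
POST_RISK_YEARS = HABITABLE_YEARS - RISK_YEARS
UTILITY_PER_YEAR_AFTER = 5  # once resources are freed up for direct work

def expected_total_utility(utility_per_year_now: float, survival_prob: float) -> float:
    """Utility accrued while averting the risk, plus the survival-weighted
    long-run payoff (extinction yields zero further utility)."""
    near_term = utility_per_year_now * RISK_YEARS
    long_term = survival_prob * UTILITY_PER_YEAR_AFTER * POST_RISK_YEARS
    return near_term + long_term

# All resources on extinction risk: -1 utility/year now, 90% survival.
ev_all_in = expected_total_utility(-1, 0.9)
# Resources split with present-day flourishing: +2 utility/year now, 50% survival.
ev_split = expected_total_utility(2, 0.5)

print(f"All-in on x-risk: {ev_all_in:.3e}")  # ~4.5e9 expected utility
print(f"Split resources:  {ev_split:.3e}")   # ~2.5e9 expected utility
```

Under these assumptions the all-in strategy comes out roughly 1.8 times better in expectation, and the gap widens as the assumed habitable horizon grows.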

A model like this relies crucially on the idea that at some point we can stop diverting resources to global catastrophic risk, or at least do so less intensively, but I think this is a reasonable assumption. We currently live in an unusually risk-prone world; it seems very plausible that pandemic risk, nuclear warfare, catastrophic climate change, unfriendly AGI, etc. can all be safely dealt with within a few centuries if modern civilization endures long enough to keep working on them.

One's priorities can change over time as their marginal value shifts; ignoring other considerations for the moment doesn't preclude focusing on them once we've passed various x-risk hurdles.


comment by jackmalde · 2019-06-04T09:38:48.227Z · score: 1 (1 votes) · EA · GW

Thanks for this. I'd like to ask you the same question I'm asking others in this thread.

I do wonder about the prospect of 'solving' extinction risk. Do you think EAs who are proponents of reducing extinction risk now actually expect these risks to become sufficiently small that moving focus onto something like animal suffering would ever be justified? I'm not convinced they do, as extinction in their eyes is so catastrophically bad that any small reduction in probability would likely dominate other actions in terms of expected value. Do you think this is an incorrect characterisation?

comment by SiebeRozendal · 2019-06-04T11:00:31.573Z · score: 4 (3 votes) · EA · GW

I think EAs believe that this is definitely possible, most likely through the creation of an aligned superintelligence. That could reduce x-risk to infinitesimal levels, if there are no other intelligent actors that we could encounter. I think the general strategy could be summarized as 'reduce extinction risk as much as possible until we can safely build and deploy an aligned superintelligence, then let the superintelligence (dis)solve all other problems'.

After the creation of an aligned superintelligence, society's resources could focus on other problems. However, some people also think there are no other problems left once there is an aligned superintelligence: with superintelligence, all the other problems like animal suffering are trivial to solve.

But most people - including myself - seem to not have given very much thought to what other problems might still exist in an era of superintelligence.

If you believe a strong version of superintelligence is impossible, this complicates the whole picture, but you'd at least have to include the consideration that in the future we will likely have substantially higher (individual and/or collective) intelligence.

answer by AllAmericanBreakfast · 2019-06-13T14:02:43.167Z · score: 1 (1 votes) · EA · GW

This argument is called moral cluelessness.

https://80000hours.org/podcast/episodes/hilary-greaves-global-priorities-institute/

answer by SiebeRozendal · 2019-06-04T10:42:52.855Z · score: 1 (1 votes) · EA · GW

Hey Jack, I think this is a great question and I dedicate a portion of my MA philosophy thesis to this. Here are some general points:

  • It is likely that the expected moral value of the future is dominated by futures in which there is optimization for moral (dis)value. Since we would expect optimization for value to be much more likely than optimization for disvalue, the expected moral value of the future seems positive (unless you adhere to strict/lexical negative utilitarianism). This claim depends on the difference between possible worlds that are optimized for (dis)value and worlds that are subject to other pressures, such as competition; this difference is not obviously large (see the Price of Anarchy).
  • There seems to be substantial convergence between improving the quality of the long-term future and reducing extinction risk. Things that can bring humanity to extinction (superintelligence, a virus, nuclear winter, extreme climate change) can also be very bad for the long-term future of humanity if they do not lead to extinction. Therefore, reducing extinction risk also has very positive effects on the quality of the long-term future. Potential suffering risk from misaligned AI is one example. In addition, I think global catastrophes, if they don't lead to extinction, create a negative trajectory change in expectation. This could be because civilizational collapse puts us on a worse trajectory, though the most likely outcome is what I call general global disruption: civilization doesn't quite collapse, but things are shaken up a lot. From my thesis:
Should we expect global disruption to be (in expectation) good or bad for the value of the future? This is speculative, but Beckstead lays out some reasons to expect that global disruption will put humanity most certainly on a worse trajectory: it may reverse social progress, limit the ability to adequately regulate the development of dangerous technologies, open an opportunity for authoritarian regimes to take hold, or increase inter-state conflict (Beckstead, 2015). We can also approach the issue abstractly: disruption can be seen as injecting more noise into a previously more stable global system, increasing the probability that the world settles into a different semi-stable configuration. If there are many more undesirable configurations of the world than desirable ones, increasing randomness is more likely to lead to an undesirable state of the world. I am convinced that, unless we are currently in a particularly bad state of the world, global disruption would have a very negative effect (in expectation) on the value of the long-term future.
  • I find it unlikely that we would export wild-animal suffering beyond our solar system. It takes a lot of time to move to different solar systems, and I don't think future civilizations will require a lot of wilderness: it's a very inefficient use of resources. So I believe the amount of suffering is relatively small from that source. However, I think some competitive dynamics between digital beings could create astronomical amounts of suffering, and this could come about if we focus only on reducing extinction risk.
  • Whether you want to focus on the quality of the future also depends on your moral views. Some people weigh preventing future suffering much more heavily than enabling the creation of future happiness. For them, part of the value of reducing extinction risk is taken away, and they will have stronger reasons to focus on the quality of the future.
  • I found the post by Brauner & Grosse-Holz and the post by Beckstead most helpful. I know that Haydn Belfield (CSER) is currently working on a longer article about the long-term significance of reducing Global Catastrophic Risks.

In conclusion, I think reducing extinction risk is very positive in terms of expected value, even if one expects the future to be negative! However, depending on different parameters, there might be better options than focusing on extinction risk. Candidates include particular parts of moral circle expansion and suffering risks from AI.

I can send you the current draft of my thesis in case you're interested, and will post it online once I have finished it.

comment by Lukas_Finnveden · 2019-06-05T21:21:30.582Z · score: 4 (3 votes) · EA · GW
"We can also approach the issue abstractly: disruption can be seen as injecting more noise into a previously more stable global system, increasing the probability that the world settles into a different semi-stable configuration. If there are many more undesirable configurations of the world than desirable ones, increasing randomness is more likely to lead to an undesirable state of the world. I am convinced that, unless we are currently in a particularly bad state of the world, global disruption would have a very negative effect (in expectation) on the value of the long-term future."

If there are many more undesirable configurations of the world than desirable ones, then we should, a priori, expect that our present configuration is an undesirable one. Also, if the only effect of disruption was to re-randomize the world order, then the only thing you'd need for disruption to be positive is for the current state to be worse than the average civilisation from the distribution. Maybe this is what you mean by "particularly bad state", but intuitively, I interpret that as more like the bottom 15%.

There are certainly arguments to make for our world being better than average. But I do think that you actually have to make those arguments, and that without them, this abstract model won't tell you if disruption is good or bad.

comment by SiebeRozendal · 2019-06-24T11:17:32.995Z · score: 1 (1 votes) · EA · GW

Hmm, I have not phrased my idea clearly, so thank you for your comment, because now I am improving my concepts :)

"If there are many more undesirable configurations of the world than desirable ones, then we should, a priori, expect that our present configuration is an undesirable one."

I agree with this. But that does not imply that disruption would not have a negative effect in expectation.

I don't see disruption as 're-randomization' and picking any new configuration out of the space of all possible futures. Rather, I see disruption as a 'random departure' from a current state, and not each possible future is equally close to the current state. And because I expect there to almost always be more ways to go 'down' than 'up', I expect this random departure to be (highly) negative.
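To illustrate the two readings being contrasted here, the following is a minimal toy simulation. All numbers, the 0-1 'world value' scale, and the downward-biased step are illustrative assumptions of mine, not anything claimed in the thread: under pure re-randomization, disruption helps whenever the current state is below the distribution's mean, whereas a departure biased toward 'down' is negative in expectation regardless of the starting point.

```python
import random

random.seed(0)

# Illustrative toy model: the "value" of the world is a number in [0, 1].
CURRENT_STATE = 0.45   # assumed to be a bit below the 0.5 mean of the space
TRIALS = 100_000

# Reading 1: disruption fully re-randomizes the world order.
rerandomized = [random.uniform(0.0, 1.0) for _ in range(TRIALS)]

# Reading 2: disruption is a random departure from the current state, with
# more ways to go "down" than "up" (modelled as a downward-biased step).
def random_departure(state: float) -> float:
    step = random.uniform(-0.3, 0.1)  # biased toward losses (illustrative)
    return min(1.0, max(0.0, state + step))

departures = [random_departure(CURRENT_STATE) for _ in range(TRIALS)]

print(f"Current state:               {CURRENT_STATE:.3f}")
print(f"Mean after re-randomization: {sum(rerandomized) / TRIALS:.3f}")  # ~0.50 (better)
print(f"Mean after biased departure: {sum(departures) / TRIALS:.3f}")    # ~0.35 (worse)
```

With these made-up numbers, re-randomization looks positive from a below-average starting point, while the biased-departure reading is negative, which is the crux of the disagreement between these two comments.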

comment by Denkenberger · 2019-06-07T01:30:20.777Z · score: 4 (3 votes) · EA · GW
"I find it unlikely that we would export wild-animal suffering beyond our solar system. It takes a lot of time to move to different solar systems, and I don't think future civilizations will require a lot of wilderness: it's a very inefficient use of resources. So I believe the amount of suffering is relatively small from that source. However, I think some competitive dynamics between digital beings could create astronomical amounts of suffering, and this could come about if we focus only on reducing extinction risk."

Agreed, but the other possibility is that there will be simulations of wild animals in the future. So I think spreading the meme in the AI community that wild animals can suffer could be valuable.

comment by jackmalde · 2019-06-05T16:48:59.190Z · score: 1 (1 votes) · EA · GW

Thanks! Would be very interesting to see your thesis once it is finished.

20 comments

Comments sorted by top scores.

comment by Linch · 2019-06-01T23:09:58.676Z · score: 18 (8 votes) · EA · GW

I think this summarizes the core arguments for why focusing on extinction risk prevention is a good idea. https://www.effectivealtruism.org/articles/the-expected-value-of-extinction-risk-reduction-is-positive/



comment by jackmalde · 2019-06-04T09:35:18.087Z · score: 1 (1 votes) · EA · GW

Thanks for this, will definitely give it a read.

comment by rohinmshah · 2019-06-02T16:41:47.082Z · score: 14 (6 votes) · EA · GW

I'm pretty sure all the people you're thinking about won't make claims any stronger than "All of EA's resources should currently be focused on reducing extinction risks". Once extinction risks are sufficiently small, I would expect them to switch to focusing on flourishing.

comment by jackmalde · 2019-06-04T09:35:00.403Z · score: 1 (1 votes) · EA · GW

Thanks. I do wonder though if EAs who are proponents of reducing extinction risk now actually expect these risks to become sufficiently small that moving focus onto something like animal suffering would ever be justified. I'm not convinced they do, as extinction in their eyes is so catastrophically bad that any small reduction in probability would likely dominate other actions in terms of expected value. Do you think this is an incorrect characterisation?

comment by rohinmshah · 2019-06-04T20:37:11.612Z · score: 3 (2 votes) · EA · GW

Even with the astronomical waste argument, which is the most extreme version of this argument, at some point you have astronomical numbers of people living, and the rest of the future isn't tremendously large in comparison, and so focusing on flourishing at that point makes more sense. Of course, this would be quite far in the future.

In practice, I expect the bar comes well before that point, because if everyone is focusing on x-risks, it will become harder and harder to reduce x-risks further, while it stays just as easy to focus on flourishing.

Note that in practice many more people in the world focus on flourishing than on x-risks, so the few long-term-focused people might end up always prioritizing x-risks because everyone else picks the low-hanging fruit in flourishing. But that's different from saying "it's never important to work on animal suffering"; it's saying "someone else will fix animal suffering, and so I should do the other important thing of reducing x-risk".