Effects of anti-aging research on the long-term future

post by Matthew_Barnett · 2020-02-27T22:42:40.043Z · score: 44 (20 votes) · EA · GW · 26 comments


In effective altruism, anti-aging research is usually discussed as an exclusively short-term, human-focused cause area. Most existing discussion of anti-aging focuses on the direct effects that would result if effective therapies were released: longer, healthier lifespans.

However, it seems reasonable to think that profound structural changes would occur at all levels of society if aging were cured, especially if this happened before something more transformative such as the creation of superintelligent AI. The effects would likely go beyond those mentioned in this prior piece [EA · GW], and I think anything with potential for "profound social changes" merits discussion of its own, independent of the direct effects. Here I discuss both negative and positive aspects of anti-aging research; even if the indirect effects turn out to be negative, that is itself a reason to think carefully about them.

Indirect effects

Many effective altruists have focused their attention on electoral reform [EA · GW], governance [? · GW], and economic growth [EA · GW], among other broad interventions in society. The usual justification for research of this kind is the potential for large flow-through effects [? · GW] beyond the straightforward, visible moral arguments.

I think this argument is reasonable, but if you buy it, you should also think that anti-aging has been neglected. Even within short-term, human-focused cause areas, it is striking how little attention I've seen directed to anti-aging. For instance, comparing the search term "aging" to criminal justice reform (both conceived of as short-term, human-focused cause areas) in Open Philanthropy's grants database reveals that aging research has received $7,672,300 in grants, compared to $108,555,216 for criminal justice reform.

Pablo Stafforini has proposed one explanation for this discrepancy:

Longevity research occupies an unstable position in the space of possible EA cause areas: it is very "hardcore" and "weird" on some dimensions, but not at all on others. The EAs in principle most receptive to the case for longevity research tend also to be those most willing to question the "common-sense" views that only humans, and present humans, matter morally. But, as you note, one needs to exclude animals and take a person-affecting view to derive the "obvious corollary that curing aging is our number one priority". As a consequence, such potential supporters of longevity research end up deprioritizing this cause area relative to less human-centric or more long-termist alternatives.

This explanation sounds right. However, it seems clear to me that the long-term indirect effects of anti-aging would be large if the field met success any time soon, so "weird" people can and should take it seriously too. A success in anti-aging would likely mean profound structural changes at every level of society.

But maybe there's a good reason why even longtermists don't always seem interested in anti-aging. Another explanation is that people have long timelines for anti-aging and have mostly concluded that it's not worth thinking seriously about right now. I agree that timelines are probably long, in the sense that I'm very skeptical of Aubrey de Grey's prediction of longevity escape velocity within 17 years. If you think that anti-aging timelines are long but AI timelines are short-to-medium, then I think it makes a lot of sense to focus on the latter.

But timelines for anti-aging could also quite easily be short if the field suddenly gains mainstream attention. Anti-aging proponents have historically given arguments for why they expect funding to pick up rapidly at some point: see, e.g., what happens in Nick Bostrom's fable of the dragon tyrant, or the Aubrey de Grey predictions I quoted in this Metaculus question (and keep in mind that, at the time of writing, Metaculus thinks there's a 75% chance of the question resolving positively!). Consistent with these predictions, funding has increased considerably in the last 5 years, though the prospect of curing aging still remains distant in mainstream circles.

To illustrate one completely made up scenario for short timelines, consider the following:

For the first few decades of the 21st century, anti-aging remained strictly on the periphery of intellectual thought. Most people, including biologists, did not give much thought to the idea of developing biotechnology to repair the molecular and cellular damage of natural aging, even though they understood that aging was a biological process that could in principle be reversed. Then, in the late 2020s, an unexpected success combining senolytics, stem cell therapy, and other treatments produces a lab mouse that lives many years longer than its natural lifespan. This Metaculus question resolves positively. Almost overnight, the field is funded with multi-billion-dollar grants to test the treatments on primates and eventually humans. While early results are not promising, in the mid-2030s a treatment is finally discovered that works in humans and is predicted to reliably extend human lifespan by 5-10 years.
Anti-aging then becomes a political issue. People realize the potential of the technology and don't want to die waiting, whether for access or for further development. Politicians promise to give the treatment away for free and to put government money into researching better treatments, and economists concur, since it would reduce healthcare costs. By the early 2040s, a comprehensive suite of treatments shows further promise, and mainstream academics conclude we are entering a life-expectancy revolution.

Of course, my scenario is extremely speculative, but it's meant to illustrate the pace at which things can turn around.

Perhaps you still think that anti-aging is far away and there's not much we can do about it anyway. It's worth noting that this argument applies equally to climate change: the biggest effects of climate change are more than 50 years away, and the field is neither neglected nor particularly tractable. And direct research on biotechnology to defeat aging is much more neglected than climate change.

If you don't think EAs should be talking about anti-aging, whether because of timelines or something else, you should at least be explicit in your reasoning.

Am I missing something?

26 comments

Comments sorted by top scores.

comment by willbradshaw · 2020-02-28T20:05:03.537Z · score: 31 (14 votes) · EA(p) · GW(p)

Thanks for this. As someone who worked in the ageing field and has been thinking about this for a while, it's good to see more explicitly longtermist coverage of this cause area.

I've taken this as an opportunity to lay down some of my thoughts on the matter; this turned out to be quite long. I can expand and tidy this into a full post if people are interested, though it sounds like it would overlap somewhat with what Matthew's been working on. I haven't tried too hard to make this non-redundant with other comments, so apologies if you feel you've already covered something I discuss here.

TL;DR: I'm very uncertain about lots of things and think everyone else should be too; social-science research to address these uncertainties seems very valuable for both optimists and pessimists. That said, I'm still quite optimistic that life-extension will be net-positive from a long-termist perspective, assuming AI timelines are long.

Longevity, AI timelines, and high-level uncertainty

  • Obviously, most of these higher-order social effects (which I agree dominate the first-order welfare effects from a long-termist perspective, vast though those first-order effects are) depend heavily on your AI timelines. If timelines are fairly short there's not enough time for this to matter. Of course, that applies to most other things that aren't AI as well, so possibly those of us who aren't in AI should mostly focus on things that are mainly valuable if timelines are long.
  • Given that, I think a good heuristic for the potential value of ageing research is something like, "how likely is it that humans will still be making important decisions in 50-150 years?" The higher your probability of this being the case, and the longer you think it will stay the case, the more you should potentially care about anti-ageing research.
  • Life extension will almost certainly have big effects on society and institutions. Whether the net effect of these changes will be positive, negative, or near-neutral from a longtermist perspective seems very uncertain. I personally think that most of the very negative effects people worry about aren't very likely, but that's just my intuition; I don't have much formal data and neither does anyone else. Anders Sandberg has a paper on the effect of life extension on the length of dictatorships; if anyone knows of any other work in this area I'd love to hear about it.
  • Given the magnitudes of the proposed effects, and our uncertainty about them, the value of information of more social-science work in this area seems very high: both to get a better idea of the magnitudes of some of these effects and, insofar as the worries about negative effects are justified, to come up with solutions that mitigate these problems in advance. This sort of work also seems less risky than directly trying to hasten life extension. I'd be excited about more EAs working on this, and about EAs funding this sort of work.
  • As someone with a background in the field, I think it quite likely that we will start seeing significant progress on the anti-ageing front this century, perhaps even in the next few decades, with or without buy-in from EA. On the other hand, very few people are seriously examining what effects this will have on society and culture. This further increases the value of social-science research in this area; neglected as anti-ageing research itself is, this sort of meta-research is even more so.
  • That said, while I do think caution is warranted and meta-research in this area is highly valuable, I am optimistic about the social effects of life extension, and I'll discuss this a little below.

Longevity and technological progress

  • As discussed in this thread and elsewhere, it's very unclear how life extension would affect the rate of scientific and technological progress:
    • On the one hand, it takes many decades to produce a peak-productivity researcher, and life extension would allow that researcher to continue operating at peak productivity for a much longer time. IIRC most Nobel prizes are awarded for work done mid-career, and the average age at which the prizewinning work was done has been increasing over time; this is more or less what you'd expect to see as the body of scientific knowledge grew and key insights got harder to find. I've often been very impressed when interacting with older researchers (e.g. at conferences); despite their significant age-related impairments in fluid intelligence, my impression is that their deep knowledge and intuition in their field often lets them outperform young researchers in many domains. A big part of my optimism about life extension comes from the prospect of combining this deep expertise and wisdom with youthful fluid intelligence and learning.
    • On the other hand, there's the "science advances one funeral at a time" issue discussed with MichaelStJules elsewhere in this comment thread. I ran into a similar question in my PhD (which was in immunosenescence): how much of the decline in the ability of the adaptive immune system to respond to new threats is a result of ageing, as opposed to simply learning (and developing strong priors for) the previous immune environment? In the case of the immune system ageing is clearly playing a big role, but you would expect even a non-ageing system to become less flexible over time. Similarly here: I'd expect that much of the loss of mental flexibility we see with age is due to ageing, but I don't think we know how much is ageing and how much is simply learning. In the neural case there's the additional challenge of distinguishing learning from development: insofar as a 25-year-old learns less well than a 15-year-old, anti-ageing therapy isn't going to help you.
    • Even if life extension causes a slowdown in technological progress, however, it's far from obvious to me that this would be a bad thing from a longtermist perspective. If the Technological Completion Conjecture holds (and, as discussed elsewhere on this forum, it's fairly hard to imagine it not holding), a moderate delay in our rate of scientific/technological progress won't have much effect on the overall value of the future, as long as technological progress doesn't stop. And slower progress would give us more time to fend off anthropogenic existential risks, which seems like a very good thing.
  • A more important consideration than the effect of life extension on the speed of research progress might be changes in the kind of research that gets done: what kind of questions get asked, how rigorously they are answered, and how much care is taken not to create accidental risks. I don't have much of an idea about how life extension will affect this at present, but it seems worth looking into. As one example, it's my impression that older scientists tend to be less interested in open science, which could plausibly be good or bad depending on your attitudes to various things.

Social/cultural effects

  • Apart from technological stagnation, the other common worry people raise about life extension is cultural stagnation: entrenchment of inequality, extension of authoritarian regimes, aborted social/moral progress, et cetera. I'm less sanguine about this than I am about a slowdown in research; there's no equivalent of the Technological Completion Conjecture in culture, and large shocks could conceivably lead to lasting cultural damage. That said, I'm still tentatively optimistic, at least for countries with reasonably good institutions.
  • As far as entrenchment of inequality is concerned, it depends a lot on what an anti-ageing treatment would look like: is it a drug or set of drugs that are very expensive to develop but much cheaper to manufacture, or is it a laborious customised treatment you'd need a personal physician to administer? If life extension looks more like the former, then I'm quite optimistic; even if the price was initially prohibitive, I'd expect it to become much more widely available fairly quickly. Even if governments don't intervene to make it so, insurers would be foolish not to fund an effective life extension treatment for their subscribers. On the other hand, if anti-ageing treatments looked more like the latter scenario, then we'd have to get much richer as a culture before they could be made widely available, which might lead to significant entrenchment of inequality. Nevertheless this doesn't seem like the most important aspect of this question to me: I'd be surprised if it outweighed even the first-order benefits of life extension in the longer run.
  • Regarding immortal dictators, Anders Sandberg has a paper finding that past dictatorships would only have been slightly lengthened by life extension on average. If that's true this doesn't seem like too much of a worry in most cases. On the other hand one might be mainly worried about tail risks: one unusually effective dictator could beat the odds and survive for a very long time, causing enormous damage. On the other other hand, it's not clear what effect longer lifespans (or the promise of such) would have on people's tolerance of authoritarianism.
  • Regarding moral/cultural progress, there do seem to be significant concerns here, though it's still far from clear to me that we should expect the net effect to be bad. From a longtermist perspective, it seems like certain kinds of conservatism – those born from experience of the value of good institutions, Chesterton's Fence, etc. – would actually be quite valuable, and would be expected to increase after life extension. And we have seen significant recent swings in attitudes among older people in some moral domains, such as the perception of gay people. In many respects this goes back to the uncertainties around age-related changes in neural agility discussed in the last section: given that chronologically older people would be biologically much younger, how should we expect this to affect their moral and cultural viewpoints?
  • Politically, dramatically increased lifespans should give people much stronger personal incentives to care about the long-term future, and while I wouldn't expect them to act perfectly rationally with regard to those incentives, I would expect them to make at least some adjustments that would be seen as positive from a longtermist perspective. This would hopefully also be reflected in the kinds of politicians that get elected and the kinds of institutions they support.
  • Speaking of politicians, it seems important to note that most of the important decision-makers in modern society – politicians, CEOs, senior officials – are middle-aged or older, and therefore operating with substantial cognitive impairments as a result of ageing. This is compensated for to some extent by improved judgement/wisdom acquired from experience, but still seems likely to cause problems in many domains. Given decent institutions, I'm quite optimistic about life extension improving the quality of thinking, and therefore of decisions, made by people in power, especially as I'd expect the median age (and hence experience) of these decision-makers to continue to increase.
  • Insofar as one believes ageing might have serious negative effects on culture, these will be highly dependent on the nature and quality of social institutions. Strong democratic institutions seem like they can withstand any negative side-effects of life-extension quite well with relatively minor adjustments (e.g. term limits for a wider range of official and academic positions); insofar as greater adjustments are needed it seems valuable to try to identify these in advance through social-science research in this domain. I am more worried about countries with bad institutions and authoritarian regimes, as these seem like they might be both more vulnerable to bad social effects of life extension and less likely to implement any fixes we discover. Here as elsewhere, though, I'm very very uncertain, and would value becoming less so.
comment by Matthew_Barnett · 2020-02-28T21:21:28.775Z · score: 9 (5 votes) · EA(p) · GW(p)

Thanks for the bullet points and thoughtful inquiry!

I've taken this as an opportunity to lay down some of my thoughts on the matter; this turned out to be quite long. I can expand and tidy this into a full post if people are interested, though it sounds like it would overlap somewhat with what Matthew's been working on.

I am very interested in a full post, as right now I think this area is quite neglected and important groundwork can be completed.

My guess is that most people who think about the effects of anti-aging research don't think very seriously about it because they are either trying to come up with reasons to instantly dismiss it, or come up with reasons to instantly dismiss objections to it. As a result, most of the "results" we have about what would happen in a post-aging world come from two sides of a very polarized arena. This is not healthy epistemologically.

In wild animal suffering research, most people assume that there are only two possible interventions: destroy nature, or preserve nature. This sort of binary thinking infects discussions about wild animal suffering, as it prevents people from thinking seriously about the vast array of possible interventions that could make wild animal lives better. I think the same is true for anti-aging research.

Most people I've talked to seem to think that there's only two positions you can take on anti-aging: we should throw our whole support behind medical biogerontology, or we should abandon it entirely and focus on other cause areas. This is crazy.

In reality, there are many ways we could make a post-aging society better. If we correctly forecast the impacts on global inequality, say, and we'd prefer inequality to go down in a post-aging world, then we can start talking now about ways to mitigate those effects. The idea that not talking about the issue, or dismissing anti-aging outright, is the best way to make these problems go away is a remarkably common reaction that I cannot understand.

Apart from technological stagnation, the other common worry people raise about life extension is cultural stagnation: entrenchment of inequality, extension of authoritarian regimes, aborted social/moral progress, et cetera.

I'm currently writing a post about this, because I see it as one of the most important variables affecting our evaluation of the long-term impact of anti-aging. I'll bring forward arguments both for and against what I see as "value drift" slowed by ending aging.

Overall, I see no clear arguments for either side, but I currently think that the "slower moral progress isn't that bad" position is more promising than it first appears. I'm actually really skeptical of many of the arguments that philosophers and laypeople have brought forward about the necessary function of moral progress brought about by generational death.

And as you mention, it's unclear why we should expect better value drift when we have an aging population, given that there is evidence that the aging process itself makes people more prejudiced and closed-minded in a number of ways.

comment by willbradshaw · 2020-02-28T21:40:53.368Z · score: 5 (4 votes) · EA(p) · GW(p)

Most people I've talked to seem to think that there's only two positions you can take on anti-aging: we should throw our whole support behind medical biogerontology, or we should abandon it entirely and focus on other cause areas. This is crazy.

I'm not sure it's all that crazy. EA is all about prioritisation. If something makes you believe that anti-ageing is 10% less promising as a cause area than you thought, that could lead you to cut your spending in that area by far more than 10% if it made other cause areas more promising.

I've spoken to a number of EAs who think anti-ageing research is a pretty cool cause area, but not competitive with top causes like AI and biosecurity. As long as there's something much more promising you could be working on it doesn't necessarily matter much how valuable you think anti-ageing is.

Now, some people will have sufficient comparative advantage that they should be working on ageing anyway: either directly or on the meta-level social-science questions surrounding it. But it's not clear to me exactly who those people are, at least for the direct side of things. Wetlab biologists and bioinformaticians could work on medical countermeasures for biosecurity. AI/ML people (who I expect to be very important to progress in anti-ageing) could work on AI safety (or biosecurity again). Social scientists could work on the social aspects of X-risk reduction, or on some other means of improving institutional decision-making. There's a lot competing with ageing for the attention of well-suited EAs.

I'm not saying ageing will inevitably lose out to all those alternatives; it's very neglected and (IMO) quite promising, and some people will just find it more interesting to work on than the alternatives. But I do generally back the idea of ruthless prioritisation.

comment by Matthew_Barnett · 2020-02-28T21:53:49.301Z · score: 1 (1 votes) · EA(p) · GW(p)

Right, I wasn't criticizing cause prioritization. I was criticizing the binary attitude people have toward anti-aging. Imagine if people dismissed AI safety research because, "It would be fruitless to ban AI research. We shouldn't even try." That's what it often sounds like to me when people fail to think seriously about anti-aging research. They aren't even considering the idea that there are other things we could do.

comment by Aaron Gertler (aarongertler) · 2020-03-16T08:46:44.544Z · score: 3 (2 votes) · EA(p) · GW(p)

Belatedly, I'd also be very interested in seeing this become a full post!

comment by JimmyJ · 2020-02-27T23:29:12.028Z · score: 14 (8 votes) · EA(p) · GW(p)

I am writing a post on the effects of this one. If anyone is interested, I will try to finish it.

I'm interested.

comment by Pablo_Stafforini · 2020-02-28T02:29:46.830Z · score: 7 (4 votes) · EA(p) · GW(p)

I'm also interested.

Anders Sandberg discusses the issue a bit in one of his conversations with Rob Wiblin for the 80k Podcast.

comment by adamShimi · 2020-02-29T12:07:18.406Z · score: 1 (1 votes) · EA(p) · GW(p)

Also interested. I did not think about it before, but since the old generation dying is one way scientific and intellectual changes are completely accepted, that would probably have some big impact on our intellectual landscape and culture.

comment by Emanuele_Ascani · 2020-02-28T12:01:40.356Z · score: 8 (4 votes) · EA(p) · GW(p)

Thanks for this post, strongly upvoted. The amount of attention (and funding) aging research gets within EA is unbelievably low. That's why I wrote an entire series of posts on this cause-area. A couple of comments:

1) Remember: if a charity funds aging research, the effect is to hasten it, not enable it. Aging will be brought under medical control at some point; we can only influence when. The main impact, then, comes from hastening the arrival of Longevity Escape Velocity (LEV).

2) Now look again at your bulleted list of "big" indirect effects, and remember that you can only hasten them, not enable them. To me, this consideration makes the impact we can have on them seem no more than a rounding error compared to the impact we can have via LEV (each year you bring LEV closer saves 36,500,000 lives of 1,000 QALYs each; this is a conservative estimate I made here [EA · GW]).
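The arithmetic behind that 36,500,000 figure is simple, assuming the commonly cited (and uncertain) estimate of roughly 100,000 deaths per day from age-related causes:

```python
# Rough sanity check of the LEV impact figure.
# Assumption: ~100,000 deaths per day worldwide from age-related
# causes (a commonly cited estimate; the true number is uncertain).
deaths_per_day = 100_000

# Lives saved per year that LEV arrives earlier.
lives_per_year = deaths_per_day * 365
print(lives_per_year)  # 36500000

# The estimate assumes 1,000 QALYs per life saved post-LEV.
qalys_per_life = 1_000
print(lives_per_year * qalys_per_life)  # 36500000000
```

So each year of hastening corresponds to roughly 3.65 × 10^10 QALYs under these assumptions, which is why the direct effect dominates in this framing.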

Small correction: Aubrey de Grey only estimates a 50/50 chance of LEV within 17 years. This is also conditional on funding: before private money started to pour in five years ago, his estimate had been stuck for many years at a 50/50 chance of LEV within 20-22 years.

comment by Matthew_Barnett · 2020-02-28T21:29:52.059Z · score: 4 (4 votes) · EA(p) · GW(p)
Now look again at your bulleted list of "big" indirect effects, and remember that you can only hasten them, not enable them. To me, this consideration makes the impact we can have on them seem no more than a rounding error compared to the impact we can have via LEV (each year you bring LEV closer saves 36,500,000 lives of 1,000 QALYs each; this is a conservative estimate I made here [EA · GW]).

This isn't clear to me. In their paper on strong longtermism, Hilary Greaves and William MacAskill argue that unless what we do now affects a critical lock-in period, most of what we do now will "wash out" and have little impact on the future.

If a lock-in period never comes, then there's no compelling reason to focus on the indirect effects of anti-aging, and I'd agree with you that these effects are small. However, if there is a lock-in period, then the lives saved directly by ending aging could be tiny compared to the lasting, billion-year impact of shifting to a post-aging society.

What a strong long-termist should mainly care about are these indirect effects, not merely the lives saved.

comment by MichaelStJules · 2020-02-27T23:32:03.675Z · score: 6 (4 votes) · EA(p) · GW(p)

Do Long-Lived Scientists Hold Back Their Disciplines? [EA · GW] It's not clear reducing cognitive decline can make up for this or the effects of people becoming more set in their ways over time; you might need relatively more "blank slates".

Similarly, a lot of moral progress is made because of people with wrong views dying. People living longer will slow this trend, and, in the worst case, could lead to suboptimal value lock-in from advanced AI or other decisions that affect the long-term future.

I think speciesism is one of the most individually harmful and most widespread prejudices, and we need a relatively larger percentage of the population to have grown up eating plant-based and cultured animal products to reduce speciesism, since animal product consumption seems to cause speciesism (and not just speciesism causing animal product consumption). For the long-term future, antispeciesism may translate to concern for artificial sentience, from which most of the value in the future might come. Of course, there are also probably more direct effects on concern for artificial sentience like this unrelated to speciesism.

comment by Matthew_Barnett · 2020-02-27T23:44:18.601Z · score: 2 (2 votes) · EA(p) · GW(p)
Do Long-Lived Scientists Hold Back Their Disciplines? [EA · GW] It's not clear reducing cognitive decline can make up for this or the effects of people becoming more set in their ways over time; you might need relatively more "blank slates".

In addition to what I wrote here [EA(p) · GW(p)], I'm also just skeptical that scientific progress decelerating in a few respects is actually that big of a deal. The biggest case where it would probably matter is if medical doctors themselves had incorrect theories, or engineers (such as AI developers) were using outdated ideas. In the first case, it would be ironic to avoid curing aging to prevent medical doctors from using bad theories. In the second, I would have to do more research, but I'm still leaning skeptical.

Similarly, a lot of moral progress is made because of people with wrong views dying. People living longer will slow this trend, and, in the worst case, could lead to suboptimal value lock-in from advanced AI or other decisions that affect the long-term future.

I have another post in the works right now in which I take the opposite perspective. I won't argue it fully here, but I don't believe the thesis that humanity makes consistent moral progress thanks to the natural cycle of birth and death. There are many cognitive biases that make us think we do (such as the fact that most people who say this are young and disagree with their elders; but when they are old, they will disagree with the young. Who's correct?).

comment by MichaelStJules · 2020-02-28T00:00:47.817Z · score: 2 (2 votes) · EA(p) · GW(p)
such as the fact that most people who say this are young and disagree with the elder, but when you are old you will disagree with the young. Who's correct?

I think newer generations will tend to grow up with better views than older ones (although older generations could have better views than younger ones at any moment, because they're more informed), since younger generations can inspect and question the views of their elders, alternative views, and the reasons for and against with less bias and attachment to them. Curing aging doesn't cure confirmation bias or belief perseverance/the backfire effect.

comment by Matthew_Barnett · 2020-02-28T00:10:54.938Z · score: 1 (1 votes) · EA(p) · GW(p)
I think newer generations will tend to grow up with better views than older ones (although older generations could have better views than younger ones at any moment, because they're more informed), since younger generations can inspect and question the views of their elders, alternative views, and the reasons for and against with less bias and attachment to them.

This view assumes that moral progress is a real thing, rather than just an illusion. I could personally understand this point of view if the younger generations shared the same terminal values, and merely refined instrumental values or became better at discovering logical inconsistencies or something. However, it also seems likely that moral progress can be described as moral drift.

Personally, I'm a moral anti-realist. Morals are more like preferences and desires than science. Each generation has preferences, and the next generation has slightly different preferences. When you put it that way, the idea of fundamentally better preferences doesn't quite make sense to me.

More concretely, we could imagine several ways that future generations disagree with us (and I'm assuming a suffering reduction perspective here, as I have identified you as among that crowd):

  • Future generations could see more value in deep ecology and preserving nature.
  • They could see more value in making nature simulations.
  • They could see less value in ensuring that robots have legally protected rights, since that's a staple of early 21st century fiction and future generations who grew up with robot servants might not really see it as valuable.

I'm not trying to say that these are particularly likely things, but it would seem strange to put full faith in a consistent direction of moral progress when nearly every generation before us has experienced the opposite: take any generation from prior centuries and they would hate what we value these days. The same will probably be true for you too.

comment by MichaelStJules · 2020-02-28T01:16:28.654Z · score: 1 (1 votes) · EA(p) · GW(p)

I'm a moral anti-realist, too. You can still be a moral anti-realist and believe that your own ethical views have improved in some sense, although I suppose you'll never believe that they're worse now than before, since you wouldn't hold them if that were the case. Some think of it as what you would endorse if you were less biased, had more information and reflected more. I think my views are better now because they're more informed, but it's a possibility that I could have been so biased in dealing with new information that my views are in fact worse now than before.

In the same way, I think the views of future generations can end up better than my views will ever be.

More concretely, we could imagine several ways that future generations disagree with us (and I'm assuming a suffering reduction perspective here, as I have identified you as among that crowd):

So I don't expect such views to be very common over the very long-term (unless there are more obstacles to having different views in the future), because I can't imagine there being good (non-arbitrary) reasons for those views (except the 2nd, and also the 3rd if future robots turn out to not be conscious) and there are good reasons against them. However, this could, in principle, turn out to be wrong, and an idealized version of myself might have to endorse these views or at least give them more weight.

I think where idealized versions of myself and idealized versions of future generations will disagree is due to different weights given to opposing reasons, since there is no objective way to weight them. My own weights may be "biases" determined by my earlier experiences with ethics, other life experiences, genetic predisposition, etc., and maybe some weights could be more objective than others based on how they were produced, but without this history, no weights can be more objective than others.

Finally, just in practice, I think my views are more aligned with those of younger generations and generations to come, so views more similar to my own will be relatively more prominent if we don't cure aging (soon), which is a reason against curing aging (soon), at least for me.

comment by Matthew_Barnett · 2020-02-28T01:33:30.619Z · score: 2 (2 votes) · EA(p) · GW(p)
I'm a moral anti-realist, too. You can still be a moral anti-realist and believe that your own ethical views have improved in some sense

Sure. There are a number of versions of moral anti-realism, and it makes sense for some people to think that moral progress is a real thing. My own version of ethics says that morality doesn't run that deep and that personal preferences are fairly arbitrary (though I do endorse some reflection).

In the same way, I think the views of future generations can end up better than my views will ever be.

Again, that makes sense. I personally don't really share the same optimism as you.

So I don't expect such views to be very common over the very long-term

One of the frameworks I propose in my essay that I'm writing is the perspective of value fragility. Across many independent axes, there are many more ways that your values can get worse than better. This is clear in the case of giving an artificial intelligence some utility function, but it could also (more weakly) be the case in deferring to future generations.

You point to idealized values. My hypothesis is that allowing everyone who currently lives to die and putting future generations in control is not a reliable idealization process. There are many ways that I am OK with deferring my values to someone else, but I don't really understand how generational death is one of those.

By contrast, there are a multitude of human biases that make people have more rosy views about future generations than seems (to me) warranted by the evidence:

  • Status quo bias. People dying and leaving stuff to the next generations has been the natural process for millions of years. Why should we stop it now?
  • The relative values fallacy. This goes something like, "We can see that the historical trend is for values to become more like ours over time. Each generation has gotten more like us. Therefore future generations will be even more like us, and they'll care about all the things I care about."
  • Failure to appreciate the diversity of future outcomes. Robin Hanson talks about how people use a far view when talking about the future, meaning that they ignore small details and tend to focus on one really broad abstract element they expect to show up. In practice, this means people assume that because future generations will likely share our values on one axis (in your case, care for farm animals), they will also share our values across all axes.
  • Belief in the moral arc of the universe. Moral arcs play a large role in human psychology: religions display them prominently in the idea of apocalypses where evil is defeated in the end. Philosophers have believed in moral arcs too, and since many of the supposed arcs contradict each other, it's probably not a real thing. This is related to the just-world fallacy: you imagine how awful it would be if future generations actually turned out to be so horrible, so you just sort of pretend that bad outcomes aren't possible.

I personally think that the moral circle expansion hypothesis is highly important as a counterargument, and I want more people to study this. I am very worried that people assume that moral progress will just happen automatically, almost like a spiritual force, because well... the biases I gave above.

Finally, just in practice, I think my views are more aligned with those of younger generations and generations to come

This makes sense if you are referring to the current generation, but I don't see how you can possibly be aligned with future generations that don't exist yet?

comment by MichaelStJules · 2020-02-28T03:13:48.111Z · score: 1 (1 votes) · EA(p) · GW(p)
One of the frameworks I propose in my essay that I'm writing is the perspective of value fragility. Across any individual axis, there are many more ways that your values can get worse than better.

There are more ways, yes, but I think they're individually much less likely than the ways in which they can get better, assuming they're somewhat guided by reflection and reason. This might still hold once you aggregate all the ways they can get worse and compare them against all the ways they can get better, but I'm not sure.

You point to idealized values. My hypothesis is that allowing everyone who currently lives to die and putting future generations in control is not a reliable idealization process.
This makes sense if you are referring to the current generation, but I don't see how you can possibly be aligned with future generations that don't exist yet?

I expect future generations, compared to people alive today, to be less religious, less speciesist, less prejudiced generally, more impartial, more consequentialist and more welfarist, because of my take on the relative persuasiveness of these views (and the removal of psychological obstacles to having these views), which I think partially explains the trends. No guarantee, of course, and there might be alternatives to these views that don't exist today but are even more persuasive, but maybe I should be persuaded by them, too.

I don't expect them to be more suffering-focused (beyond what's implied by the expectations above), though. Actually, if current EA views become very influential on future views, I might expect those in the future to be less suffering-focused and to cause s-risks, which is concerning to me. I think the asymmetry is relatively more common among people today than it is among EAs, specifically.

comment by Matthew_Barnett · 2020-02-28T03:39:44.074Z · score: 2 (2 votes) · EA(p) · GW(p)
There are more ways, yes, but I think they're individually much less likely than the ways in which they can get better, assuming they're somewhat guided by reflection and reason.

Again, I seem to have different views about the extent to which moral views are driven by reflection and reason. For example, is the recent trend towards Trumpian populism driven by reflection and reason? (If you think this is not a new trend, then I ask you to point to previous politicians who shared the values of the current administration.)

I expect future generations, compared to people alive today, to be less religious

I agree with that.

less speciesist

This is also likely. However, I'm very worried about the idea that caring about farm animals doesn't imply an anti-speciesist mindset. Most vegans aren't concerned about wild animal suffering, and the primary justification that most vegans give for their veganism is from an exploitation framework (or environmentalist one) rather than a harm-reduction framework. This might not robustly transfer to future sentience.

less prejudiced generally, more impartial

This isn't clear to me. From this BBC article, "Psychologists used to believe that greater prejudice among older adults was due to the fact that older people grew up in less egalitarian times. In contrast to this view, we have gathered evidence that normal changes [i.e. aging] to the brain in late adulthood can lead to greater prejudice among older adults." Furthermore, "prejudice" is pretty vague, and I think there are many ways that young people are prejudiced without even realizing it (though of course this applies to old people too).

more consequentialist, more welfarist

I don't really see why we should expect this personally. Could you point to some trends that show that humans have become more consequentialist over time? I tend to think that Hansonian moral drives are really hard to overcome.

because of my take on the relative persuasiveness of these views (and the removal of psychological obstacles to having these views)

The second reason is a good one (I agree that when people stop eating meat they'll care more about animals). The relative persuasiveness thing seems weak to me because I have a ton of moral views that I think are persuasive and yet don't seem to be adopted by the general population. Why would we expect this to change?

I don't expect them to be more suffering-focused (beyond what's implied by the expectations above), though. Actually, if current EA views become very influential on future views, I might expect those in the future to be less suffering-focused and to cause s-risks, which is concerning to me.

It sounds like you are not as optimistic as I thought you were. Out of all the arguments you gave, I think the argument from moral circle expansion is the most convincing. I'm less sold on the idea that moral progress is driven by reason and reflection.

I also have a strong prior against positive moral progress relative to any individual parochial moral view, given what looks like strong historical evidence against such progress (the communists of the early 20th century probably thought that everyone would adopt their perspective by now; the same goes for Hitler, alcohol prohibitionists, and many other movements).

Overall, I think there are no easy answers here and I could easily be wrong.

comment by MichaelStJules · 2020-02-28T05:58:08.972Z · score: 1 (1 votes) · EA(p) · GW(p)
Again, I seem to have different views about the extent to which moral views are driven by reflection and reason. For example, is the recent trend towards Trumpian populism driven by reflection and reason? (If you think this is not a new trend, then I ask you to point to previous politicians who shared the values of the current administration.)
(...)
The relative persuasiveness thing seems weak to me because I have a ton of moral views that I think are persuasive and yet don't seem to be adopted by the general population. Why would we expect this to change?

I don't really have a firm idea of the extent to which reflection and reason drive the formation of, or changes in, beliefs; I just think they have some effect. They might have disproportionate effects in a motivated minority of people who become very influential, but not necessarily primarily through advocacy. I think that's a good description of EA, actually. In particular, if EAs increase the development and adoption of plant-based and cultured animal products, people will become less speciesist because we're removing psychological barriers for them, and since EAs are driven by reflection and reason, these changes are in part indirectly driven by reflection and reason. Public intellectuals and experts in government can have influence, too.

Could the relatively pro-trade and pro-migration views of economists, based in part on reflection and reason, have led to more trade and migration, and caused us to be less xenophobic?

Minimally, I'll claim that, all else equal, if the reasons for one position are better than the reasons for another (and especially if there are good reasons for the first and none of the other), then the first position should gain more support in expectation.

I don't think short-term trends can usually be explained by reflection and reason, and I don't think Trumpian populism is caused by them, but I think the general trend throughout history is away from such tribalistic views, and I think the fact that there are basically no good reasons for tribalism might play a part in that, although not necessarily a big one.

This isn't clear to me. From this BBC article, "Psychologists used to believe that greater prejudice among older adults was due to the fact that older people grew up in less egalitarian times. In contrast to this view, we have gathered evidence that normal changes [i.e. aging] to the brain in late adulthood can lead to greater prejudice among older adults."

That's a good point. However, is this only in social interactions (which, of course, can reinforce prejudice in those who would act on it in other ways)? What about when they vote?

We're talking about at most maybe 20 years of lost prejudice inhibition on average, so at worst about a third of adults at any given moment. Against that, there would be a faster-growing proportion of people who grew up without any given prejudice they'd need to inhibit in the first place, versus many extra people biased towards views they formed possibly hundreds of years ago. The average age in both cases should trend towards half the life expectancy, assuming replacement birth rates.

I don't really see why we should expect this personally. Could you point to some trends that show that humans have become more consequentialist over time? I tend to think that Hansonian moral drives are really hard to overcome.

This judgement was more based on the arguments, not trends. That being said, I think social liberalism and social democracy are more welfarist, flexible, pragmatic and outcome-focused than most political views, and I think there's been a long-term trend towards them. Those further left are more concerned with exploitation and positive rights despite the consequences, and those further right are more concerned with responsibility, merit, property rights and rights to discriminate. Some of this might be driven by deference to experts and the views of economists, who seem more outcome-focused. This isn't something I've thought a lot about, though.

Maybe communists were more consequentialist (I don't know), but if they had been right empirically about the consequences, communism might be the norm today instead.

However, I'm very worried about the idea that caring about farm animals doesn't imply an anti-speciesist mindset. Most vegans aren't concerned about wild animal suffering, and the primary justification that most vegans give for their veganism is from an exploitation framework rather than a harm-reduction framework. This might not robustly transfer to future sentience.

I actually haven't gotten a strong impression that most ethical vegans are primarily concerned with exploitation rather than cruelty specifically, but they are probably primarily concerned with harms humans cause, rather than just harms generally that could be prevented. It doesn't imply antispeciesism or a transfer to future sentience, but I think it helps more than it hurts in expectation. In particular, I think it's very unlikely we'll care much about wild animals or future sentience that's no more intelligent than nonhuman animals if we wouldn't care more about farmed animals, so at least one psychological barrier is removed.

comment by DavidWeber · 2020-02-27T23:35:26.213Z · score: 2 (2 votes) · EA(p) · GW(p)

Eliminating aging also has the potential for strong negative long-term effects. Both of the ones I'm worried about are actually extensions of your point about eliminating long-term value drift. Without aging, autocrats could stay in power indefinitely, as it is often the uncertainty surrounding their deaths that leads to the failure of their regimes. Given that billions worldwide currently live under autocratic or authoritarian governments, this is a very real concern.

Another potentially major downside is the stagnation of research. If Kuhn is to be believed, a large part of scientific progress comes not from individuals changing their minds, but from outdated paradigms being displaced by more effective ones. This one is less certain, as it's possible that knowing they have indefinite futures may lead to selection for people who are willing to change their minds. Both of these are cases where progress probably *requires* value drift.

comment by Matthew_Barnett · 2020-02-27T23:39:31.403Z · score: 5 (4 votes) · EA(p) · GW(p)
Eliminating aging also has the potential for strong negative long-term effects.

Agreed. One way you can frame what I'm saying is that I'm putting forward a neutral thesis: anti-aging could have big effects. I'm not necessarily saying they would be good (though personally I think they would be).

Even if you didn't want aging to be cured, it still seems worth thinking about it because if it were inevitable, then preparing for a future where aging is cured is better than not preparing.

Another potentially major downside is the stagnation of research. If Kuhn is to be believed, a large part of scientific progress comes not from individuals changing their minds, but from outdated paradigms being displaced by more effective ones.

I think this effect is real, and my understanding is that empirical research supports it. But the theories I have read also assume a normal aging process. It is quite probable that bad ideas stay alive mostly because their proponents are too old to change their minds. I know for a fact that researchers in their early 20s change their minds quite a lot, so a cure for aging would also mean more of that.

comment by MichaelStJules · 2020-02-27T23:52:43.905Z · score: 2 (2 votes) · EA(p) · GW(p)
I know for a fact that researchers in their early 20s change their mind quite a lot, and so a cure to aging would also mean more of that.

As I wrote here [EA(p) · GW(p)], I think this could be due (in part) to biases accumulated by being in a field (and being alive) longer, not necessarily (just) brain aging. I'd guess that more neuroplasticity or neurogenesis is better than less, but I don't think it's the whole problem. You'd need people to lose strong connections, to "forget" more often.

Also, people's brains up until their mid 20s are still developing and pruning connections.

comment by Matthew_Barnett · 2020-02-27T23:57:25.090Z · score: 1 (1 votes) · EA(p) · GW(p)
I think this could be due (in part) to biases accumulated by being in a field (and being alive) longer, not necessarily (just) brain aging.

I'm not convinced there is actually that much of a difference between the long-term crystallization of habits and natural aging, though I'm not qualified to say this with any sort of confidence. It's also worth being cautious about confidently predicting the effects of something like this in either direction.

comment by Mati_Roy · 2020-05-08T10:15:11.895Z · score: 1 (1 votes) · EA(p) · GW(p)

related, so posting just as a reference: https://axiomaticdoubts.wordpress.com/2020/03/22/how-would-anti-aging-medicine-change-the-world/

comment by MichaelStJules · 2020-02-27T23:16:58.430Z · score: 1 (1 votes) · EA(p) · GW(p)

Will most of the (dis)value in the future come from nonbiological consciousness?

comment by Matthew_Barnett · 2020-02-27T23:23:00.601Z · score: 3 (3 votes) · EA(p) · GW(p)

If I had to predict, I would say that yes, ~70% chance that most suffering (or other disvalue you might think about) will exist in artificial systems rather than natural ones. It's not actually clear whether this particular fact is relevant. Like I said, the effects of curing aging extend beyond the direct effects on biological life. Studying anti-aging can be just like studying electoral reform, or climate change in this sense.