"Long-Termism" vs. "Existential Risk"

post by Scott Alexander · 2022-04-06T21:41:45.402Z · EA · GW · 72 comments

Contents

  In The Very Short Run, We're All Dead
  Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?
  "Long-termism" vs. "existential risk"

The phrase "long-termism" is occupying an increasing share of EA community "branding". For example, the Long-Term Future Fund, the FTX Future Fund ("we support ambitious projects to improve humanity's long-term prospects"), and the impending launch of What We Owe The Future ("making the case for long-termism").

Will MacAskill describes long-termism as:

I think this is an interesting philosophy, but I worry that in practical and branding situations it rarely adds value, and might subtract it.

In The Very Short Run, We're All Dead

AI alignment is a central example of a supposedly long-termist cause.

But Ajeya Cotra's Biological Anchors report [LW · GW] estimates a 10% chance of transformative AI by 2031, and a 50% chance by 2052. Others (eg Eliezer Yudkowsky [LW · GW]) think it might happen even sooner.

Let me rephrase this in a deliberately inflammatory way: if you're under ~50, unaligned AI might kill you and everyone you know. Not your great-great-(...)-great-grandchildren in the year 30,000 AD. Not even your children. You and everyone you know. As a pitch to get people to care about something, this is a pretty strong one. 

But right now, a lot of EA discussion about this goes through an argument that starts with "did you know you might want to assign your descendants in the year 30,000 AD exactly equal moral value to yourself? Did you know that maybe you should care about their problems exactly as much as you care about global warming and other problems happening today?" 

Regardless of whether these statements are true, or whether you could eventually convince someone of them, they're not the most efficient way to make people concerned about something which will also, in the short term, kill them and everyone they know.

The same argument applies to other long-termist priorities, like biosecurity and nuclear weapons. Well-known ideas like "the hinge of history", "the most important century" and "the precipice" all point to the idea that existential risk is concentrated in the relatively near future - probably before 2100. 

The average biosecurity project being funded by Long-Term Future Fund or FTX Future Fund is aimed at preventing pandemics in the next 10 or 30 years. The average nuclear containment project is aimed at preventing nuclear wars in the next 10 to 30 years. One reason all of these projects are good is that they will prevent humanity from being wiped out, leading to a flourishing long-term future. But another reason they're good is that if there's a pandemic or nuclear war 10 or 30 years from now, it might kill you and everyone you know.

Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?

I think yes, but pretty rarely, in ways that rarely affect real practice.

Long-termism might be more willing to fund Progress Studies type projects that increase the rate of GDP growth by .01% per year in a way that compounds over many centuries.  "Value change" type work - gradually shifting civilizational values to those more in line with human flourishing - might fall into this category too.
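The compounding claim is easy to sanity-check numerically. A minimal sketch, with the post's illustrative 0.01-percentage-point growth boost and arbitrary horizons:

```python
# Illustrative only: how a +0.01%/year growth boost compounds over centuries.
boost = 0.0001  # +0.01 percentage points of annual GDP growth

for years in (100, 1000, 10000):
    factor = (1 + boost) ** years
    print(f"{years:>5} years: economy {factor:.3f}x larger than baseline")

# Roughly: ~1% larger after a century, ~10% after a millennium, and ~2.7x
# after ten millennia -- negligible soon, enormous on long-termist horizons.
```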

In practice I rarely see long-termists working on these except when they have shorter-term effects. I think there's a sense that in the next 100 years, we'll either get a negative technological singularity which will end civilization, or a positive technological singularity which will solve all of our problems -  or at least profoundly change the way we think about things like "GDP growth". Most long-termists I see are trying to shape the progress and values landscape up until that singularity, in the hopes of affecting which way the singularity goes - which puts them on the same page as thoughtful short-termists planning for the next 100 years.

Long-termists might also rate x-risks differently from suffering alleviation. For example, suppose you could choose between saving 1 billion people from poverty (with certainty), or preventing a nuclear war that killed all 10 billion people (with probability 1%), and we assume that poverty is 10% as bad as death. A short-termist might be indifferent between these two causes, but a long-termist would consider the war prevention much more important, since they're thinking of all the future generations who would never be born if humanity was wiped out.
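The indifference claim follows from a one-line expected-value calculation; the figures below are the thought experiment's stipulated numbers, not real estimates:

```python
# Short-termist expected-value comparison, using the post's stipulated numbers.
poverty_people = 1e9     # saved from poverty with certainty
poverty_badness = 0.10   # poverty stipulated to be 10% as bad as death
war_deaths = 10e9        # a nuclear war kills all 10 billion people
war_probability = 0.01   # 1% chance the prevented war would have happened

poverty_value = poverty_people * poverty_badness  # 100M "death-equivalents"
war_value = war_deaths * war_probability          # 100M expected deaths averted

# Equal in expectation, so the short-termist is indifferent; the long-termist
# adds all unborn future generations to war_value and is not.
assert poverty_value == war_value
```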

In practice, I think there's almost never an option to save 1 billion people from poverty with certainty. When I said that there was, that was a hack I had to put in there to make the math work out so that the short-termist would come to a different conclusion from the long-termist. A 1/1 million chance of preventing apocalypse is worth 7,000 lives, which takes $30 million with GiveWell style charities. But I don't think long-termists are actually asking for $30 million to make the apocalypse 0.0001% less likely - both because we can't reliably calculate numbers that low, and because if you had $30 million you could probably do much better than 0.0001%. So I'm skeptical that problems like this are likely to come up in real life.
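The $30 million figure can be reconstructed as follows; the world population and cost-per-life numbers are rough assumptions consistent with the post, not official GiveWell estimates:

```python
# Reconstructing "a 1/1 million chance of preventing apocalypse is worth
# 7,000 lives, which takes $30 million with GiveWell style charities".
p_prevent = 1e-6          # one-in-a-million chance of preventing apocalypse
population = 7e9          # assumed world population (everyone dies otherwise)

expected_lives = p_prevent * population
print(expected_lives)     # 7000.0

cost_per_life = 4_300     # assumed GiveWell-style cost per life saved, USD
print(expected_lives * cost_per_life)  # ~$30 million
```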

When people allocate money to causes other than existential risk, I think it's more often as a sort of moral parliament maneuver, rather than because they calculated it out and the other cause is better in a way that would change if we considered the long-term future.

"Long-termism" vs. "existential risk"

Philosophers shouldn't be constrained by PR considerations. If they're actually long-termist, and that's what's motivating them, they should say so.

But when I'm talking to non-philosophers, I prefer an "existential risk" framework to a "long-termism" framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions. It forestalls attacks about how it's non-empathetic or politically incorrect not to prioritize various classes of people who are suffering now. And it focuses objections on the areas that are most important to clear up (is there really a high chance we're all going to die soon?) and not on tangential premises (are we sure that we know how our actions will affect the year 30,000 AD?)

I'm interested in hearing whether other people have different reasons for preferring the "long-termism" framework that I'm missing.

72 comments

Comments sorted by top scores.

comment by Geoffrey Miller (geoffreymiller) · 2022-04-06T22:48:02.110Z · EA(p) · GW(p)

I agree with Scott Alexander that when talking with most non-EA people, an X risk framework is more attention-grabbing, emotionally vivid, and urgency-inducing, partly due to negativity bias, and partly due to the familiarity of major anthropogenic X risks as portrayed in popular science fiction movies & TV series.

However, for people who already understand the huge importance of minimizing X risk, there's a risk of burnout, pessimism, fatalism, and paralysis, which can be alleviated by longtermism and more positive visions of desirable futures. This is especially important when current events seem all doom'n'gloom, when we might ask ourselves 'what about humanity is really worth saving?' or 'why should we really care about the long-term future, if it'll just be a bunch of self-replicating galaxy-colonizing AI drones that are no more similar to us than we are to late Permian proto-mammal cynodonts?'

In other words, we in EA need long-termism to stay cheerful, hopeful, and inspired about why we're so keen to minimize X risks and global catastrophic risks.

But we also need longtermism to broaden our appeal to the full range of personality types, political views, and religious views out there in the public.  My hunch as a psych professor is that there are lots of people who might respond better to longtermist positive visions than to X risk alarmism. It's an empirical question how common that is, but I think it's worth investigating.

Also, a significant % of humanity is already tacitly longtermist in the sense of believing in an infinite religious afterlife, and trying to act accordingly. Every Christian who takes their theology seriously & literally (i.e. believes in heaven and hell), and who prioritizes Christian righteousness over the 'temptations of this transient life', is doing longtermist thinking about the fate of their soul, and the souls of their loved ones.  They take Pascal's wager seriously; they live it every day. To such people, X risks aren't necessarily that frightening personally, because they already believe that 99.9999+% of sentient experience will come in the afterlife. Reaching the afterlife sooner rather than later might not matter much, given their way of thinking.

However, even the most fundamentalist Christians might be responsive to arguments that the total number of people we could create in the future -- who would all have save-able souls -- could vastly exceed the current number of Christians. So, more souls for heaven; the more the merrier. Anybody who takes a longtermist view of their individual soul might find it easier to take a longtermist view of the collective human future.

I understand that most EAs are atheists or agnostics, and will find such arguments bizarre. But if we don't take the views of religious people seriously, as part of the cultural landscape we're living in, we're not going to succeed in our public outreach, and we're going to alienate a lot of potential donors, politicians, and media influencers.

There's a particular danger that overemphasizing the more exotic transhumanist visions of the future will alienate religious or political traditionalists. For many Christians, Muslims, and conservatives, a post-human, post-singularity, AI-dominated future would not sound worth saving. Without any humane connection to their human social world as it is, they might prefer a swift nuclear Armageddon followed by heavenly bliss to a godless, soulless machine world stretching ahead for billions of years.

EAs tend to score very highly on Openness to Experience. We love science fiction. We like to think about post-human futures being potentially much better than human futures. But if that becomes our dominant narrative, we will alienate the vast majority of currently living humans, who score much lower on Openness.

If we push the longtermist narrative to the general public, we'd better make the long-term future sound familiar enough to be worth fighting for.

Replies from: timunderwood, quinn, vascoamaralgrilo
comment by timunderwood · 2022-04-08T19:36:12.585Z · EA(p) · GW(p)

Based on my memory of how people thought while I was growing up in the church, I don't think increasing the number of save-able souls is something that makes sense for a Christian -- or within any sort of long termist utilitarian framework at all.

Ultimately god is in control of everything. Your actions are fundamentally about your own soul, and your own eternal future, and not about other people. Their fate is between them and God, and he who knows when each sparrow falls will not forget them.

Replies from: Justin Helps
comment by Justin Helps · 2022-04-14T15:58:09.065Z · EA(p) · GW(p)

I remember my father explicitly saying that he regretted not having more children because he's since learned that God wants us to create more souls for him. Didn't make sense to me even as a Christian at the time, but the idea is out there.

comment by quinn · 2022-06-20T12:16:08.308Z · EA(p) · GW(p)

In other words, we in EA need long-termism to stay cheerful, hopeful, and inspired about why we're so keen to minimize X risks and global catastrophic risks.

Eliezer's underrated fun theory sequence tackles this [? · GW]. 

comment by Vasco Grilo (vascoamaralgrilo) · 2022-04-30T20:27:41.992Z · EA(p) · GW(p)

"However, even the most fundamentalist Christians might be responsive to arguments that the total number of people we could create in the future -- who would all have save-able souls -- could vastly exceed the current number of Christians".

I had thought about the above before, thanks for pointing it out!

comment by HaydnBelfield · 2022-04-06T22:48:12.031Z · EA(p) · GW(p)

See also Neel Nanda's recent Simplify EA Pitches to "Holy Shit, X-Risk" [EA · GW].

Replies from: RyanCarey, Scott Alexander
comment by RyanCarey · 2022-04-07T18:08:16.584Z · EA(p) · GW(p)

No offense to Neel's writing, but it's instructive that Scott manages to write the same thesis so much better. It:

  • is 1/3 the length
    • Caveats are naturally interspersed, e.g. "Philosophers shouldn't be constrained by PR."
    • No extraneous content about Norman Borlaug, leverage, etc
  • has a less bossy title
  • distills the core question using crisp phrasing, e.g. "Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?" (my emphasis)

...and a ton of other things. Long live the short EA Forum post!

Replies from: MichaelDickens
comment by MichaelDickens · 2022-04-08T19:57:37.008Z · EA(p) · GW(p)

FWIW I would not be offended if someone said Scott's writing is better than mine. Scott's writing is better than almost everyone's.

Your comment inspired me to work harder to make my writings more Scott-like.

comment by Scott Alexander · 2022-04-07T01:09:41.258Z · EA(p) · GW(p)

Thanks, I had read that but failed to internalize how much it was saying this same thing. Sorry to Neel for accidentally plagiarizing him.

Replies from: Neel Nanda, HaydnBelfield
comment by Neel Nanda · 2022-04-07T11:42:46.309Z · EA(p) · GW(p)

No worries, I'm excited to see more people saying this! (Though I did have some eerie deja vu when reading your post initially...)

I'd be curious if you have any easy-to-articulate feedback re why my post didn't feel like it was saying the same thing, or how to edit it to be better? 

(EDIT: I guess the easiest object-level fix is to edit in a link at the top to yours, and say that I consider you to be making substantially the same point...)

comment by HaydnBelfield · 2022-04-12T21:57:50.650Z · EA(p) · GW(p)

I didn't mean to imply that you were plagiarising Neel. I more wanted to point out that many reasonable people (see also Carl Shulman's podcast) are pointing out that the existential risk argument can go through without the longtermism argument.

I posted the graphic below on twitter back in Nov. These three communities & sets of ideas overlap a lot and I think reinforce one another, but they are intellectually & practically separable, and there are people in each section doing great work. Just because someone is in one section doesn't mean they have to be, or are, committed to others.

comment by james.lucassen · 2022-04-06T23:40:01.506Z · EA(p) · GW(p)

Agree that X-risk is a better initial framing than longtermism - it matches what the community is actually doing a lot better. For this reason, I'm totally on board with "x-risk" replacing "longtermism" in outreach and intro materials. However, I don't think the idea of longtermism is totally obsolete, for a few reasons:

  • Longtermism produces a strategic focus on "the last person" that this "near-term x-risk" view doesn't. This isn't super relevant for AI, but it makes more sense in the context of biosecurity. Pandemics with the potential to wipe out everyone are way worse than pandemics which merely kill 99% of people, and the ways we prepare for them seem likely to differ. On the near-term view, bunkers and civilizational recovery plans don't make much sense.
  • S-risks seem like they could very well be a big part of the overall strategy picture (even when not given normative priority and just considered as part of the total picture), and they aren't captured by the short-term x-risk view.
  • The numbers you give for why x-risk might be the most important cause area even if we ignore the long-term future, $30 million for a 0.0001% reduction in x-risk, don't seem totally implausible. The world is big, and if you're particularly pessimistic about changing it, then this might not be enough to budge you. Throw in an extra 10^30, though, and you've got a really strong argument, if you're the kind of person that takes numbers seriously.

Submitting this now because it seems important, and I want to give this comment a chance to bubble to the top. Will fill in more reasons later if any major ones come up as I continue thinking.

Replies from: RobBensinger
comment by RobBensinger · 2022-04-07T03:26:10.005Z · EA(p) · GW(p)

S-risks seem like they could very well be a big part of the overall strategy picture (even when not given normative priority and just considered as part of the total picture), and they aren't captured by the short-term x-risk view.

Why not?

Replies from: Pablo_Stafforini, Hank_B, james.lucassen
comment by Pablo (Pablo_Stafforini) · 2022-04-07T21:13:50.879Z · EA(p) · GW(p)

An existential risk is a risk that threatens the destruction of humanity's long-term potential. But s-risks are worrisome not only because of the potential they threaten to destroy, but also because of what they threaten to replace this potential with (astronomical amounts of suffering).

Replies from: MichaelStJules
comment by MichaelStJules · 2022-04-08T15:20:21.351Z · EA(p) · GW(p)

I think the "short-term x-risk view" is meant to refer to everyone dying, and ignoring the lost long-term potential. Maybe s-risks could be similarly harmful in the short term, too.

comment by Hank_B · 2022-04-15T01:35:09.359Z · EA(p) · GW(p)

Spreading wild animals to space isn't bad for any currently existing humans or animals, so it isn't counted under thoughtful short-termism or is discounted heavily. Same with a variety of S-risks (e.g. eventual stable totalitarian regime 100+ years out, slow space colonization, slow build up of Matrioshka brains with suffering simulations/sub-routines, etc.)

comment by james.lucassen · 2022-04-07T06:06:38.442Z · EA(p) · GW(p)

Oop, thanks for the correction. To be honest I'm not sure what exactly I was thinking originally, but maybe this is true for non-AI S-risks that are slow, like spreading wild animals to space? I think this is mostly just false tho  >:/

comment by Devin Kalish · 2022-04-06T23:36:38.124Z · EA(p) · GW(p)

I'm not so sure about this. Speaking as someone who talks with new EAs semi-frequently, it seems much easier to get people to take the basic ideas behind longtermism seriously than, say, the idea that there is a significant risk that they will personally die from unaligned AI. I do think that diving deeper into each issue sometimes flips reactions - longtermism takes you to weird places on sufficient reflection, AI risk looks terrifying just from compiling expert opinions - but favoring the approach that shifts the burden from the philosophical controversy to the empirical controversy doesn't seem like an obviously winning move. The move that seems both best for hedging this, and just the most honest, is being upfront both about your views on the philosophical and the empirical questions, and assume that convincing someone of even a somewhat more moderate version of either or both views will make them take the issues much more seriously.

Replies from: timunderwood
comment by timunderwood · 2022-04-08T19:37:21.774Z · EA(p) · GW(p)

Hmmmm, that is weird in a way, but also as someone who has in the last year been talking with new EAs semi-frequently, my intuition is that they often will not think about things the way I expect them to.

Replies from: Devin Kalish
comment by Devin Kalish · 2022-04-09T13:19:21.448Z · EA(p) · GW(p)

Really? I didn't find their reactions very weird, how would you expect them to react?

comment by RobertHarling · 2022-04-07T19:43:31.012Z · EA(p) · GW(p)

Thanks for this post! I think I have a different intuition that there are important practical ways where longtermism and x-risk views can come apart.  I’m not really thinking about this from an outreach perspective, more from an internal prioritisation view. (Some of these points have been made in other comments also, and the cases I present are probably not as thoroughly argued as they could be).
 

  • Extinction versus Global Catastrophic Risks (GCRs)
    • It seems likely that a short-termist with the high estimates of risks that Scott describes would focus on GCRs not extinction risks, and these might come apart.
    • To the extent that a short-termist framing views going from 80% to 81% population loss as equally as bad as 99% to 100%, it seems plausible to care less about e.g. refuges to evade pandemics. Other approaches like ALLFED and civilisational resilience work might look less effective on the short-termist framing also. Even if you also place some intrinsic weight on preventing extinction, this might not be enough to make these approaches look cost-effective.
  • Sensitivity to views of risk
    • Some people may be more sceptical of x-risk estimates this century, but might still reach the same prioritisation under the long-termist framing as the cost is so much higher. 
    • This maybe depends on how hard you think the “x-risk is really high” pill is to swallow compared to the “future lives matter equally” pill.
  • Suspicious Convergence
    • Going from not valuing future generations to valuing future generations seems initially like a huge change in values where you’re adding this enormous group into your moral circle. It seems suspicious that this shouldn’t change our priorities.
    • It’s maybe not quite as bad as it sounds as it seems reasonable to expect some convergence between what makes lives today good and what makes future lives good. However especially if you’re optimising for maximum impact, you would expect these to come apart.
  • The world could be plausibly net negative
    • To the extent you think farmed animals suffer, and that wild animals live net negative lives, a large-scale extinction event might not reduce welfare that much in the short term. This maybe seems less true for a pandemic that would kill all humans (although it would presumably substantially reduce the number of animals in factory farms). But, for example, a failed alignment situation where everything becomes paperclips doesn’t seem as bad if all the animals were suffering anyway.
  • The future might be net negative
    • If you think that, given no deadly pandemic, the future might be net negative (E.g. because of s-risks, or potentially "meh" futures, or you’re very sceptical about AI alignment going well) then preventing pandemics doesn’t actually look that good under a longtermist view.
  • General improvements for future risks/Patient Philanthropy
    • As Scott mentions, other possible long-termist approaches such as value spreading, improving institutions, or patient philanthropic investment don’t come up under the x-risk view. I think you should be more inclined towards these approaches if you expect new risks to appear in the future, provided we make it past current risks.

It seems that a possible objection to all these points is that AI risk is really high and we should just focus on AI alignment (as it’s more than just an extinction risk like bio).


 

Replies from: Denkenberger
comment by Denkenberger · 2022-04-08T23:48:17.417Z · EA(p) · GW(p)

To the extent that a short-termist framing views going from 80% to 81% population loss as equally as bad as 99% to 100%, it seems plausible to care less about e.g. refuges to evade pandemics. Other approaches like ALLFED and civilisational resilience work might look less effective on the short-termist framing also. Even if you also place some intrinsic weight on preventing extinction, this might not be enough to make these approaches look cost-effective.

ALLFED-type work is likely highly cost effective from the short-term perspective; see global and country (US) specific analyses.

comment by Jan_Kulveit · 2022-04-07T14:04:09.523Z · EA(p) · GW(p)

I don't have a strong preference. There are some aspects in which longtermism can be the better framing, at least sometimes.

I. In a "longtermist" framework, x-risk reduction is the most important thing to work on across many orders of magnitude of uncertainty about the probability of x-risk in the next e.g. 30 years (due to the weight of the long-term future). Even if AI-related x-risk is only 10^-3 in the next 30 years, it is still an extremely important problem, or the most important one. In a "short-termist" view with, say, a discount rate of 5%, it is not nearly so clear.
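This contrast can be sketched with made-up numbers. A 5% annual discount bounds the whole future at roughly 1/0.05 = 20 "present-equivalent" years of value, whereas the undiscounted longtermist multiplies the same risk by an astronomically large future (the 10^30 figure below is an assumed ballpark, not from the comment):

```python
# Illustrative contrast between discounted and undiscounted x-risk stakes.
p_xrisk = 1e-3   # low-end probability of AI x-risk in the next 30 years

# Short-termist with a 5% annual discount: total future value is the
# geometric series sum of (1 - d)^t, i.e. 1/d present-equivalent years.
d = 0.05
discounted_future = 1 / d               # ~20 years' worth of present value
print(p_xrisk * discounted_future)      # ~0.02 -- not obviously dominant

# Longtermist, no discount: multiply by an assumed 1e30 future lives.
future_lives = 1e30
print(p_xrisk * future_lives)           # dominates at any plausible p
```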

The short-termist urgency of x-risk ("you and everyone you know will die") depends on the x-risk probability being actually high, on the order of 1 percent or tens of percent. Arguments that this probability is actually so high are usually brittle pieces of mathematical philosophy (eg many specific individual claims by Eliezer Yudkowsky) or brittle uses of proxies with a lot of variables obviously missing from the reasoning (eg the report by Ajeya Cotra). Actual disagreements about probabilities are often in fact grounded in black-box intuitions about esoteric mathematical concepts. It is relatively easy to come up with brittle pieces of philosophy arguing in the opposite direction: why this number is low. In fact my actual, action-guiding estimate is not based on an argument conveyable in a few paragraphs, but more on something like "the feeling you get after working on this for years". What I can offer others is something like "an argument from testimony", and I don't think it's that great.

II. Longtermism is a positive word, pointing toward the fact that the future could be large and nice. X-risk is the opposite.

Similar: AI safety vs AI alignment. My guess is the "AI safety" framing is by default more controversial and gets more pushback (eg a "safety department" is usually not the most loved part of an organisation, with connotations like "safety people want to prevent us from doing what we want").


 

comment by MichaelStJules · 2022-04-07T04:31:04.842Z · EA(p) · GW(p)

It's not clear the loss of human life dominates the welfare effects in the short term, depending on how much moral weight you assign to nonhuman animals and how their lives are affected by continued human presence and activity. It seems like human extinction would be good for farmed animals (dominated by chickens, fish and invertebrates), and would have unclear sign for wild animals (although my own best guess is that it would be bad for wild animals).

Of course, if you take a view that's totally neutral about moral patients who don't yet exist, then few of the nonhuman animals that would be affected are alive today, and what happens to the rest wouldn't matter in itself.

comment by Jack Malde (jackmalde) · 2022-04-08T05:45:45.191Z · EA(p) · GW(p)

I think there is a key difference between longtermists and thoughtful shorttermists which is surprisingly under-discussed.

Longtermists don’t just want to reduce x-risk, they want to permanently reduce x-risk to a low level, i.e. achieve existential security. Without existential security the longtermist argument just doesn’t go through. A thoughtful shorttermist who is concerned about x-risk probably won’t care about this existential security; they probably just want to reduce x-risk to the lowest level possible in their lifetime.

Achieving existential security may require novel approaches. Some have said AI can help us achieve it, others say we need to promote international cooperation, and others say we may need to maximise economic growth or technological progress to speed through the time of perils. These approaches may seem lacking to a thoughtful shorttermist who may prefer reducing specific risks.

Replies from: timunderwood
comment by timunderwood · 2022-04-08T19:48:17.446Z · EA(p) · GW(p)

Maybe. I mean, I've been thinking about this a lot lately in the context of Phil Torres's argument about messianic tendencies in long termism, and I think he's basically right that it can push people towards ideas that don't have any guard rails.

A total utilitarian long termist would prefer a 99 percent chance of human extinction with a 1 percent chance of a glorious transhuman future stretching across the lightcone to a 100 percent chance of humanity surviving for 5 billion years on earth.

That after all is what shutting up and multiplying tells you -- so the idea that long termism makes luddite solutions to X-risk (which, to be clear, would also be incredibly difficult to implement and maintain) extra unappealing relative to how a short termist might feel about them seems right to me.

Of course there is also the other direction: if there was a 1/1 trillion chance that activating this AI would kill us all, and a 999,999,999,999/1 trillion chance it would be awesome, but if you wait a hundred years you can have an AI that has only a 1/1 quadrillion chance of killing us all, a short termist pulls the switch, while the long termist waits.
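With invented numbers (nothing at this precision is meaningful, as the comment goes on to say), the two hypotheticals above can be sketched as expected values; the future-value figures are assumptions for illustration only:

```python
# Hypothetical 1: 1% shot at a lightcone-spanning transhuman future vs. a
# guaranteed 5 billion years on Earth (all values are invented).
lightcone_value = 1e30    # assumed life-years in the glorious future
earth_value = 5e19        # ~10 billion people x 5 billion years

assert 0.01 * lightcone_value > 1.0 * earth_value  # the gamble "wins"

# Hypothetical 2: activate a 1-in-a-trillion-risk AI now, or wait 100 years
# for a 1-in-a-quadrillion-risk one.
loss_now = 1e-12 * lightcone_value    # longtermist expected loss from acting
loss_wait = 1e-15 * lightcone_value   # far smaller -- the longtermist waits
# The short-termist weighs the tiny risk difference against a whole century
# of delayed benefits, and pulls the switch.
assert loss_now > loss_wait
```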

 

Also, of course, model error: any estimate where someone actually uses numbers like '1/1 trillion' for the chance that anything remotely interesting will happen in the real world is nonsense and a bad calculation.

comment by Mathieu Putz · 2022-04-08T14:22:35.618Z · EA(p) · GW(p)

I think ASB's recent post about Peak Defense vs Trough Defense in Biosecurity [EA · GW] is a great example of how the longtermist framing can end up mattering a great deal in practical terms.

comment by N N · 2022-04-06T21:55:30.378Z · EA(p) · GW(p)

 MacAskill (who I believe coined the term?) does not think that the present is the hinge of history. I think the majority view among self-described longtermists is that the present is the hinge of history. But the term unites everyone who cares about things that are expected to have large effects on the long-run future (including but not limited to existential risk). 

I think the term's agnosticism about whether we live at the hinge of history and whether existential risk in the next few decades is high is a big reason for its popularity.

Replies from: Buck, Linch
comment by Buck · 2022-04-06T23:43:18.949Z · EA(p) · GW(p)

I think that the longtermist EA community mostly acts as if we're close to the hinge of history, because most influential longtermists disagree with Will on this. If Will's take was more influential, I think we'd do quite different things than we're currently doing.

Replies from: tylermjohn, jackmalde
comment by tylermjohn · 2022-04-07T10:55:14.941Z · EA(p) · GW(p)

I'd love to hear what you think we'd be doing differently. With JackM, I think if we thought that hinginess was pretty evenly distributed across centuries ex ante we'd be doing a lot of movement-building and saving, and then distributing some of our resources at the hingiest opportunities we come across at each time interval. And in fact that looks like what we're doing. Would you just expect a bigger focus on investment? I'm not sure I would, given how much EA is poised to grow and how comparably little we've spent so far. (Cf. Phil Trammell's disbursement tool https://www.philiptrammell.com/dpptool/)

comment by Jack Malde (jackmalde) · 2022-04-07T05:35:27.201Z · EA(p) · GW(p)

I think if we’re at the most influential point in history “EA community building” doesn’t make much sense. As others have said it would probably make more sense to be shouting about why we’re at the most influential point in history i.e. do “x-risk community building” or of course do more direct x-risk work.

I suspect we’d also do less global priorities research (although perhaps we don’t do that much as it is). If you think we’re at the most influential time you probably have a good reason for thinking that (x-risk abnormally high) which then informs what we should do (reduce it). So you wouldn’t need much more global priorities research. You would still need more granular research into how to reduce x-risk though.

More is also being said on the possibility of investing for the future financially, which isn't a great idea if we're at the most influential time in history.

I agree the movement is mostly "hingy" in nature, but perhaps not to the same extent as you do. 80,000 Hours is an influential body that promotes EA community building, global priorities research, and to some extent investing for the future.

Replies from: Stefan_Schubert, Jay Bailey
comment by Stefan_Schubert · 2022-04-07T11:12:16.871Z · EA(p) · GW(p)

I think if we’re at the most influential point in history “EA community building” doesn’t make much sense. 

I'm not sure I agree with that. It seems to me that EA community building is channelling quite a few people to direct existential risk reduction work.

Replies from: jackmalde
comment by Jack Malde (jackmalde) · 2022-04-07T11:30:24.064Z · EA(p) · GW(p)

My point is that you could engage in "x-risk community building" which may more effectively get people working on reducing x-risk than "EA community building" would.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2022-04-07T11:38:52.496Z · EA(p) · GW(p)

There are a bunch of considerations affecting that, including that we already do EA community building and that big switches tend to be costly. However that pans out in aggregate, I think "doesn't make much sense" is an overstatement.

Replies from: jackmalde
comment by Jack Malde (jackmalde) · 2022-04-07T11:43:44.309Z · EA(p) · GW(p)

I never actually said we should switch, but if we knew from the start “oh wow we live at the most influential time ever because x-risk is so high” we probably would have created an x-risk community not an EA one.

And to be clear I’m not sure where I personally come out on the hinginess debate. In fact I would say I’m probably more sympathetic to Will’s view that we currently aren’t at the most influential time than most others are.

Replies from: timunderwood
comment by timunderwood · 2022-04-08T19:42:21.130Z · EA(p) · GW(p)

My feeling is that it was a bit that people who wanted to attack global poverty efficiently decided to call themselves effective altruists, and then a bunch of Less Wrongers came over and convinced (a lot of) them that 'hey, going extinct is an even bigger deal', but the name still stuck, because names are sticky things.

comment by Jay Bailey · 2022-04-07T05:58:16.440Z · EA(p) · GW(p)

That also depends on how wide you consider a "point". A lot of longtermists talk of this as the "most important century", not the most important year, or even decade. Considering EA as a whole is less than twenty years old, investing in EA and global priorities research might still make sense, even under a simplified model where 100% of the impact EA will ever have occurs by 2100, and then we don't care any more. Given a standard explore/exploit algorithm, we should spend around 37% of the time horizon exploring, so if we assume EA started around 2005, we should still be exploring until 2040 or so before pivoting and going all-in on the best things we've found.
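The 37% figure above comes from the classic 1/e stopping rule (as in the secretary problem); here is a quick sketch of the arithmetic, loosely applying that rule to calendar years, with the 2005 start and 2100 horizon taken from the comment:

```python
import math

# Explore/exploit arithmetic from the comment above: if EA's impact horizon
# runs from roughly 2005 to 2100, the classic 1/e ("37%") stopping rule
# says to spend the first ~37% of that window exploring.
start, end = 2005, 2100
explore_fraction = 1 / math.e  # ~0.368
switch_year = start + explore_fraction * (end - start)
print(round(switch_year))  # 2040
```

This is only a toy application: the secretary problem assumes candidates arrive in random order with no recall, neither of which really holds for cause areas.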

comment by Linch · 2022-04-07T14:05:50.120Z · EA(p) · GW(p)

Some loose data on this: 

Of the ~900 people who filled my Twitter poll about whether we lived in the most important century, about 1/3 said "yes," about 1/3 said "no," and about 1/3 said "maybe."

comment by sawyer · 2022-04-08T18:32:55.801Z · EA(p) · GW(p)

As Nathan Young mentioned in his comment, this argument is also similar to Carl Shulman's view expressed in this podcast: https://80000hours.org/podcast/episodes/carl-shulman-common-sense-case-existential-risks/

comment by ben.smith · 2022-04-07T05:38:44.531Z · EA(p) · GW(p)

Speaking about AI Risk particularly, I haven't bought into the idea there's a "cognitively substantial" chance AI could kill us all by 2050. And even if I had done, many of my interlocutors haven't either. There's two key points to get across to bring the average interlocutor on the street or at a party into an Eliezer Yudkowsky level of worrying:

  • Transformative AI will likely happen within 10 years, or 30
  • There's a significant chance it will kill us all, or at least a catastrophic number of people (e.g. >100m)

It's not trivial to convince people of either of these points without sounding a little nuts. So I understand why some people prefer to take the longtermist framing. Then it doesn't matter whether transformative AI will happen in 10 years or 30 or 100, and you only have to make the argument about why you should care about the magnitude of this problem.

If I think AI has a maybe 1% chance of being a catastrophic disaster, rather than, say, the 1/10 that Toby Ord gives it over the next 100 years or the higher risk that Yud gives it (>50%? I haven't seen him put a number to it)...then I have to go through the additional step of explaining to someone why they should care about a 1% risk of something. After the pandemic, where the statistically average person has a ~1% chance of dying from covid, it has been difficult to convince something like 1/3 of the population to give a shit about it. The problem with small numbers like 1%, or even 10%, is a lot of people just shrug and dismiss them. Cognitively they round to zero. But the conversation "convince me 1% matters" can look a lot like just explaining longtermism to someone.

Replies from: Jay Bailey
comment by Jay Bailey · 2022-04-07T05:44:33.032Z · EA(p) · GW(p)

The way I like to describe it to my Intro to EA cohorts in the Existential Risk week is to ask "How many people, probabilistically, would die each year from this?"

So, if I think there's a 10% chance AI kills us in the next 100 years, that's 1 in 1,000 people "killed" by AI each year, or 7 million per year - roughly 17x more than malaria. 

If I think there's a 1% chance, AI risk kills 700,000 - it's still just as important as malaria prevention, and much more neglected.

If I think there's an 0.1% chance, AI kills 70,000 - a non-trivial problem, but not worth spending as many resources on as more likely concerns.

That said, this only covers part of the inferential distance - people in Week 5 of the Intro to EA cohort are already used to reasoning quantitatively about things and analysing cost-effectiveness.
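Jay Bailey's framing above can be sketched in a few lines; the ~7 billion population figure and the even spread of risk across the century are simplifying assumptions from the comment, not a real hazard model:

```python
# Sketch of the "probabilistic annual deaths" framing above. Assumes a
# world population of ~7 billion and that a century-level extinction
# probability can be spread evenly across 100 years.
def expected_annual_deaths(p_century, population=7_000_000_000):
    """Expected deaths per year if probability p_century is spread over 100 years."""
    return (p_century / 100) * population

print(expected_annual_deaths(0.10))   # 10% case: ~7 million per year
print(expected_annual_deaths(0.01))   # 1% case: ~700,000 per year
print(expected_annual_deaths(0.001))  # 0.1% case: ~70,000 per year
```

The 17x malaria comparison then follows from annual malaria deaths of roughly 400,000.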

comment by Harry Taussig (Harry_Taussig) · 2022-04-09T16:45:02.614Z · EA(p) · GW(p)

Thank you for writing this! This helped me understand my negative feelings towards long-termist arguments so much better. 

In talking to many EA University students and organizers, I've found that many of them have serious reservations about long-termism as a philosophy, but not as a practical project, because long-termism as a practical project usually means "don't die in the next 100 years", which is something we can pretty clearly make progress on (this matters, since the usual objection is that maybe we can't influence the long-term future).

I've been frustrated that in the intro fellowship and in EA conversations we must take such a strange path to something so intuitive: let's try to avoid billions of people dying this century. 

comment by Andrew Critch (critch) · 2022-05-16T02:46:05.952Z · EA(p) · GW(p)

Scott, thanks so much for this post.  It's been years coming in my opinion.  FWIW, my reason for making ARCHES (AI Research Considerations for Human Existential Safety) explicitly about existential risk, and not about "AI safety" or some other glomarization, is that I think x-risk and x-safety are not long-term/far-off concerns that can be procrastinated away.  

https://forum.effectivealtruism.org/posts/aYg2ceChLMRbwqkyQ/ai-research-considerations-for-human-existential-safety [EA · GW]  (with David Krueger)

Ideally, we need to engage as many researchers as possible, thinking about as many aspects of a functioning civilization as possible, to assess how A(G)I can creep into those corners of civilization and pose an x-risk, with cybersecurity / internet infrastructure and social media being extremely vulnerable fronts that are easily salient today.  

As I say this, I worry that other EAs will get worried that talking to folks working on cybersecurity or recommender systems necessarily means abandoning existential risk as a priority, because those fields have not historically taken x-risk seriously.   

However, for better or for worse, it's becoming increasingly easy for everyone to imagine cybersecurity and/or propaganda disasters involving very powerful AI systems, such that x-risk is increasingly not-a-stretch-for-the-imagination. So, I'd encourage anyone who feels like "there is no hope to convince [group x] to care" to start re-evaluating that position (e.g., rather than aiming/advocating for drastic interventions like invasive pivotal acts). I can't tell whether or not you-specifically are in the "there is no point in trying" camp, but others might be, and in any case I thought it might be good to bring up.

In summary: as tech gets scarier, we should have some faith that people will be more amenable to arguments that it is in fact dangerous, and re-examine whether this-group or that-group is worth engaging on the topic of existential safety as a near-term priority.

comment by Charles He · 2022-04-07T17:32:16.148Z · EA(p) · GW(p)

Imagine it's 2022. You wake up and check the EA forum to see that Scott Alexander has a post knocking the premise of longtermism and it's sitting in at 200 karma. On top, Holden Karnofsky has a post [EA · GW] saying he may be only 20% convinced that x-risk itself is overwhelmingly important. Also, Joey Savoie is hanging in there [EA · GW].

Obviously, I’ll write in to support longtermism.

Below is one long story about how some people might change their views; in this story, x-risk alone wouldn't work.

Replies from: Charles He
comment by Charles He · 2022-04-07T17:46:52.471Z · EA(p) · GW(p)

TL;DR: Some people think the future is really bad and don't value it. You need something besides x-risk to engage them, like a competent and coordinated movement to improve the future. Without this, x-risk and other EA work might be meaningless too. The explanation below has an intuitive or experiential quality, not a numerical one. I don't know if this is actually longtermism.

Many people don't consider future generations valuable because they have a pessimistic view of human society. I think this is justifiable. 

Then, if you think society will remain in its current state, it's reasonable that you might not want to preserve it. If you only ever think about one or two generations into the future, like I think most people do, it's hard to see the possibility of change. So I think this "negative" mentality is self-reinforcing, they're stuck. 

To these people, the idea of x-risk doesn't make sense, not because these dangers aren't real but because there isn't anything to preserve. To these people, giant numbers like 10^30 are especially unconvincing, because they seem silly and, if anything, we owe the future a small society.

I think the above is an incredibly mainstream view. Many people with talent, perception and resources might hold it.

The alternative to the mindset above is to see a long future that has possibilities. That there is a substantial possibility that things can be a lot better. And that it is viable to actually try to influence it. 

I think these three sentences above seem "simple", but for this to substantially enter some people's world view, these ideas need to go together at the same time. Because of this, it's non-obvious and unconvincing.

I think one reason why the idea or movement for influencing the future is valuable is that most people don't know anyone who is seriously trying. It takes a huge amount of coordination and resources to do this. It's bizarre to do this on your own or with a small group of people.

I think everyone, deep down, wants to be optimistic about the future and humanity. But they don't take any action or spend time thinking about it. 

With an actual strong movement that seems competent, it is possible to convince people that there can be enough focus and viable investment to improve the future. It is this viability and assessment that produces a mental shift to optimism and engagement.

So this is the value of presenting the long term future in some way.

To be clear, in making this shift, people are being drawn in by competence. Competence involves "rational" thinking, planning and calculation, and all sorts of probabilities and numbers. 

But for these people, despite what is commonly presented, I'm not sure focusing on numbers, or using Bayes, etc. may play any role in this presentation. If someone told me they changed their worldview because they ran numbers, I would be suspicious. Even now, most of the time, I am skeptical when I see huge numbers or intricate calculations. 

Instead, this is a mindset or worldview that is intuitive. To kind of see this, this text seems convincing [EA(p) · GW(p)] ("Good ideas change the world, or could possibly save it...") but doesn't use any calculations. I think this sort of thinking is how most people actually change their views about complex topics. 

To have this particular change in view, I think you still need to have further beliefs that might be weird or unusual:

  • You need to have a sense of personal agency, that you can affect the future through your own actions, even though there are billions of people. This might be aggressive or wrong.
  • You might also need to have judgment of society and institutions that are "just right".
    • You need to believe society could go down a bad path because institutions are currently dysfunctional and fragile.
    • Yet, you need to believe it's possible to design ones that are robust to change the future.

I have no idea if the above is longtermism at all. This seems sort of weak, and seems like it only would compel me to execute my particular beliefs. 

It would be sort of surprising if many people held the particular viewpoint described in this comment.

This viewpoint does have the benefit that you could ask questions to interrogate these beliefs (people couldn't just say there's "10^42 people" or something).

comment by Greg_Colbourn · 2022-04-07T09:43:15.390Z · EA(p) · GW(p)

Yes! Thanks for this Scott. X-risk prevention is a cause that both neartermists and longtermists can get behind. I think it should be reinstated as a top-level EA cause area in its own right, distinct from longtermism (as I've said here [EA(p) · GW(p)]).

if you're under ~50, unaligned AI might kill you and everyone you know. Not your great-great-(...)-great-grandchildren in the year 30,000 AD. Not even your children. You and everyone you know.

It's a sobering thought. See also: AGI x-risk timelines: 10% chance (by year X) estimates should be the headline, not 50%. [EA · GW]

comment by Michael_Wiebe · 2022-04-08T00:26:53.238Z · EA(p) · GW(p)

Are there actually any short-termists? Eg. people who have nonzero pure time preference?

Replies from: Vanessa, MichaelDickens
comment by Vanessa · 2022-04-09T17:08:44.357Z · EA(p) · GW(p)

IMO everyone has pure time preference (descriptively, as a revealed preference). To me it just seems commonsensical, but it is also very hard to mathematically make sense of rationality without pure time preference, because of issues with divergent/unbounded/discontinuous utility functions. My speculative first-approximation theory of pure time preference for humans is: choose a policy according to minimax regret over all exponential time discount constants, starting from around the scale of a natural human lifetime and going to infinity. For a better approximation, you need to also account for hyperbolic time discount.

Replies from: Michael_Wiebe, Guy Raveh
comment by Michael_Wiebe · 2022-04-09T21:01:36.520Z · EA(p) · GW(p)

Can't you get the integral to converge with discounting for exogenous extinction risk and diminishing marginal utility? You can have pure time preference = 0 but still have a positive discount rate.
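As a sketch (my own, not from the thread): with zero pure time preference but a constant extinction hazard rate $x > 0$ and bounded utility $|u(c_t)| \le \bar{u}$, the planner's integral does converge:

```latex
V = \int_0^\infty e^{-x t}\, u(c_t)\, dt,
\qquad
|V| \;\le\; \bar{u} \int_0^\infty e^{-x t}\, dt \;=\; \frac{\bar{u}}{x} \;<\; \infty .
```

The bound $\bar{u}/x$ blows up as $x \to 0$, so a sufficiently uninformative prior over the hazard rate reopens the divergence problem.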

Replies from: Vanessa
comment by Vanessa · 2022-04-10T09:22:42.138Z · EA(p) · GW(p)

The question is, what is your prior about extinction risk? If your prior is sufficiently uninformative, you get divergence. If you dogmatically believe in extinction risk, you can get convergence, but then it's pretty close to having intrinsic time discount. To the extent it is not the same, the difference comes through privileging hypotheses that are harmonious with your dogma about extinction risk, which seems questionable.

Replies from: Michael_Wiebe
comment by Michael_Wiebe · 2022-04-10T16:55:34.153Z · EA(p) · GW(p)

Yes, if the extinction rate is high (and precise) enough, then it converges, but otherwise not.

Regarding your first comment, I'm focusing on the normative question, not descriptive (ie. what should a social planner do?). So I'm asking if there are EAs who think a social planner should have nonzero pure time preference.

Replies from: Vanessa
comment by Vanessa · 2022-04-11T05:26:25.856Z · EA(p) · GW(p)

I dunno if I count as "EA", but I think that a social planner should have nonzero pure time preference, yes.

Replies from: Michael_Wiebe
comment by Michael_Wiebe · 2022-04-11T20:39:38.107Z · EA(p) · GW(p)

Why?

Replies from: Vanessa
comment by Vanessa · 2022-04-12T04:49:41.931Z · EA(p) · GW(p)

Because, ceteris paribus, I care about things that happen sooner more than about things that happen later. And, like I said, not having pure time preference seems incoherent.

As a meta-sidenote, I find that arguments about ethics are rarely constructive, since there is too little in the way of agreed-upon objective criteria and too much in the way of social incentives to voice / not voice certain positions. In particular when someone asks why I have a particular preference, I have no idea what kind of justification they expect (from some ethical principle they presuppose? evolutionary psychology? social contract / game theory?)

Replies from: jackmalde
comment by Jack Malde (jackmalde) · 2022-04-12T05:36:59.501Z · EA(p) · GW(p)

Because, ceteris paribus I care about things that happen sooner more than about things that happen later.

This is separate to the normative question of whether or not people should have zero pure time preference when it comes to evaluating the ethics of policies that will affect future generations. Surely the fact that I'd rather have some cake today rather than tomorrow cannot be relevant when I'm considering whether or not I should abate carbon emissions so my great grandchildren can live in a nice world - these simply seem separate considerations with no obvious link to each other. If we're talking about policies whose effects don't (predictably) span generations I can perhaps see the relevance of my personal impatience, but otherwise I don't.

Also,  having non-zero pure time preference has counterintuitive implications. From here:

If applied consistently to the past, a modest rate of time preference of just 1% per annum would imply that Tutankhamen was more important than all 7 billion humans alive today.

So if hypothetically we were alive around King Tut's time and we were given the mandatory choice to either torture him or, with certainty, cause the torture of all 7 billion humans today we would easily choose the latter with a 1% rate of pure time preference (which seems obviously wrong to me).
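The quoted Tutankhamen claim checks out numerically; a quick verification (the ~3,300-year gap is an approximation):

```python
# With a 1% annual rate of pure time preference, a life ~3,300 years ago
# (roughly Tutankhamen's era) carries more weight than all 7 billion
# lives today: 1.01**3300 is on the order of 10^14.
years_since_tut = 3_300  # approximate; Tutankhamen died c. 1323 BC
tut_weight = 1.01 ** years_since_tut
print(tut_weight > 7_000_000_000)  # True
```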

If you do want non-zero rate of pure time preference you will probably need it to decline quickly over time to make much ethical sense (see here and my explanation here [EA · GW]).

Replies from: Vanessa
comment by Vanessa · 2022-04-12T09:47:03.585Z · EA(p) · GW(p)

This is separate to the normative question of whether or not people should have zero pure time preference when it comes to evaluating the ethics of policies that will affect future generations.

I am a moral anti-realist. I don't believe in ethics the way utilitarians (for example) use the word. I believe there are certain things I want, and certain things other people want, and we can coordinate on that. And coordinating on that requires establishing social norms, including what we colloquially refer to as "ethics". Hypothetically, if I have time preference and other people don't, then I would agree to coordinate on a compromise. In practice, I suspect that everyone has time preference.

So if hypothetically we were alive around King Tut's time and we were given the mandatory choice to either torture him or, with certainty, cause the torture of all 7 billion humans today we would easily choose the latter with a 1% rate of pure time preference (which seems obviously wrong to me).

You can avoid this kind of conclusions if you accept my decision rule of minimax regret over all discount timescales from some finite value to infinity.

Replies from: jackmalde
comment by Jack Malde (jackmalde) · 2022-04-12T21:14:11.268Z · EA(p) · GW(p)

Hypothetically, if I have time preference and other people don't then I would agree to coordinate on a compromise. In practice, I suspect that everyone have time preference.

Most people do indeed have pure time preference in the sense that they are impatient and want things earlier rather than later. However, this says nothing about their attitude to future generations.

Being impatient means you place more importance on your present self than your future self, but it doesn't mean you care more about the wellbeing of some random dude alive now than another random dude alive in 100 years. That simply isn't what "impatience" means.

For example - I am impatient. I personally want things sooner rather than later in my life. I don't however think that the wellbeing of a random person now is more important than the wellbeing of a random person alive in 100 years. That's an entirely separate consideration to my personal impatience.

comment by Guy Raveh · 2022-04-10T22:43:03.168Z · EA(p) · GW(p)

I mean, physics solves the divergence/unboundedness problem with the universe achieving heat death eventually. So one can assume some distribution on the time bound, at the very least. Whether that makes having no time discount reasonable in practice, I highly doubt.

comment by MichaelDickens · 2022-04-08T17:49:09.857Z · EA(p) · GW(p)

I don't know of any EAs or philosophers with a nonzero pure time preference, but it's pretty common to believe that creating new lives is morally neutral. Someone who believes this might plausibly be a short-termist. I have a few friends who are short-termist for that reason.

Replies from: Michael_Wiebe
comment by Michael_Wiebe · 2022-04-08T19:05:20.976Z · EA(p) · GW(p)

Hmm, is it consistent to have zero pure time preference and be indifferent to creating new lives?

Replies from: MichaelDickens
comment by MichaelDickens · 2022-04-08T19:51:40.886Z · EA(p) · GW(p)

Yeah, the two things are orthogonal as far as I can see. The person-affecting view is perfectly consistent with either a zero or a nonzero pure time preference.

Replies from: Michael_Wiebe
comment by Michael_Wiebe · 2022-04-08T22:00:47.664Z · EA(p) · GW(p)

Okay, so you could hold the person-affecting view and be indifferent to creating new lives, but also have zero pure time preference in that you don't value future lives any less because they're in the future.

So this is really getting at creating new lives vs how to treat them given that they already exist.

comment by Charlie_Guthmann (Charles_Guthmann) · 2022-04-06T22:35:43.396Z · EA(p) · GW(p)

Longtermism ≠ existential risk, though it seems the community has more or less decided they mean similar things (at least while at our current point in history).

Here is an argument to the contrary, "the civilization dice roll": current human society becoming grabby will be worse for the future of our lightcone than the counterfactual society that will (might) exist and end up becoming grabby if we die out / our civilization collapses.

Now, to directly answer your point on x-risk vs longtermism, yes you are correct. Fear mongering will always trump empathy mongering in terms of getting people to care. We might worry though that in a society already full of fear mongering, we actually need to push people to build their thoughtful empathy muscles, not their thoughtful fear muscles. That is to say, we want people to care about x-risk because they care about other people, not because they care about themselves.

So now turning back to the dice roll argument, we may prefer to survive because we became more empathetic/expanded our moral circle and as a result cared about x-risk, rather than because we just really really didn't want to die in the short-term.  Once (if) we pass the hinge of history, or at least the peak of existential risk, we still have to decide what the fate of our ecosystem will be. Personally, I would prefer we decide with maximal moral circles. 

Some potential gaps in my argument. (1) There might be reasons to believe that our lightcone will be better off with current human society becoming grabby, in which case we really should just be optimizing almost exclusively on reducing x-risk (probably). (2) Focusing on fear mongering x-risk rather than empathy mongering x-risk will not decrease the likelihood of people expanding their moral circles; maybe it will even increase moral circle expansion because it will actually get people to grapple with the possibility of these issues. (3) Moral circle expansion won't actually make the future go better. (4) AI will be uncorrelated with human culture, so this whole argument is sort of irrelevant if the AI does the grabbing.

comment by WilliamKiely · 2022-04-08T04:14:44.857Z · EA(p) · GW(p)
But I don't think long-termists are actually asking for $30 million to make the apocalypse 0.0001% less likely - both because we can't reliably calculate numbers that low, and because if you had $30 million you could probably do much better than 0.0001%.

Agreed. Linch's .01% Fund [EA · GW] post proposes a research/funding entity that identifies projects that can reduce existential risk by 0.01% for $100M-$1B. This is 3x-30x as cost-effective as the quoted figure, while targeting a reduction 100x the size.
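The 3x-30x figure can be checked directly; the "unit" here is a hypothetical 0.0001% reduction in existential risk, with dollar figures taken from the two proposals:

```python
# Cost per 0.0001% reduction in x-risk, under each proposal.
quoted_cost_per_unit = 30e6  # $30M buys one 0.0001% unit (Scott's example)
linch_cheap = 100e6 / 100    # $100M for 0.01% = 100 units -> $1M per unit
linch_dear = 1e9 / 100       # $1B for 0.01%  = 100 units -> $10M per unit
print(quoted_cost_per_unit / linch_cheap)  # 30.0 (best case)
print(quoted_cost_per_unit / linch_dear)   # 3.0 (worst case)
```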

comment by Nathan Young (nathan) · 2022-04-07T09:01:11.779Z · EA(p) · GW(p)

I have been working on a tweet length version of this argument for a while. I encourage someone to beat me to it. I agree with Neel and Scott (and Carl Shulman) that this argument is much more succinct and emotive and I think I should get better at making it.

Something like:

[quote tweeting a poll on survival to 2100] 38% of my followers think there is a > 5% chance all humans are dead by 2100. Let's assume they are way wrong and it's only .5%. 

[how does this compare to other things that might kill you]

[how does this compare in terms of spending to how much ought to be spent to how much is]

Replies from: nathan
comment by Nathan Young (nathan) · 2022-04-07T09:42:42.854Z · EA(p) · GW(p)

Here is v1.0. Can you do better? https://twitter.com/NathanpmYoung/status/1512000005254664194?s=20&t=LnIr0K87oWgFlqP6qKH4IQ

comment by MichaelStJules · 2022-04-07T04:54:16.348Z · EA(p) · GW(p)

In practice, I think there's almost never an option to save 1 billion people from poverty with certainty. When I said that there was, that was a hack I had to put in there to make the math work out so that the short-termist would come to a different conclusion from the long-termist.

GiveDirectly could get pretty high probabilities (or come close, for a smaller number of people at lower cost), although it's not the favoured intervention of those focused on global health and poverty.

Another notable remaining difference is that extinction is all or nothing, so your chance (and the whole community's chance) of doing any good at all is much lower, although its impact would be much higher when you do make a difference.

When people allocate money to causes other than existential risk, I think it's more often as a sort of moral parliament maneuver, rather than because they calculated it out and the other cause is better in a way that would change if we considered the long-term future.

I would guess it's usually based on requiring higher standards of evidence to support an intervention (and greater skepticism without), so they actually think GiveWell interventions are more cost-effective on the margin.

comment by LiaH · 2022-04-18T06:16:43.831Z · EA(p) · GW(p)

"Value change" type work - gradually shifting civilizational values to those more in line with human flourishing - might fall into this category too.

This is the first I have seen reference to norm changing in EA. Is there other writing on this idea?

comment by WilliamKiely · 2022-04-08T19:35:51.340Z · EA(p) · GW(p)
projects that increase the rate of GDP growth by .01% per year in a way that compounds over many centuries

Michael Wiebe comments: "Can we please stop talking about GDP growth like this? There's no growth dial that you can turn up by 0.01, and then the economy grows at that rate forever. In practice, policy changes have one-off effects on the level of GDP, and at best can increase the growth rate for a short time before fading out. We don't have the ability to increase the growth rate for many centuries."