Median household income (worldwide, not in USA) is the thing that sticks with me the most and seems most eye-opening... Looking it up now, it seems that it is $15,900 per year. Imagine your entire household bringing in that much, and then think: that's what life would be like if we were right in the middle.
Good point, I'll add analogy to the list. Much that is called reference class forecasting is really just analogy, and often not even a good analogy.
I really think we should taboo "outside view." If people are forced to use the term "reference class" to describe what they are doing, it'll be more obvious when they are doing epistemically shitty things, because the term "reference class" invites the obvious next questions: 1. What reference class? 2. Why is that the best reference class to use?
I agree it's hard to police how people use a word; thus, I figured it would be better to just taboo the word entirely.
I totally agree that it's hard to use reference classes correctly, because of the reference-class tennis problem. I figured explaining this was outside the scope of this post, but I was thinking about making a follow-up... at any rate, I'm optimistic that if people actually use the words "reference class" instead of "outside view," it will remind them to notice that there is more than one reference class available, that it's important to argue that the one you are using is the best, etc.
I'm surprised you don't mention what seems to me to be the most likely scenario, 0. : Mutually assured destruction, nuclear winter, etc. The world looks like 1 or 2 up until some series of accidents and mistakes causes sufficiently many nukes to be fired that we end up in nuclear winter.
(Think about the history of cold war nuclear close calls. Now imagine that sort of thing is happening not just between two countries but everywhere. Surely there would be accidental escalations to full-on nuclear combat at least sometimes, and when two countries are going at it with nukes, probably that raises the chances of other countries getting involved on purpose or on accident)
Sorry for the delayed reply! Didn't notice this until now.
Sure, I'd be happy to see your slides, thanks! Looking at your post on FAI and valence, it looks like reasons no. 3, 4, 5, and 9 are somewhat plausible to me. I also agree that there might be philosophical path-dependencies in AI development and that doing some of the initial work ourselves might help to discover them--but I feel like QRI isn't aimed at this directly and could achieve this much better if it was; if it happens it'll be a side-effect of QRI's research.
For your flipped criticism:
--I think bolstering the EA community and AI risk communities is a good idea
--I think "blue sky" research on global priorities, ethics, metaphilosophy, etc. is also a good idea if people seem likely to make progress on it
--Obviously I think AI safety, AI governance, etc. are valuable
--There are various other things that seem valuable because they support those things, e.g. trying to forecast decline of collective epistemology and/or prevent it.
--There are various other things that don't impact AI safety but independently have a decently strong case that they are similarly important, e.g. ALLFED or pandemic preparedness.
--I'm probably missing a few things
--My metaphysical uncertainty... If you mean how uncertain am I about various philosophical questions like what is happiness, what is consciousness, etc., then the answer is "very uncertain." But I think the best thing to do is not try to think about it directly now, but rather to try to stabilize the world and get to the Long Reflection so we can think about it longer and better later.
Yep, that's roughly correct as a statement of my position. Thanks. I guess I'd put it slightly differently in some respects -- I'd say something like "A good test for whether to do some EA project is how likely it is that it's within a few orders of magnitude as good as AI safety work. There will be several projects for which we can tell a not-too-implausible story for how they are close to as good or better than AI safety work, and then we can let tractability/neglectedness/fit considerations convince us to do them. But if we can't even tell such a story in the first place, that's a pretty bad sign." The general thought is: AI safety is the "gold standard" to compare against, since it's currently No. 1 priority in my book. (If something else was No. 1, it would be my gold standard.)
I think QRI actually can tell such a story, I just haven't heard it yet. In the comments it seems that a story like this was sketched. I would be interested to hear it in more detail. I don't think the very abstract story of "we are trying to make good experiences but we don't know what experiences are" is plausible enough as a story for why this is close to as good as AI safety. (But I might be wrong about that too.)
re: A: Hmmm, fair enough that you disagree, but I have the opposite intuition.
re: B: Yeah I think even the EA community underweights AI safety. I have loads of respect for people doing animal welfare stuff and global poverty stuff, but it just doesn't seem nearly as important as preventing everyone from being killed or worse in the near future. It also seems much less neglected--most of the quality-adjusted AI safety work is being done by EA-adjacent people, whereas that's not true (I think?) for animal welfare or global poverty stuff. As for tractability, I'm less sure how to make the comparison--it's obviously much more tractable to make SOME improvement to animal welfare or the lives of the global poor, but if we compare helping ALL the animals / ALL the global poor to AI safety, it actually seems less tractable (while still being less important and less neglected). There's a lot more to say about this topic obviously, and I worry I come across as callous or ignorant of various nuances... so let me just say I'd love to discuss with you further and hear your thoughts.
re: D: I'm certainly pretty uncertain about the improving collective sanity thing. One reason I'm more optimistic about it than QRI is that I see how it plugs in to AI safety: If we improve collective sanity, that massively helps with AI safety, whereas if we succeed at understanding consciousness better, how does that help with AI safety? (QRI seems to think it does, I just don't see it yet) Therefore sanity-improvement can be thought of as similarly important to AI safety (or alternatively as a kind of AI safety intervention) and the remaining question is how tractable and neglected it is. I'm unsure, but one thing that makes me optimistic about tractability is that we don't need to improve sanity of the entire world, just a few small parts of the world--most importantly, our community, but also certain AI companies and (maybe) governments. And even if all we do is improve sanity of our own community, that has a substantially positive effect on AI safety already, since so much of AI safety work comes from our community. As for neglectedness, yeah IDK. Within our community there is a lot of focus on good epistemology and stuff already, so maybe the low-hanging fruit has been picked already. But subjectively I get the impression that there are still good things to be doing--e.g. trying to forecast how collective epistemology in the relevant communities could change in the coming years, building up new tools (such as Guesstimate or Metaculus) ...
I don't think I would say the same thing to every project discussed on the EA forum. I think for every non-AI-focused project I'd say something similar (why not focus instead on AI?) but the bit about how I didn't find QRI's positive pitch compelling was specific to QRI. (I'm a philosopher, I love thinking about what things mean, but I think we've got to have a better story than "We are trying to make more good and less bad experiences, therefore we should try to objectively quantify and measure experience." Compare: Suppose it were WW2, 1939. We are thinking of various ways to help the allied war effort. An institute designed to study "what does war even mean anyway? What does it mean to win a war? Let's try to objectively quantify this so we can measure how much we are winning and optimize that metric" is not obviously a good idea. Like, it's definitely not harmful, but it wouldn't be top priority, especially if there are various other projects that seem super important, tractable, and neglected, such as preventing the Axis from getting atom bombs.) (I think of the EA community's position with respect to AI as analogous to the position re atom bombs held by the small cohort of people in 1939 "in the know" about the possibility. It would be silly for someone who knew about atom bombs in 1939 to instead focus on objectively defining war and winning.)
But yeah, I would say to every non-AI-related project something like "Will your project be useful for making AI go well? How?" And I think that insofar as one could do good work on both AI safety stuff and something else, one should probably choose AI safety stuff. This isn't because I think AI safety stuff is DEFINITELY the most important, merely that I think it probably is. (Also I think it's more neglected AND tractable than many, though not all, of the alternatives people typically consider)
Some projects I think are still worth pursuing even if they don't help make AI go well. For example, bio risk, preventing nuclear war, improving collective sanity/rationality/decision-making, ... (lots of other things would be added; it all depends on tractability + neglectedness + personal fit.) After all, maybe AI won't happen for many decades or even centuries. Or maybe one of those other risks is more likely to happen soon than it appears.
Anyhow, to sum it all up: I agree that we shouldn't be super confident that AI is the most important thing. Depending on how broadly you define AI, I'm probably about 80-90% confident. And I agree that this means our community should explore a portfolio of ideas rather than just one. Nevertheless, I think even our community is currently less focused on AI than it should be, and I think AI is the "gold standard" so to speak that projects should compare themselves to, and moreover I think QRI in particular has not done much to argue for their case. (Compare with, say, ALLFED which has a pretty good case IMO: There's at least a 1% chance of some sort of global agricultural shortfall prior to AI getting crazy, and by default this will mean terrible collapse and famine, but if we prepare for this possibility it could instead mean much better things (people and institutions surviving, maybe learning)).
My criticism is not directly of QRI but of their argument as presented here. I expect that if I talked with them and heard more of their views, I'd hear a better, more expanded version of the argument that would be much more convincing. In fact I'd say 40% chance QRI ends up seeming better than ALLFED to me after such a conversation. For example, I myself used to think that consciousness research was really important for making AI go well. It might not be so hard to convince me to switch back to that old position.
1. To what extent do some journalists use the Chinese Robber Fallacy deliberately -- they know that they have a wide range of even-worse, even-bigger tragedies and scandals to report on, but they choose to report on the ones that let them push their overall ideology or political agenda? (And they choose not to report on the ones that seem to undermine or distract from their ideology/agenda) 2. Do you agree with the "The parity inverse of a meme is the same meme in a different point of its life cycle" idea? In other words, do you agree with the "Toxoplasma of Rage" thesis?
I currently think consciousness research is less important/tractable/neglected than AI safety, AI governance, and a few other things. The main reason is that it totally seems to me to be something we can "punt to the future" or "defer to more capable successors" to a large extent. However, I might be wrong about this. I haven't talked to QRI at length sufficient to truly evaluate their arguments. (See this exchange, which is about all I've got.)
A common misconception about propaganda is the idea it comes from deliberate lies (on the part of media outlets) or from money changing hands. In my personal experience colluding with the media no money changes hands and no (deliberate) lies are told by the media itself. ... Most media bias actually takes the form of selective reporting. ... Combine the Chinese Robbers Fallacy with a large pool of uncurated data and you can find facts to support any plausible thesis.
Even when a news outlet is broadcasting a lie, their government is unlikely to prosecute them for promoting official government policy. Newspapers abnegate responsibility for truth by quoting official sources. You can get away (legally) with straight-up lying about medical facts if you are quoting the CDC.
News outlets' unquestioning reliance on official sources comes from the economics of their situation. It is cheaper to republish official statements without questioning them. The news outlet which produces the cheapest news outcompetes outlets with higher expenditure.
I wonder if you think the EA community is too slow to update their strategies here. It feels like what is coming is easily among the most difficult things humanity ever has to get right and we could be doing much more if we all took current TAI forecasts more into account.
You guessed it -- I believe that most of EA's best and brightest will end up having approximately zero impact (compared to what they could have had) because they are planning for business-as-usual. The twenties are going to take a lot of people by surprise, I think. Hopefully EAs working their way up the academic hierarchy will at least be able to redirect prestige/status towards those who have been building up expertise in AI safety and AI governance, when the time comes.
I think that if I were going on outside-view economic arguments I'd probably be <50% singularity by 2100.
To what extent is this a repudiation of Roodman's outside-view projection? My guess is you'd say something like "This new paper is more detailed and trustworthy than Roodman's simple model, so I'm assigning it more weight, but still putting a decent amount of weight on Roodman's being roughly correct and that's why I said <50% instead of <10%."
Thanks! I wonder if some sort of two-tiered system would work, where there's a value-aligned staff member who is part of the core team and has lots of money and flexibility and so forth, and then they have a blank check to hire contractors who aren't value-aligned to do various things. That might keep the value-aligned staff member from becoming overworked. Idk though, I have no idea what I'm talking about. What do you think?
One point in favor of 1984 and Animal Farm is that Orwell was intimately familiar with real-life totalitarian regimes, having fought for the communists in Spain etc. His writing is more credible IMO because he's criticizing the side he fought for rather than the side he fought against. (I mean, he's criticizing both, for sure--his critiques apply equally to fascism--but most authors who warn us of dystopian futures are warning us against their outgroup, so to speak, whereas Orwell is warning us against what used to be his ingroup.)
Thanks, this was a surprisingly helpful answer, and I had high expectations!
This is updating me somewhat towards doing more blog posts of the sort that I've been doing. As it happens, I have a draft of one that is very much Category 3, let me know if you are interested in giving comments!
Your sense of why we disagree is pretty accurate, I think. The only thing I'd add is that I do think we should update downwards on low-end compute scenarios because of market efficiency considerations, just not as strongly as you perhaps, and moreover I also think that we should update upwards for various reasons (the surprising recent successes of deep learning, the fact that big corporations are investing heavily-by-historical-standards in AI, the fact that various experts think they are close to achieving AGI) and the upwards update mostly cancels out the downwards update IMO.
Hi Ajeya! I'm a huge fan of your timelines report, it's by far the best thing out there on the topic as far as I know. Whenever people ask me to explain my timelines, I say "It's like Ajeya's, except..."
My question is, how important do you think it is for someone like me to do timelines research, compared to other kinds of research (e.g. takeoff speeds, alignment, acausal trade...)
I sometimes think that even if I managed to convince everyone to shift from median 2050 to median 2032 (an obviously unlikely scenario!), it still wouldn't matter much because people's decisions about what to work on are mostly driven by considerations of tractability, neglectedness, personal fit, importance, etc. and even that timelines difference would be a relatively minor consideration. On the other hand, intuitively it does feel like the difference between 2050 and 2032 is a big deal and that people who believe one when the other is true will probably make big strategic mistakes.
Bonus question: Murphyjitsu: Conditional on TAI being built in 2025, what happened? (i.e. how was it built, what parts of your model were wrong, what do the next 5 years look like, what do the 5 years after 2025 look like?)
Well said. I agree that that is a path to impact for the sort of work QRI is doing, it just seems lower-priority to me than other things like working on AI alignment or AI governance. Not to mention the tractability / neglectedness concerns (philosophy is famously intractable, and there's an entire academic discipline for it already)
Is emotional valence a particularly confused and particularly high-leverage topic, and one that might plausibly be particularly conducive to getting clarity on? I think it would be hard to argue in the negative on the first two questions. Resolving the third question might be harder, but I’d point to our outputs and increasing momentum. I.e. one can levy your skepticism on literally any cause, and I think we hold up excellently in a relative sense. We may have to jump to the object-level to say more.
I don't think I follow. Getting more clarity on emotional valence does not seem particularly high-leverage to me. What's the argument that it is?
To your second concern, I think a lot about AI and ‘order of operations’. ... But might there be path-dependencies here such that the best futures happen if we gain more clarity on consciousness, emotional valence, the human nervous system, the nature of human preferences, and so on, before we reach certain critical thresholds in superintelligence development and capacity? Also — certainly.
Certainly? I'm much less sure. I actually used to think something like this; in particular, I thought that if we didn't program our AI to be good at philosophy, it would come to some wrong philosophical view about what consciousness is (e.g. physicalism, which I think is probably wrong) and then kill us all while thinking it was doing us a favor by uploading us (for example).
But now I think that programming our AI to be good at philosophy should be tackled directly, rather than indirectly by first solving philosophical problems ourselves and then programming the AI to know the solutions. For one thing, it's really hard to solve millennia-old philosophical problems in a decade or two. For another, there are many such problems to solve. Finally, our AI safety schemes probably won't involve feeding answers into the AI, so much as trying to get the AI to learn our reasoning methods and so forth, e.g. by imitating us.
Widening the lens a bit, qualia research is many things, and one of these things is an investment in the human-improvement ecosystem, which I think is a lot harder to invest effectively in (yet also arguably more default-safe) than the AI improvement ecosystem. Another ‘thing’ qualia research can be thought of as being is an investment in Schelling point exploration, and this is a particularly valuable thing for AI coordination.
I don't buy these claims yet. I guess I buy that qualia research might help improve humanity, but so would a lot of other things, e.g. exercise and nutrition. As for the Schelling point exploration thing, what does that mean in this context?
I’m confident that, even if we grant that the majority of humanity's future trajectory will be determined by AGI trajectory — which seems plausible to me — I think it’s also reasonable to argue that qualia research is one of the highest-leverage areas for positively influencing AGI trajectory and/or the overall AGI safety landscape.
Thanks for this detailed and well-written report! As a philosopher (and fan of the cyberpunk aesthetic :) ), I find your project really interesting and exciting. I hope I get to meet you one day and learn more. However, I currently don't see the case for prioritising your project:
Isn’t it perplexing that we’re trying to reduce the amount of suffering and increase the amount of happiness in the world, yet we don’t have a precise definition for either suffering or happiness?
Until we can talk about these things objectively, let alone measure and quantify them reliably, we’ll always be standing in murky water.
It seems like you could make this argument about pretty much any major philosophical question, e.g. "We're trying to reduce the amount of suffering and increase the amount of happiness in the world, yet we don't have a precise definition of the world, or of we, or of trying, and we haven't rigorously established that this is what we should be doing anyway, and what does should mean anyway?"
Meanwhile, here's my argument for why QRI's project shouldn't be prioritized:
--Crazy AI stuff will probably be happening in the next few decades, and if it doesn't go well, the impact of QRI's research will be (relatively) small or even negative.
--If it does go well, QRI's impact will still be small, because the sort of research QRI is doing would have been done anyway after AI stuff goes well. If other people don't do it, the current QRI researchers could do it, and probably do it even better thanks to advanced AI assistance.
Thanks! Makes sense. (To be clear, I wasn't saying that tight control by a single political faction would be a good thing... only that it would fix the polarization problem.) I think the Civil War era was probably more polarized than today, but that's not very comforting given what happened then. Ideally we'd be able to point to an era with greater-than-today polarization that didn't lead to mass bloodshed. I don't know much about the Jefferson-Adams thing but I'd be surprised if it was as bad as today.
For personal fit stuff: I agree that for intellectual work, personal fit is very important. It's just that I have discovered, almost by accident, that I have more personal fit than I realized for things I wasn't trained in. (You may have made a similar discovery?) Had I prioritized personal fit less early on, I would have explored more. I still wonder what sorts of things I could be doing by now if I had tried to reskill instead of continuing in philosophy. Yeah, maybe I would have discovered that I didn't like it and gone back to philosophy, but maybe I would have discovered that I loved it. I guess this isn't against prioritizing personal fit per se, but against how past-me interpreted the advice to prioritize personal fit.
For engaging with people outside EA: I went to a philosophy PhD program and climbed the conventional academic hierarchy for a few years. I learned a bunch of useful stuff, but I also learned a bunch of useless stuff, and a bunch of stuff which is useful but plausibly not as useful as what I would have learned working for an EA org. When I look back on what I accomplished over the last five years, almost all of the best stuff seems to be things I did on the side, extracurricular from my academic work (e.g. doing internships at CEA etc.). I also made a bunch of friends outside EA, which I agree is nice in several ways (e.g. the ones you mention), but to my dismay I found it really hard to get people to lift a finger in the direction of helping the world, even if I could intellectually convince them that e.g. AI risk is worth taking seriously, or that the critiques and stereotypes of EA they heard were incorrect. As a counterpoint, I did have interactions with several dozen people probably, and maybe I caused more positive change than I could see, especially since the world's not over yet and there is still time for the effects of my conversations to grow. Still though: I missed out on several years' worth of EA work and learning by going to grad school; that's a high opportunity cost. As for learning things myself: I heard a lot of critiques of EA, learned a lot about other perspectives on the world, etc., but ultimately I don't think I would be any worse off in this regard if I had just gone into an EA org for the past five years instead of grad school.
Thanks for this! I think my own experience has led to different lessons in some cases (e.g. I think I should have prioritised personal fit less and engaged less with people outside the EA community), but I nevertheless very much approve of this sort of public reflection.
going up against consensus in a deliberative body, be that my Committee or the General Assembly, and convincing my fellow Representatives to reverse course and vote the opposite way they had intended.
It's great to hear that this is not only possible but possible for one person to achieve multiple times in two years. Do you think you were able to do it significantly more often than the average representative? (e.g. because the average representative cares more about conforming to the pack than you and so tries to do this less often?)
What's your model for what's driving political polarization in the US? My model is basically that the internet + a few other technologies is allowing people to sort themselves into filter bubbles, and also toxoplasma of rage stuff is making the bubbles fight each other instead of ignore each other. On this model, things aren't going to get significantly less polarized until our media is tightly controlled by a single political faction.
I think I basically agree with you here. I don't have much to say by way of positive proposals, but maybe this blog post is helpful: http://mindingourway.com/the-value-of-a-life/ Basically, the value of a life should be measured in stars (or something even bigger!), even though the price of a life should be measured in dollars or work-hours. Thus if you do something impactful but less-than-maximally impactful, you should still feel proud, because e.g. the life you contributed to saving is immensely, astronomically valuable.
Interesting post! I'm excited to see more thinking about memetics, for reasons sketched here and here. Some thoughts:
--In my words, what you've done is point out that approximate-consequentialism + large-scale preferences is an attractor. People with small-scale preferences (such as just caring about what happens to their village, or their family, or themselves, or a particular business) don't have much to gain by spreading their memeplex to others. And people who aren't anywhere close to being consequentialists might intellectually agree that spreading their memeplex to others would result in their preferences being satisfied to a greater extent, but this isn't particularly likely to motivate them to do it. But people who are approximately consequentialist and who have large-scale preferences will be strongly motivated to spread their memeplex, because doing so is a convergent instrumental goal for people with large-scale preferences. Does this seem like a fair summary to you?
--I guess it leaves out the "truth-seeking" bit, maybe that should be bundled up with consequentialism. But I think that's not super necessary. It's not hard for people to come to believe that spreading their memeplex will be good by their lights; that is, you don't have to be a rationalist to come to believe this. It's pretty obvious.
--I think it's not obvious this is the strongest attractor, in a world full of memetic attractors. Most major religions are memetic attractors, and they often rely on things other than convergent instrumental goals to motivate their members to spread the memeplex. And they've been extremely successful, far more so than "truth-seeking self-aware altruistic decision-making," even though that memeplex has been around for millennia too.
--On the other hand, maybe truth-seeking self-aware altruistic decision-making has actually been even more successful than every major religion and ideology, and we just don't realize it because, as a result of being truth-seeking, the memeplex morphs constantly, and thus isn't recognized as a single memeplex. (By contrast with religions and ideologies, which enforce conformity and dogma and thus maintain obvious continuity over many years and much territory.)
Comment by kokotajlod on [deleted post]
Mmm, good point. Perhaps the way to salvage the concept of a singleton is to define it as the opposite of Moloch, i.e. a future is ruled by a singleton to the extent that it doesn't have Moloch-like forces causing drift towards outcomes that nobody wants, money being left on the table, etc. Or maybe we could just say a singleton is where outcomes are on or close to the Pareto frontier. Idk.
Agreed on all counts except that I like the concept of a singleton. I'd be interested to hear why you don't, if you wish to discuss it.
Thanks! How about these:
"Effective altruists believe you'll do 1000x more good if you prioritize impact"
"Effective altruists believe you'll do 1000x more good if you actually try to do the most good you can."
"Effective altruists believe you'll do 1000x more good if you shut up and calculate"
"Effective altruists believe you'll do 1000x more good if you take cost-effectiveness calculations seriously"
I think the third one is my favorite, haha, but the second one is what I think would actually be best.
Thanks! Yes, I think stock in AI companies is a significantly better metric than world GDP. I still think it's not a great metric, because some of the arguments/reasons I gave above still apply. But others don't.
I think forecasting platforms are definitely something to take seriously. I reserve the right to disagree with them sometimes though. :)
As for additional stuff we care about regarding takeoff speeds... Yeah, your comment and others are increasingly convincing me that my list wasn't exhaustive. There are a bunch of variables we care about, and there's lots of intellectual work to be done thinking about how they correlate and interact.
Am I right in thinking the conclusion is something like this:
If we get a singleton on Earth, which then has a monopoly on space colonization forever, they do the Armstrong-Sandberg method and colonize the whole universe extremely efficiently. If instead we have some sort of competitive multipolar scenario where Moloch reigns, then most of the cosmic commons gets burnt up in competition between probes on the hardscrapple frontier?
If so, that seems like a reasonably big deal. It's an argument that we should try to avoid scenarios in which powerful space tech is developed prior to a singleton forming. Perhaps this means we should hope for a fast takeoff rather than a slow takeoff, for example.
Here's what I wish the low-resolution version was:
"Effective altruists believe that if you actually try to do as much good as you can with your money or time, you'll do thousands of times more good than if you donate in the usual ways. They also think that you should do this."
OK, thanks. I'm not sure how you calculated that, but I'll take your word for it. My hypothetical observer is seeming pretty silly, then. I had been thinking that the growth prior to 1700 was fast but not much faster than at various times in the past, and in fact much slower than in 1350 (I had discounted that data point, but if we don't, it supports my point), so a hypothetical observer would be licensed to discount the growth prior to 1700 as maybe just catch-up plus noise. But by the time the data for 1700 comes in, it's clear a fundamental change has happened. I guess the modern-day parallel would be: a pandemic or economic crisis depresses growth for a bit, then there's a sustained period of growth afterwards in which the economy doubles in 7 years, with all sorts of new technology involved, but it's still respectable for economists to say it's just catch-up growth plus noise, at least until year 5 or so of the 7-year doubling. Is this fair?
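As a sanity check on the "doubles in 7 years" figure, here's my own back-of-the-envelope arithmetic (not from the comment) for the annual growth rate it implies:

```python
# Annual growth rate implied by the economy doubling every 7 years:
# solve (1 + r)**7 == 2 for r.
r = 2 ** (1 / 7) - 1
print(f"{r:.2%} per year")  # → 10.41% per year
```

So the hypothetical is a sustained ~10% annual growth rate, several times faster than typical modern growth, which is why the observer in the story should eventually notice.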
There definitely wasn't 0.14% growth over 5000 years. But according to my data there was 12% in 700, 0.23% in 900, 11% in 1000 and 1100, 47% in 1350, and 21% in 1400. So 14% fits right in; 14% sustained over a 500-year period is indeed more impressive, but not that impressive when there are multiple 100-year periods with higher growth than that worldwide (and thus presumably longer periods with higher growth in cherry-picked locations around the world).
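For concreteness, here's a quick sketch of the comparison (my own arithmetic, and it assumes the listed percentages are total growth over the stated periods, which is my reading rather than something the comment states explicitly):

```python
# Annualize total growth g over n years via r = (1 + g)**(1/n) - 1,
# to compare growth episodes of different lengths on a common scale.

def annualized(total_growth: float, years: int) -> float:
    """Annual growth rate implied by `total_growth` (0.14 = 14%) over `years`."""
    return (1 + total_growth) ** (1 / years) - 1

# 14% total growth spread over 500 years:
r_500 = annualized(0.14, 500)
# vs. 12% total growth over a single century (the year-700 figure):
r_100 = annualized(0.12, 100)

print(f"{r_500:.4%} per year")  # → 0.0262% per year
print(f"{r_100:.4%} per year")  # → 0.1134% per year
```

On this reading, the century with 12% growth implies an annual rate about four times higher than 14% spread over 500 years, which is the sense in which the 500-year figure "fits right in."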
Anyhow, the important thing is how much we disagree, and maybe it's not much. I certainly think the scenario you sketch is plausible, but I think "faster" scenarios, and scenarios with more of a disconnect between GWP and PONR, are also plausible. Thanks to you I am updating towards thinking the historical case of the Industrial Revolution provides less support for that second claim than I thought.
Thanks for the reply -- Yeah, I totally agree that GDP of the most advanced countries is a better metric than GWP, since presumably GDP will accelerate first in a few countries before it accelerates in the world as a whole. I think most of the points made in my post still work, however, even against the more reasonable metric of GDP-of-the-most-technologically-advanced-country.
Moreover, I think even the point you were specifically critiquing still stands: If AI will be like the Industrial Revolution but faster, then crazy stuff will be happening pretty early on in the curve.
Here's the data I got from Wikipedia a while back on world GDP growth rates. Year is the column on the left, annual growth rate (extrapolated) is in the column on the right.
On this data at least, 1700 is the first time an observer would say "OK yeah maybe we are transitioning to a new faster growth mode" (assuming you discount 1350 as I do as an artefact of recovering from various disasters). Moreover, it seems to contradict your claim that 0.14% growth was already high by historical standards. (Your data was for population whereas mine is for GWP; maybe that accounts for the discrepancy.)
EDIT: Also, I picked 1700 as precisely the time when "Things seem to be blowing up" first became true. My point was that the point of no return was already past by then.
I made it up, but it's inspired by reading this short story. (I have a stash of quotes I find inspirational, and sometimes I make up stuff to put in the stash. Having to come up with wedding vows was part of my motivation.)
I've seen graduation and commencement speeches for about four different universities. I think every university presents itself as helping its students change the world. Your proposal is to make this even more explicit than it already is.
I don't think jadedness really captures most of what's going on. I think people correctly realize that the world is more complicated and confusing and hard to change than they thought, and full of grey areas they don't understand rather than black and white, good guys and bad guys, etc. But to say that jadedness stopped them from trying to change the world feels off to me; rather, they naively thought it would be easy and simple and then got confused and lost interest when they realized it wasn't.
If they were actually trying to change the world -- if they were actually strongly motivated to make the world a better place, etc. -- the stuff they learn in college wouldn't stop them.