Thanks for adding comments to it!
By the way, someone wrote this Google doc in 2019 on "Stock Market prediction of transformative technology". I haven't taken a look at it in years, and neither has the author, so understandably enough, they're asking to remain nameless to avoid possible embarrassment. But hopefully it's at least somewhat relevant, in case anyone's interested.
Awesome, good to hear on all counts!
Thanks for writing this! I think market data can be a valuable source of information about the probability of various AI scenarios--along with other approaches, like forecasting tournaments, since each has its own strengths and weaknesses. I think it’s a pity that relatively little has yet been written on extracting information about AI timelines from market data, and I’m glad that this post has brought the idea to people’s attention and demonstrated that it’s possible to make at least some progress.
That said, there is one broad limitation to this analysis that hasn’t gotten quite as much attention so far as I think it deserves. (Basil: yes, this is the thing we discussed last summer….) This is that low real, risk-free interest rates are compatible with the belief
1) that there will be no AI-driven growth explosion,
as you discuss--but also with some AI-growth-explosion-compatible beliefs investors might have, including
2) that future growth could well be very fast or very slow, and
3) that growth will be fast but marginal utility in consumption will nevertheless stay high, because AI will give us such mindblowing new things to spend on (my “new products” hobby-horse).
So it seems impossible to put any upper bound (below 100%) on the probability people are assigning to near-term explosive growth purely by looking at real, risk-free interest rates.
To infer that investors believe (1), one of course has to think hard about all the alternatives (including but not limited to (2) and (3)) and rule them out. But (if I’m not mistaken) all you do along these lines is to partly rule out (2), by exploring the implications of putting a yearly probability on the economy permanently stagnating. I found that helpful. As you observe, merely (though I understand that you don't see it as “merely”!) introducing a 20% chance of stagnation by 2053 is enough to mostly offset the interest rate increases produced by an 80% chance of Cotra AI timelines. You don’t currently incorporate any negative-growth scenarios, but even a small chance of negative growth seems like it should be enough to fully offset said interest rate increase. This is because of the asymmetry produced by diminishing marginal utility: the marginal utility of an extra dollar saved can only fall to zero, if you turn out to be very rich in the future, whereas it can rise arbitrarily high if you turn out to be very poor. (You note this when you say “the real interest rate reflects the expected future economic growth rate, where importantly the expectation is taken over the risk-neutral measure”, but I think the departure from caring about what we would normally call the expected growth rate is important and kind of obscured here.)
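To make that asymmetry concrete, here's a toy calculation of the risk-free rate implied by a standard CRRA Euler equation, with entirely made-up parameters (this is just my own illustrative sketch, not anything from the post):

```python
import numpy as np

def risk_free_rate(rho, sigma, growth_factors, probs):
    """Annual risk-free rate implied by the CRRA Euler equation
    exp(-r) = exp(-rho) * E[(c1/c0)^(-sigma)]."""
    m = np.dot(probs, np.array(growth_factors, dtype=float) ** (-sigma))
    return rho - np.log(m)

rho, sigma = 0.01, 1.5  # pure time preference and relative risk aversion (made up)

# Certain 2% consumption growth:
print(risk_free_rate(rho, sigma, [1.02], [1.0]))              # ~ 0.04
# Certain 30% consumption growth:
print(risk_free_rate(rho, sigma, [1.30], [1.0]))              # ~ 0.40
# 95% chance of 30% growth, 5% chance consumption falls by 75%:
print(risk_free_rate(rho, sigma, [1.30, 0.25], [0.95, 0.05])) # ~ -0.03
```

That is, under these made-up numbers, even a 5% chance of a large consumption decline more than undoes the interest rate increase produced by near-certain explosive growth, because marginal utility rises so steeply as consumption falls.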
This seems especially relevant given that what investors should be expected to care about is the expected growth rate of their own future consumption, rather than of GDP. Even if they’re certain that AI is coming and bound to accelerate GDP growth, they could worry that it stands some chance of making a small handful of people rich and themselves poor. You write that “truly transformative AI leading to 30%+ economy-wide growth… would not be possible without having economy-wide benefits”, but this is not so clear to me. You might think that’s crazy, but given that I don’t, presumably some other investors don’t.
Anyway: this is all to say that I’m skeptical of inferring much from risk-free interest rates alone. This doesn’t mean we can’t draw inferences from market data, though! For one thing, on the hypothesis that investors believe “(2)”, we would probably expect to see the “insurance value” of bonds, and thus the equity premium, rising over time (as we do, albeit weakly). For another thing, one can presumably test how the market reacts to AI news. I’m certainly interested to see any further work people do in this direction.
Briefly, to reiterate / expand on a point made by a few other comments: I think the title is somewhat misleading, because it conflates expecting aligned AGI with expecting high growth. People could be expecting aligned AGI but (correctly or incorrectly) not expecting it to dramatically raise the growth rate.
This divergence in expectations isn’t just a technical possibility; a survey of economists attending the NBER conference on the economics of AI last year revealed that most of them do not expect AGI, when it arrives, to dramatically raise the growth rate. The survey should be out in a few weeks, and I’ll try to remember to link to it here when it is.
Perhaps just a technicality, but: to satisfy the transversality condition, an infinitely lived agent has to have a pure time preference rate of at least r(1 − σ). So if σ > 1—i.e. if the utility function is more concave than log—the time preference rate can be at least a bit negative.
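(A rough sketch of the boundedness condition I have in mind, for a CRRA agent whose consumption can grow as fast as the interest rate r: lifetime utility is

$$\int_0^\infty e^{-\rho t}\,\frac{(c_0 e^{rt})^{1-\sigma}}{1-\sigma}\,dt \;=\; \frac{c_0^{1-\sigma}}{1-\sigma}\int_0^\infty e^{[(1-\sigma)r-\rho]t}\,dt,$$

which converges only if $\rho > r(1-\sigma)$. When $\sigma > 1$, that lower bound is negative, so a somewhat negative $\rho$ is still admissible.)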
Hey, really glad you liked it so much! And thank you for emphasizing that people should consider applying even if they worry they might not fit in--I think this content should be interesting and useful to lots of people outside the small bubbles we're currently drawing from.
Wow, thank you so much, Maxime!
Thanks Bruce! Definitely agreed that it was an amazing crowd : )
Thanks James, really glad to hear you feel you got a lot out of it (including after a few months' reflection)!
Thanks David!
I’m an econ grad student and I’ve thought a bit about it. Want to pick a time to chat? https://calendly.com/pawtrammell
Thanks for writing this! For all the discussion that population growth/decline has gotten recently in EA(/-adjacent) circles, as a potential top cause area--to the point of PWI being founded and Elon Musk going on about it--there hasn't been much in-depth assessment of the case for it, and I think this goes a fair way toward filling that gap.
One comment: you write that "[f]or a rebound [in population growth] to happen, we would only need a single human group satisfying the following two conditions: long-run above-replacement fertility, and a high enough “retention rate”, that is, a large enough fraction of the descendants of this group continues to belong to the group." I think that's a good and underappreciated point, but I also think it's a bit weaker than it sounds at first, since something of a converse also holds. I.e. for permanent population decline to happen, we would only need a single human group satisfying the following two conditions: long-run below-replacement fertility, and a high enough "attraction rate", that is, a large enough fraction of people born outside the group continue to join the group. "Western civilization" has arguably been such a group for the last few generations, and it's not obvious to me that it (or its "descendants") won't continue to be for a very long time.
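(To illustrate both conditions in the crudest possible way, here's a toy two-group projection; the fertility, retention, and attraction numbers are made up purely for illustration.)

```python
# Toy projection: group A has above-replacement fertility but only keeps a
# fraction of its children ("retention"); group B has below-replacement
# fertility but absorbs the rest ("attraction"). All numbers are made up.
def project(a, b, growth_a=1.25, growth_b=0.7, retention=0.7, generations=20):
    for _ in range(generations):
        children_a = growth_a * a          # A's children this generation
        a = retention * children_a         # ...of whom only some stay in A
        b = growth_b * b + (1 - retention) * children_a  # B keeps its own and absorbs the rest
    return a + b

print(project(1.0, 1.0, retention=0.7))  # total shrinks: A's effective growth 1.25*0.7 < 1
print(project(1.0, 1.0, retention=0.9))  # total eventually rebounds: 1.25*0.9 > 1
```

With the first made-up retention rate, the high-fertility group can't outrun its losses to the other group and total population declines indefinitely; with the second, it eventually drives a rebound.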
Charlotte sort of already addresses this, but just to clarify/emphasize: the fact that prehistoric Australia, with its low population, faced long-term economic and technological (near-)stagnation doesn't imply that adding a person to prehistoric Australia would have increased its growth rate by less than adding a person to an interconnected world of 8 billion.
The historical data on different regions' population sizes and growth rates is entirely compatible with the view that adding a person to prehistoric Australia would have increased its growth rate by more than adding a person to the world today, as implied by a more standard growth model.
Cool, thanks for thinking this through!
This is super speculative of course, but if the future involves competition between different civilizations / value systems, do you think having to devote say 96% (i.e. 24/25) of a civilization's storage capacity to redundancy would significantly weaken its fitness? I guess it would depend on what fraction of total resources are spent on information storage...?
Also, by the same token, even if there is a "singleton" at some relatively early time, mightn't it prefer to take on a non-negligible risk of value drift later in time if it means being able to, say, 10x its effective storage capacity in the meantime?
(I know your 24/25 was a conservative estimate in some ways; on the other hand it only addresses the first billion years, which is arguably only a small fraction of the possible future, so hopefully it's not too biased a number to anchor on!)
Thanks, great post!
You say that "using digital error correction, it would be extremely unlikely that errors would be introduced even across millions or billions of years. (See section 4.2.) " But that's not entirely obvious to me from section 4.2. I understand that error correction is qualitatively very efficient, as you say, in that the probability of an error being introduced per unit time can be made as low as you like at the cost of only making the string of bits a certain small-seeming multiple longer (and my understanding is that multiple shrinks the longer the original string was?). But for any multiple, there's some period of time long enough that the probability of faithfully maintaining some string of bits for that long is low. Is there any chance you could offer an estimate of, say, how much longer you'd have to make a petabyte in order to get the probability of an error over a billion years below 1%?
Glad to hear you find the topics interesting!
First, I should emphasize that it's not designed exclusively for econ grad students. The opening few days try to introduce enough of the relevant background material that mathematically-minded people of any background can follow the rest of it. As you'll have seen, many of the attendees were pre-grad-school, and 18% were undergrads. My impression from the feedback forms and from the in-person experience is that some of the undergrads did struggle, unfortunately, but others got a lot out of it. Check out the materials, and if you think you'd be a good fit, you're more than welcome to apply for next year.
That said, I agree that a more undergrad-focused version of the program would be valuable. I don't have plans to make one myself in the near future, but I would like to at some point. In the meantime, if anyone reading this wants to do it, please feel free to reach out!
Thanks! And thanks for presenting!!
Thanks for this!
My understanding is that some assets claimed to have a significant illiquidity premium don’t really, including (as you mention) private equity and real estate, but some do, e.g. timber farms: on account of the asymmetric information, no one wants to buy one without prospecting it to see how the trees are coming along. Do you disagree that low-DR investors should disproportionately buy timber farms (at least if they’re rich enough to afford the transaction costs)?
Also, just to clarify my point about 100-year leases from Appendix E: I wasn’t recommending that low-DR investors actually do this! It was just supposed to be an illustration of why patient investors should be expected to own a larger fraction of the world over time.
The numbers I cited on 100-year leases came from Giglio et al. (2015) (published version here https://academic.oup.com/qje/article/130/1/1/2337985, accessible draft here http://piketty.pse.ens.fr/files/Giglioetal2013.pdf).
Haha okay, thank you! I agree that it’ll be great if clear examples of impact like this inspire more people to do work along these lines. And I appreciate that aiming for clear impact is valuable for researchers in general for making sure our claims of impact aren’t just empty stories.
FWIW though, I also think it could be misleading to base our judgment of the impact of some research too much on particular projects with clear and immediate connections to the research—especially in philosophy, since it’s further “upstream”. As this 80k article argues, most philosophers have basically no impact, but some, like Locke, Marx, and Singer, seem to have had huge impact, most of it very indirect. In some cases (Marx especially I guess) the main impacts have even come from people reading their ideas long after they died.
Anyway, happy to celebrate clear impact (including my own!), just want to emphasize that I don't think impact always has to be clear. :)
I expect that different people at GPI have somewhat different goals for their own research, and that this varies a fair bit between philosophy and economics. But for my part,
- my primary goal is to do research that philanthropists find useful, and
- my secondary goal is to do research that persuades other academics to see certain important questions in a more "EA" way, and to adjust their own curricula and research accordingly.
On the first point—and apologies if this sounds self-congratulatory or something, but I'm just providing the examples of GPI's impact that I happen to have had a hand in, in case they're helpful!—I'm (naturally) excited that my work on the allocation of philanthropic spending over time motivated Founders Pledge to launch the Patient Philanthropy Fund. I'm also glad that a few larger philanthropists have told me that it has had at least some impact on how they think about the question of how they should distribute their giving over time.
On the second point, I don't really expect to be influencing econ professors much yet since I'm still just a PhD student, but my literature review on economic growth under AI will be used in a Coursera course on the economics of AI. (To illustrate the kind of thing I think is possible, though, the philosophers already seem to have had a fair bit of success influencing curricula: professors at Yale and UMich are now offering whole courses on longtermism, largely drawing on GPI papers.)
I am not focused on attempting to change policy.
Hah, sorry to hear that! But thanks for sharing--good to have yet more evidence on this front...!
Right—the primary audience is people who already have a fair bit of background in economics.
There are now questions on Metaculus about whether this will pass:
https://www.metaculus.com/questions/8663/us-to-make-patient-philanthropy-harder-soon/
https://www.metaculus.com/questions/8664/patient-philanthropy-harder-in-the-us-by-30/
I am, thanks
Cool! I was thinking that this course would be a sort of early-stage / first-pass attempt at a curriculum that could eventually generate a textbook (and/or other materials) if it goes well and is repeated a few times, just as so many other textbooks have begun as lecture notes. But if you'd be willing to make something online / easier-to-update sooner, that could be useful. The slides and so on won't be done for quite a while, but I'll send them to you when they are.
Yup, I'll post the syllabus and slides and so on!
I'll also probably record the lectures, but not make them available except to the attendees, so that they feel more comfortable asking questions. If a lecture goes well, though, I might later use it as a template for a more polished/accessible video that is publicly available. (Some of the topics already have good lectures available online; in those cases I'd probably just link to those.)
Glad to hear you might be interested!
Thanks for pointing this out. It's tough, because (a) as GrueEmerald notes below, at least some European schools end later, and (b) it will be easier to provide accommodation in Oxford once the Oxford spring term is over (e.g. I was thinking of just renting space in one of the colleges). Once the application form is up*, I might include a When2Meet-type thing so people can put exactly what weeks they expect to be free through the summer.
*If this goes ahead; but there have been a lot of expressions of interest so far, so it probably will!
Sure. Those particular papers rely on a mathematical trick that only lets you work out how much a society should be willing to pay to avoid proportional losses in consumption. That problem turns out to differ from the x-risk problem in lots of important ways, and the trick doesn't generalize along those dimensions. But because the papers seem so close to being x-risk-relevant, I know of like half a dozen EA econ students (including me) who have tried extending them at some point before giving up…
I’m aware of at least a few other “common EA econ theorist dead ends” of this sort, and I’ll try making a list, along with something written about each of them. When this and the rest of the course material is done, I’ll post it.
Good to know, thanks!
Video recordings are among the "more polished and scalable educational materials" I was thinking might come out of this; i.e. to some extent the course lectures would serve as a trial run for any such videos. That wouldn't be for a year or so, I'm afraid. But if it happens, I'll make sure to get a good attached mike, and if I can't get my hands on one elsewhere I'll keep you in mind. : )
Thanks! A lot of good points here.
Re 1: if I'm understanding you right, this would just lower the interest rate from r to r minus the capital 'depreciation rate'. So it wouldn't change any of the qualitative conclusions, except that it would make it more plausible that the EA movement (or any particular movement) is, for modeling purposes, "impatient". But cool, that's an important point. And particularly relevant these days; my understanding is that a lot of Will's(/etc) excitement around finding megaprojects ASAP is driven by the sense that if we don't, some of the money will wander off.
Re 2: another good point. In this case I just think it would make the big qualitative conclusion hold even more strongly--no need to earn to give because money is even easier to come by, relative to labor, than the model suggests. But maybe it would be worth working through it after adding an explicit "wealth recruitment" function, to make sure there are no surprises.
Re 3: I agree, but I suspect--perhaps pessimistically--that the asymptotics of this model (if it's roughly accurate at all) bite a long time before EA wealth is a large enough fraction of global capital to push down the interest rate! Indeed, I don't think it's crazy to think they're already biting. Presumably the thing to do if you actually got to that point would be to start allocating more resources to R&D, to raise labor productivity and thus the return to capital. There are many ways I'd want to make the model more realistic before worrying about the constraints you run into when you start owning continents (a scenario for which there would presumably be plenty of time to prepare...!); but as noted, one of the extensions I'm hoping gets done before too long is to make (at least certain kinds of) R&D endogenous. So hopefully that would be at least somewhat relevant.
Thanks! I agree that this might be another pretty important consideration, though I'd want to think a bit about how to model it in a way that feels relatively realistic and non-arbitrary.
E.g. maybe we should say people start out with a prior on the effectiveness of a movement at getting good things done, and instead of just being deterministically "recruited", they decide whether to contribute their labor and/or capital to a movement partly on the basis of their evaluation of its effectiveness, after updating on the basis of its track record.
Good point, thanks!
Good question! Yes, an ideas constraint absolutely could make sense.
My current favorite way to capture that possibility would be to model funding opportunities like consumer products as I do here. Pouring more capital and labor into existing funding opportunities might just bring you to an upper bound of impact, whereas thinking of new funding opportunities would raise the upper bound.
This is also one of the extensions I'm hoping to add to this model before too long. If you or anyone else reading this would be interested in working on that, especially if you have an econ background, let me know!
Thanks!
Nice to see this coming along! How many visitors has utilitarianism.net been getting?
Thanks!
Sorry, what’s REI work?
I think this is a valuable contribution—thanks for writing it! Among other things, it demonstrates that conclusions about when to give are highly sensitive to how we model value drift.
In my own work on the timing of giving, I’ve been thinking about value drift as a simple increase to the discount rate: each year philanthropists (or their heirs) face some x% chance of running off with the money and spending it on worthless things. So if the discount rate would have been d% without any value drift risk, it just rises to (d+x)% given the value drift risk. If the learning that will take place over the next year (and other reasons to wait, e.g. a positive interest rate) outweigh this (d+x)% (plus the other reasons why resources will be less valuable next year), it’s better to wait. But here we see that, if values definitely change a little each year, it might be best to spend much more quickly than if (as I’ve been assuming) they probably don’t change at all but might change a lot, since in the former case, holding onto resources allows for a kind of slippery slope in which each year you change your judgments about whether or not to defer to the next year. So I’m really glad this was written and I look forward to thinking about it more.
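(To spell out the comparison I have in mind under that "increase to the discount rate" framing, a toy example with entirely made-up numbers:)

```python
# Give-now vs. wait-a-year, treating value drift as an all-or-nothing event
# that simply adds x to the discount rate. All numbers are made up.
r = 0.05          # interest rate: resources grow 5% if we wait
learning = 0.04   # waiting makes each dollar 4% more effective (learning, better info)
x = 0.10          # chance per year the resources are lost to value drift entirely
d = 0.02          # other reasons next year's spending is worth less per dollar

value_of_giving_now = 1.0
value_of_waiting = (1 - x) * (1 + r) * (1 + learning) * (1 - d)
print(value_of_waiting)  # ~0.96 < 1, so under these numbers it's better to give now
```

Part of what I take from the thesis is that the gradual-drift case doesn't reduce to a simple adjustment of this sort.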
One comment on the thesis itself: I think it’s a bit confusing at the beginning, where it says that decision-makers face a tradeoff between “what is objectively known about the world and what they personally believe is true.” The tradeoff they face is between acquiring information and maintaining fidelity to their current preferences, not to their current beliefs. The rest of the thesis is consistent with framing the problem as an information-vs.-preference-fidelity tradeoff, so I think this wording is just a holdover from a previous version of the thesis which framed things differently. But (Max) let me know if I’m missing something.
Sorry, no, that's clear! I should have noted that you say that too.
The point I wanted to make is that your reason for saving as an urgent longtermist isn't necessarily something like "we're already making use of all these urgent opportunities now, so might as well build up a buffer in case the money is gone later". You could just think that now isn't a particularly promising time to spend, period, but that there will be promising opportunities later this century, and still be classified as an urgent longtermist.
That is, an urgent longtermist could have stereotypically "patient longtermist" beliefs about the quality of direct-impact spending opportunities available in December 2020.
Thanks! I was going to write an EA Forum post at some point also trying to clarify the relationship between the debate over "patient vs urgent longtermism" and the debate over giving now vs later, and I agree that it's not as straightforward as people sometimes think.
On the one hand, as you point out, one could be a "patient longtermist" but still think that there are capacity-building sorts of spending opportunities worth funding now.
But I'd also argue that, if urgent longtermism is defined roughly as the view that there will be critical junctures in the next few decades, as you put it, then an urgent longtermist could still think it's worth investing now, so that more money will be spent near those junctures in a few decades. Investing to give in, say, thirty years is still pretty unusual behavior, at least for small donors, but totally compatible with "urgent longtermism" / "hinge of history"-type views as they're usually defined.
Sure, I see how making people more patient has more-or-less symmetric effects on risks from arms race scenarios. But this is essentially separate from the global public goods issue, which you also seem to consider important (if I'm understanding your original point about "even the largest nation-states being only a small fraction of the world"), which is in turn separate from the intergenerational public goods issue (which was at the top of my own list).
I was putting arms race dynamics lower than the other two on my list of likely reasons for existential catastrophe. E.g. runaway climate change worries me a bit more than nuclear war; and mundane, profit-motivated tolerance for mistakes in AI or biotech (both within firms and at the regulatory level) worry me a bit more than the prospect of technological arms races.
That's not a very firm belief on my part--I could easily be convinced that arms races should rank higher than the mundane, profit-motivated carelessness. But I'd be surprised if the latter were approximately none of the problem.
I agree that the world underinvests in x-risk reduction (/overspends on activities that increase x-risk as a side effect) for all kinds of reasons. My impression would be that the two most important reasons for the underinvestment are that existential safety is a public good on two fronts:
- long-term (but people just care about the short term, and coordination with future generations is impossible), and
- global (but governments just care about their own countries, and we don't do global coordination well).
So I definitely agree that it's important that there are many actors in the world who aren't coordinating well, and that accounting for this would be an important next step.
But my intuition is that the first point is substantially more important than the second, and so the model assumes away much but not close to all of the problem. If the US cared about the rest of the world equally, that would multiply its willingness to pay for an increment of x-risk mitigation by maybe an order of magnitude. But if it had zero pure time preference but still just cared about what happened within its borders (or something), that would seem to multiply the WTP by many orders of magnitude.
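(Rough numbers behind that intuition, with a made-up pure time preference rate; just an illustration, not a calibration:)

```python
import math

# Scope: caring about the whole world rather than just the US multiplies benefits
# by roughly world/US population, ~7.8e9 / 3.3e8 ~ 24x (or ~4x by GDP) --
# an order of magnitude, give or take.
print(7.8e9 / 3.3e8)

# Time: with a pure time preference rate of 2%/yr, a benefit t years out is
# down-weighted by exp(-0.02*t); removing that weighting multiplies its value by:
for t in (100, 500, 1000):
    print(t, math.exp(0.02 * t))  # ~7x, ~2e4x, ~5e8x
```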
Thanks! No need to inflict another recording of my voice on the world for now, I think, but glad to hear you like how the project is coming.
The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that’s quite different from the point that absent cosmically exceptional short-term impact the patient longtermist consequentialist would save. Utilitarian policymakers might implement more redistribution too. Given policymakers as they are, we’re still left with the question of how utilitarian philanthropists with their fixed budgets should prioritize between filling the redistribution gap and filling the investment gap.
In any event, if you/Owen have any more unpublished pre-2015 insights from private correspondence, please consider posting them, so those of us who weren’t there don’t have to go through the bother of rediscovering them. : )
Thanks! I agree that people in EA—including Christian, Leopold, and myself—have done a fair bit of theory/modeling work at this point which would benefit from relevant empirical work. I don’t think this is what either of the current new economists will engage in anytime soon, unfortunately. But I don’t think it would be outside a GPI economist’s remit, especially once we’ve grown.
Sorry--maybe I’m being blind, but I’m not seeing what citation you’d be referring to in that blog post. Where should I be looking?
Thanks, I definitely agree that there should be more prioritization research. (I work at GPI, so maybe that’s predictable.) And I agree that for all the EA talk about how important it is, there's surprisingly little really being done.
One point I'd like to raise, though: I don’t know what you’re looking for exactly, but my impression is that good prioritization research will in general not resemble what EA people usually have in mind when they talk about “cause prioritization”. So when putting together an overview like this, one might overlook some of even what little prioritization research is being done.
In my experience, people usually imagine a process of explicitly listing causes, thinking through and evaluating the consequences of working in each of them, and then ranking the results (kind of like GiveWell does with global poverty charities). I expect that the main reason more of this doesn’t exist is that, when people try to start doing this, they typically conclude it isn’t actually the most helpful way to shed light on which cause EA actors should focus on.
I think that, more often than not, a more helpful way to go about prioritizing is to build a model of the world, just rich enough to represent all the levers between which you’re considering and the ways you expect them to interact, and then to see how much better the world gets when you divide your resources among the levers this way or that. By analogy, a “naïve” government’s approach to prioritizing between, say, increasing this year’s GDP and decreasing this year’s carbon emissions would be to try to account explicitly for the consequences of each and to compare them. Taking the lowering emissions side, this will produce a tangled web of positive and negative consequences, which interact heavily both with each other and with the consequences of increasing GDP: it will mean
- less consumption this year,
- less climate damage next year,
- less accumulated capital next year with which to mitigate climate damage,
- more of an incentive for people next year to allow more emissions,
- more predictable weather and therefore easier production next year,
- …but this might mean more (or less) emissions next year,
- …and so on.
It quickly becomes clear that finishing the list and estimating all its items is hopeless. So what people do instead is write down an “integrated assessment model”. What the IAM is ultimately modeling, albeit in very low resolution, is the whole world, with governments, individuals, and various economic and environmental moving parts behaving in a way that straightforwardly gives rise to the web of interactions that would appear on that infinitely long list. Then, if you’re, say, a government in 2020, you just solve for the policy—the level of the carbon cap, the level of green energy subsidization, and whatever else the model allows you to consider—that maximizes your objective function, whatever that may be. What comes out of the model will be sensitive to the construction of the model, of course, and so may not be very informative. But I'd say it will be at least as informative as an attempt to do something that looks more like what people sometimes seem to mean by cause prioritization.
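To make the kind of exercise I mean concrete, here's a deliberately minimal sketch in the same spirit: a toy two-period "model of the world" with a single lever (the share of this year's output spent on abatement), solved by brute force. Every functional form and parameter is made up purely for illustration.

```python
import numpy as np

def welfare(a, y0=1.0, sigma=1.5, beta=0.97):
    """Two-period welfare as a function of the abatement share a (all made up)."""
    c0 = y0 * (1 - a)                              # consumption this year
    damage = 0.5 * (1 - a) ** 2                    # next year's climate damage, lower if we abate
    c1 = 1.02 * y0 * (1 - damage)                  # next year's output (all consumed)
    u = lambda c: c ** (1 - sigma) / (1 - sigma)   # CRRA utility
    return u(c0) + beta * u(c1)

grid = np.linspace(0.0, 0.9, 901)
best = grid[np.argmax([welfare(a) for a in grid])]
print(best)  # "optimal" abatement share in this toy model (~0.2 with these numbers)
```

The point isn't the number that comes out, of course, but that the web of interactions above falls out of the model automatically instead of having to be listed and estimated item by item.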
If the project of “writing down stylized models of the world and solving for the optimal thing for EAs to do in them” counts as cause prioritization, I’d say two projects I’ve had at least some hand in over the past year count: (at least sections 4 and 5.1 of) my own paper on patient philanthropy and (at least section 6.3 of) Leopold Aschenbrenner’s paper on existential risk and growth. Anyway, I don't mean to plug these projects in particular, I just want to make the case that they’re examples of a class of work that is being done to some extent and that should count as prioritization research.
…And examples of what GPI will hopefully soon be fostering more of, for whatever that’s worth! It’s all philosophy so far, I know, but my paper and Leo’s are going on the GPI website once they’re just a bit more polished. And we’ve just hired two econ postdocs I’m really excited about, so we’ll see what they come up with.
Hanson has advocated for investing for future giving, and I don't doubt he had this intuition in mind. But I'm actually not aware of any source in which he says that the condition under which zero-time-preference philanthropists should invest for future giving is that the interest rate incorporates beneficiaries' pure time preference. I only know that he's said that the relevant condition is when the interest rate is (a) positive or (b) higher than the growth rate. Do you have a particular source in mind?
Also, who made the "pure time preference in the interest rate means patient philanthropists should invest" point pre-Hanson? (Not trying to get credit for being the first to come up with this really basic idea, I just want to know whom to read/cite!)
That post just makes the claim that "all we really need are positive interest rates". My own point which you were referring to in the original comment is that, at least in the context of poverty alleviation (/increasing human consumption more generally), what we need is pure time preference incorporated into interest rates. This condition is neither necessary nor sufficient for positive interest rates.
Hanson's post then says something which sounds kind of like my point, namely that we can infer that it's better for us as philanthropists to invest than to spend if we see our beneficiaries doing some of both. But I could never figure out what he was saying exactly, or how it was compatible with the point he was trying to make that all we really need are positive interest rates.
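(To spell out the condition I mean, in a stylized Ramsey setting—this is my own sketch, not a reconstruction of Hanson's argument: if beneficiaries' consumption grows at rate $g$ and they have pure time preference $\delta$, the equilibrium interest rate is roughly $r = \delta + \sigma g$. A philanthropist with no pure time preference still discounts beneficiaries' future consumption at rate $\sigma g$, since they'll be richer. So the value of investing a dollar for $t$ years and transferring it then, relative to transferring it now, is

$$e^{rt}\,e^{-\sigma g t} = e^{(r-\sigma g)t} = e^{\delta t},$$

which exceeds 1 if and only if $\delta > 0$—regardless of whether $r$ is positive or above $g$.)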
Could you elaborate?