Posts

Ben Garfinkel's Shortform 2020-09-03T15:55:56.188Z · score: 5 (1 votes)
Does Economic History Point Toward a Singularity? 2020-09-02T12:48:57.328Z · score: 114 (49 votes)
AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher 2020-07-13T16:17:45.913Z · score: 87 (30 votes)
Ben Garfinkel: How sure are we about this AI stuff? 2019-02-09T19:17:31.671Z · score: 84 (43 votes)

Comments

Comment by ben-garfinkel on Does Economic History Point Toward a Singularity? · 2020-09-13T15:17:23.557Z · score: 3 (2 votes) · EA · GW

The world GDP growth rate also seems to have been increasing during the immediate lead-up to the Industrial Revolution, as well as during the following century, although the exact numbers are extremely uncertain. The growth rate most likely stabilized around the middle of the 20th century.

Comment by ben-garfinkel on Does Economic History Point Toward a Singularity? · 2020-09-13T10:28:41.779Z · score: 5 (3 votes) · EA · GW

The growth rate of output per person definitely has been roughly constant in developed countries (esp. the US) in the 20th century. In the doc, I'm instead talking about the growth rate of total output, globally, from about 1600 to 1950.

(So the summary may give the wrong impression. I ought to have suggested a tweak to make it clearer.)

Comment by ben-garfinkel on Does Economic History Point Toward a Singularity? · 2020-09-12T22:16:05.446Z · score: 13 (4 votes) · EA · GW

Addendum:

In the linked doc, I mainly contrast two different perspectives on the Industrial Revolution.

  • Stable Dynamics: The core dynamics of economic growth were stable between the Neolithic Revolution and the 20th century. Growth rates increased substantially around the Industrial Revolution, but this increase was nothing new. In fact, growth rates were generally increasing throughout this lengthy period (albeit in a stochastic fashion). The most likely cause for the upward trend in growth rates was rising population levels: larger populations could come up with larger numbers of new ideas for how to increase economic output.

  • Phase Transition: The dynamics of growth changed over the course of the Industrial Revolution. There was some barrier to growth that was removed, some tipping point that was reached, or some new feedback loop that was introduced. There was a relatively brief phase change from a slow-growth economy to a fast-growth economy. The causes of this phase transition are somewhat ambiguous.

In the doc, I essentially argue that existing data on long-run growth doesn’t support the “stable dynamics” perspective over the “phase transition” perspective. I think that, due to data quality issues more than anything else, we are in a state of empirical ignorance.

I don’t really say anything, though, about the other reasons people might have for finding one perspective more plausible than the other.[1] Since I personally lean toward the “phase change” perspective, despite its relative inelegance and vagueness, I thought it might also be useful for me to write up a more detailed comment explaining my sympathy for it.


Here, I think, are some points that count in favor of the phase change perspective.

1. So far as I can tell, most prominent economic historians favor the phase change perspective.

For example, here is Joel Mokyr describing his version of the phase change perspective (quote stitched together from two different sources):

The basic facts are not in dispute. The British Industrial Revolution of the late eighteenth century unleashed a phenomenon never before even remotely experienced by any society. Of course, innovation has taken place throughout history. Milestone breakthroughs in earlier times—such as water mills, the horse collar, and the printing press—can all be traced more or less, and their economic effects can be assessed. They appeared, often transformed an industry affected, but once incorporated, further progress slowed and sometimes stopped altogether. They did not trigger anything resembling sustained technological progress....

The early advances in the cotton industry, iron manufacturing, and steam power of the years after 1760 became in the nineteenth century a self-reinforcing cascade of innovation, one that is still very much with us today and seems to grow ever more pervasive and powerful. If economic growth before the Industrial Revolution, such as it was, was largely driven by trade, more effective markets and improved allocations of resources, growth in the modern era has been increasingly driven by the expansion of what was known in the age of Enlightenment as “useful knowledge” (A Culture of Growth, p. 3-4).

Technological modernity is created when the positive feedback from the two types of knowledge [practical knowledge and scientific knowledge] becomes self-reinforcing and autocatalytic. We could think of this as a phase transition in economic history, in which the old parameters no longer hold, and in which the system’s dynamics have been unalterably changed ("Long-Term Economic Growth and the History of Technology", p. 12).

And here’s Robert Allen telling another phase transition story (quotes stitched together from Global Economic History: A Very Short Introduction):

The greatest achievement of the Industrial Revolution was that the 18th-century inventions were not one-offs like the achievements of earlier centuries. Instead, the 18th-century inventions kicked off a continuing stream of innovations.

The explanation [for the Industrial Revolution] lies in Britain’s unique structure of wages and prices. Britain’s high-wage, cheap-energy economy made it profitable for British firms to invent and use the breakthrough technologies of the Industrial Revolution.

The Western countries have experienced a development trajectory in which higher wages led to the invention of labour-saving technology, whose use drove up labour productivity and wages with it. The cycle repeats.

I’m not widely read in this area, but I don’t think I’ve encountered any prominent economic historians who favor the “Stable Dynamics” perspective (although some growth theorists appear to).[2]

2. The stable dynamics perspective is in tension with the extremely “local” nature of the Industrial Revolution.

Although a number of different countries were experiencing efflorescences in the early modern period, the Industrial Revolution was a pretty distinctly British (or, more generously, Northern European) phenomenon. An extremely disproportionate fraction of the key innovations were produced within Britain. During the same time period, for example, China is typically thought to have experienced only negligible technological progress (despite being similarly ‘advanced’ and having something like 30x more people). Economic historians also typically express strong skepticism that any country other than Britain (or at best its close neighbors) was moving toward an imminent industrial revolution of its own. See, for example, the passages I quote in this comment on the economy of early modern China.

This observation fits decently well with phase transition stories, such as Robert Allen’s: the British economy achieved ignition, then the fire spread to other states. The observation seems to fit less well, though, with the “stable dynamics” perspective. Why should the Industrial Revolution have happened in a very specific place, which held only a tiny portion of the world’s population and which was until recently only an economic ‘backwater’?

Mokyr expresses skepticism on similar grounds (p. 36-37).

3. There has been vast cross-country variation in growth rates, which isn’t explained by differences in scale

In modern times, there are many examples of countries that have experienced consistently low growth rates relative to others. This suggests that there can be fairly persistent barriers to growth, other than insufficient scale, which cause growth rates to be substantially lower than they otherwise would be. As an extreme example, South Korea’s GDP growth rate may have been about an order-of-magnitude higher than North Korea’s for much of its history: despite many other similarities, institutional barriers were sufficient to keep North Korea’s growth rate far lower. (The start of South Korea’s “growth miracle” also seems like it could be pretty naturally described as a phase transition.)

At least in principle, it seems plausible that some barriers to growth -- institutional, cultural, or material -- affected all countries before the Industrial Revolution but only affected some afterward. Along a number of dimensions, states that are growing quickly today used to be a lot more similar to states that are growing slowly today. They also faced a number of barriers to growth (e.g. the need to rely entirely on ‘organic’ sources of energy; the inability to copy or attract investments from ultra-wealthy countries; etc.) that even the poorest countries typically don’t have today.

Acemoglu makes a similar point, in his growth textbook, when talking about the Kremer model (p. 114):

This discussion therefore suggests that models based on economies of scale of one sort or another do not provide us with fundamental causes of cross-country income differences. At best, they are theories of growth of the world taken as a whole. Moreover, once we recognize that the modern economic growth process has been uneven, meaning that it took place in some parts of the world and not others, the appeal of such theories diminishes further. If economies of scale were responsible for modern economic growth, this phenomenon should also be able to explain when and where this process of economic growth started. Existing models based on economies of scale do not. In this sense, they are unlikely to provide the fundamental causes of modern economic growth.

4. It’s not too hard to develop formal growth models that exhibit phase transitions

For example, there are models that formalize Robert Allen’s theory, “two-sector” models in which the industrial sector overtakes the agricultural sector (and causes the growth rate to increase) once a certain level of technological maturity is reached, models in which physical capital and human capital are complementary (and a shock that increases capital-per-worker makes it rational to start investing in human capital), and models in which insufficient property protections limit the rate of growth (by capping incentives to innovate and invest). Here, for instance, is a classic two-sector model.

I don’t necessarily “buy” any of these specific models, but they do suffice to show that there are a number of different ways you could potentially get phase transitions in economic growth processes.
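
To illustrate the general point, here is a minimal two-sector simulation. This is a toy sketch of my own, not a reconstruction of Allen's model or of any of the specific models linked above, and every parameter value is made up; it only shows that fixed dynamics can produce an abrupt-looking shift in the measured growth rate once one sector overtakes the other.

```python
import numpy as np

# Toy two-sector economy (an illustrative sketch only -- not Allen's model or
# any of the specific models mentioned above; all parameter values are made up).
# Agriculture has diminishing returns on fixed land; industry has constant
# returns and a productivity that grows through learning-by-doing in
# proportion to the industrial employment share.

T, L = 1.0, 1.0             # fixed land and total labour
alpha = 0.5                 # land share in agriculture
A_ag, A_ind = 1.0, 0.1      # initial sectoral productivities
g_ag, learn = 0.001, 0.05   # slow exogenous agricultural growth; learning rate

output = []
for year in range(600):
    # Crude allocation rule: industry's labour share rises with its relative
    # productivity (kept away from the corners for simplicity).
    s = np.clip(A_ind / (A_ind + A_ag), 0.01, 0.99)
    Y = A_ag * ((1 - s) * L) ** (1 - alpha) * T ** alpha + A_ind * s * L
    output.append(Y)
    A_ag *= 1 + g_ag
    A_ind *= 1 + learn * s  # learning-by-doing: growth scales with industry's size

growth = np.diff(np.log(output))
print(f"average growth, years   1-100: {100 * growth[:100].mean():.2f}% per year")
print(f"average growth, years 501-600: {100 * growth[-100:].mean():.2f}% per year")
# The measured growth rate stays near zero for centuries, then jumps to roughly
# the industrial learning rate once industry overtakes agriculture.
```

Nothing about the rules changes midway through; a fixed two-sector structure is enough to generate a lasting jump in the aggregate growth rate once the industrial sector takes over.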

5. The key forces driving and constraining post-industrial growth seem fairly different from the key forces driving and constraining pre-industrial growth

Technological and (especially) scientific progress, or what Mokyr calls “the growth of useful knowledge,” seems to play a much larger role in driving post-industrial growth than it did in driving pre-industrial growth. For example, based on my memory of the book The Economic History of China, a really large portion of China’s economic growth between 200AD and 1800AD seems to be attributed to new crops (first early ripening rice from Champa, then American crops); to land reclamation (e.g. turning marshes into rice paddies; terracing hills; planting American crops where rice and wheat wouldn’t grow); and to the more efficient allocation of resources (through expanding markets or changes in property rights). The development or improvement of machines, or even the development or improvement of agricultural and manufacturing practices, doesn’t seem to have been a comparably big deal. The big growth surge that both Europe and China experienced in the early modern period, and which may have partly set Britain up for its Industrial Revolution, also seems to be mostly a matter of market expansion and new crops.

For example, Mokyr again:

The [historical] record is that despite many significant, even pathbreaking innovations in many societies since the start of written history, [technological progress] has not really been a major factor in economic growth, such as it was, before the Industrial Revolution.... The Industrial Revolution, then, can be regarded not as the beginnings of growth altogether but as the time at which technology began to assume an ever-increasing weight in the generation of growth and when economic growth accelerated dramatically ("Long-Term Economic Growth and the History of Technology," pp. 1116-1118).

There are also a couple obvious material constraints that apply much more strongly in pre-industrial than post-industrial societies. First, agricultural production is limited by the supply of fertile land in a way that industrial production (or the production of services) is not; if you double capital and labor, without doubling land, agricultural production will tend to exhibit more sharply diminishing returns.
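
(To make the land point concrete with a generic functional form -- this is my own illustration, not something taken from the sources above: with Cobb-Douglas output $Y = A\,K^{\alpha}L^{\beta}T^{\,1-\alpha-\beta}$ and land $T$ fixed, doubling capital and labor scales output by $2^{\alpha+\beta} < 2$, so output per worker falls as population doubles; in a sector where land's share $1-\alpha-\beta$ is negligible, output roughly doubles instead.)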

Second, and probably more importantly, pre-industrial economic production relies almost entirely on ‘organic’ sources of energy. If you want to make something, or move something, then the necessary energy will typically come from: (a) you eating plants, (b) you feeding plants to an animal, or (c) you burning plants. Wind and water can also be used, but you have no way of transporting or storing the energy produced; you can’t, for example, use the energy from a waterwheel to power something that’s not right next to the waterwheel. This all makes it just really, really hard to increase the amount of energy used per person beyond a certain level. Transitioning away from ‘organic’ sources of energy to fossil fuels, and introducing means of storing/transmitting/transforming energy, intuitively seems to remove a kind of soft ceiling on growth. (Some people who have made a version of this point are: Vaclav Smil, Ian Morris, John Landers, and Jack Goldstone. It's also sort of implicit in Robert Allen's model.) It’s especially notable that, for all but the most developed countries, total energy consumption within a state tends to be fairly closely associated with total economic output.


To be clear, this super long addendum has only focused on reasons to take the “phase transition” hypothesis seriously. I’ve only presented one side. But I thought it might still be useful to do this, since the reasons to take the “phase transition” perspective seriously are probably less obvious than the reasons to take the “stable dynamics” perspective seriously.


  1. Of course, my descriptions of these two perspectives are far from mathematically precise. There is some ambiguity about what it means for one perspective to be “more true” than the other. This paper by Chad Jones, for example, describes a model that combines bits of the two perspectives. ↩︎

  2. As another point of clarification, growth theory work in this vein does tend to suggest that important changes happened during the nineteenth century: once productivity growth becomes fast enough, and people start to leave the Malthusian state, certain new dynamics come into play. However, the high rate of growth in the nineteenth century is understood to result from growth dynamics that have been essentially stable since early human history. ↩︎

Comment by ben-garfinkel on Does Economic History Point Toward a Singularity? · 2020-09-11T01:47:12.048Z · score: 3 (2 votes) · EA · GW

Actually, I believe the standard understanding of "technology" in economics includes institutions, culture, etc.--whatever affects how much output a society wrings from a given input. So all of those are by default in Kremer's symbol for technology, A. And a lot of those things plausibly could improve faster, in the narrow sense of increasing productivity, if there are more people, if more people also means more societies (accidentally) experimenting with different arrangements and then setting examples for others; or if such institutional innovations are prodded along by innovations in technology in the narrower sense, such as the printing press.

Just on this point:

For the general Kremer model, where the idea production function is dA/dt = a(P^b)(A^c), higher levels of technology do support faster technological progress if c > 0. So you're right to note that, for Kremer's chosen parameter values, the higher level of technology in the present day is part of the story for why growth is faster today.

Although it's not an essential part of the story: if c = 0, growth is still hyperbolic, with the growth rate proportional to P^(2/3) during the Malthusian period. I suppose I'm also skeptical that institutional and cultural change, at least, are well-modeled as resulting from the accumulation of new ideas: beneath the randomness, the forces shaping them typically strike me as much more structural.
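
(To spell out where the 2/3 comes from -- a sketch, assuming b = 1 and the Malthusian closure in Kremer's setup as I understand it, i.e. output Y = A·P^α·T^(1-α) with labor share α and income pinned at subsistence, so that P ∝ A^(1/(1-α)):

$$\frac{\dot{P}}{P} \;=\; \frac{1}{1-\alpha}\,\frac{\dot{A}}{A} \;=\; \frac{1}{1-\alpha}\,\frac{aP}{A} \;\propto\; \frac{P}{P^{1-\alpha}} \;=\; P^{\alpha},$$

so even with c = 0 the population growth rate rises with the population level, and a labor share of α = 2/3 gives the P^(2/3) mentioned above.)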

Comment by ben-garfinkel on Does Economic History Point Toward a Singularity? · 2020-09-11T00:16:07.773Z · score: 3 (2 votes) · EA · GW

Hi David,

Thank you for this thoughtful response — and for all of your comments on the document! I agree with much of what you say here.

(No need to respond to the below thoughts, since they somehow ended up quite a bit longer than I intended.)

Kahneman and Tversky showed that incorporating perspectives that neglect inside information (in this case the historical specifics of growth accelerations) can reduce our ignorance about the future--at least, the immediate future. This practice can improve foresight both formally--leading experts to take weighted averages of predictions based on inside and outside views--and informally--through the productive friction that occurs when people are challenged to reexamine assumptions. So while I think the feeling expressed in the quote is understandable, it's also useful to challenge it.

This is well put. I do agree with this point, and don’t want to downplay the value of taking outside view perspectives.

As I see it, there are a couple of different reasons to fit hyperbolic growth models — or, rather, models of form (dY/dt)/Y = aY^b + c — to historical growth data.

First, we might be trying to test a particular theory about the causes of the Industrial Revolution (Kremer’s “Two Heads” theory, which implies that pre-industrial growth ought to have followed a hyperbolic trajectory).[1] Second, rather than directly probing questions about the causes of growth, we can use the fitted models to explore outside view predictions — by seeing what the fitted models imply when extrapolated forward.

I read Kremer’s paper as mostly being about testing his growth theory, whereas I read the empirical section of your paper as mostly being about outside-view extrapolation. I’m interested in both, but probably more directly interested in probing Kremer’s growth theory.

I think that different aims lead to different emphases. For example: For the purposes of testing Kremer’s theory, the pre-industrial (or perhaps even pre-1500) data is nearly all that matters. We know that the growth rate has increased in the past few hundred years, but that’s the thing various theories are trying to explain. What distinguishes Kremer’s theory from the other main theories — which typically suggest that the IR represented a kind of ‘phase transition’ — is that Kremer’s predicts an upward trend in the growth rate throughout the pre-modern era.[2] So I think that’s the place to look.

On the other hand, if the purpose of model fitting is trend extrapolation, then there’s no particular reason to fit the model only to the pre-modern datapoints; this would mean pointlessly throwing out valuable information.

A lot of the reason I’m skeptical of Kremer’s model is that it doesn’t seem to fit very well with the accounts of economic historians and their descriptions of growth dynamics. His model seems to leave out too much and to treat the growth process as too homogeneous across time. “Growth was faster in 1950AD than in 10,000BC mainly because there were more total ideas for new technologies each year, mainly because there were more people alive” seems really insufficient as an explanation; it seems suspicious that the model leaves out all of the other salient differences that typically draw economic historians’ attention. Are changes in institutions, culture, modes of production, and energetic constraints really all secondary enough to be slipped into the error term?[3]

But one definitely doesn’t need to ‘believe’ the Kremer model — which offers one explanation for why long-run growth would follow a consistent hyperbolic trajectory — to find it useful to make growth extrapolations using simple hyperbolic models. The best case for giving significant weight to the outside view extrapolations, as I understand it, is something like (non-quote):

We know that growth rates permanently increased in the centuries around the Industrial Revolution. Constant exponential growth models therefore fit long-run growth data terribly. Models of form (dY/dt)/Y = aY^b can fit the data much better, since they allow the growth rate to increase. If we fit one of these models to the long-run growth data (with an error term to account for stochasticity) we find that b > 0, implying hyperbolically increasing growth rates. Extrapolated forward, this implies that infinite rates are nearly inevitable in the future. While we of course know that growth won’t actually become infinite, we should still update in the direction of believing that much faster growth is coming, because this is the simplest model that offers an acceptably good fit, and because we shouldn’t be too confident in any particular inside view model of how economic growth works.

I do think this line of thinking makes sense, but in practice don’t update that much. While I don’t believe any very specific ‘inside view’ story about long-run growth, I do find it easy to imagine that there was a phase change of one sort or another around the Industrial Revolution (as most economic historians seem to believe). The economy has also changed enough over the past ten thousand years to make it intuitively surprising to me that any simple unified model — without phase changes or piecewise components — could actually do a good job of capturing growth dynamics across the full period.

I think that a more general prior might also be doing some work for me here. If there’s some variable whose growth rate has recently increased substantially, then a hyperbolic model — (dY/dt)/Y = a*Y^b, with b > 0 — will often be the simplest model that offers an acceptable fit. But I’m suspicious that extrapolating out the hyperbolic model will typically give you good predictions. It will more often turn out to be the case that there was just a kind of phase change.
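
(To spell out the "infinite rates" step: dropping the noise term and the constant c, a model of this form integrates to

$$\frac{dY}{dt} = aY^{\,1+b} \;\;\Longrightarrow\;\; Y(t) = \left(Y_0^{-b} - a\,b\,t\right)^{-1/b},$$

which diverges at the finite time t* = 1/(a·b·Y_0^b) whenever b > 0, while b = 0 collapses to ordinary exponential growth. So the estimated sign of b is exactly what separates "growth merely continues" from "growth reaches a singularity in finite time.")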

To be clear, the paper seems to shift between two definitions of hyperbolic growth: usually it's B = 1 ("proportional"), but in places it's B > 0. I think the paper could easily be misunderstood to be rejecting B > 0 (superexponential growth/singularity in general) in places where it's actually rejecting B = 1 (superexponential growth/singularity with a particular speed). This is the sense in which I'd prefer less specificity in the statement of the hyperbolic growth hypothesis.

I think this is a completely valid criticism.

I agree that B > 0 is the more important hypothesis to focus on (and it’s of course what you focus on in your report). I started out investigating B = 1, then updated parts of the document to be about B > 0, but didn’t ultimately fully switch it over. Part of the issue is that B = 0 and B = 1 are distinct enough to support at least weak/speculative inferences from the radiocarbon graphs. This led me to mostly focus on B > 0 when talking about the McEvedy data, but focus on B = 1 when talking about the radiocarbon data. I think, though, that this mixing-and-matching has resulted in the document being somewhat confusing and potentially misleading in places.

To be more concrete, look back at the qualifiers in the HGH statement: "tended to be roughly proportional." Is the HGH, so stated, falsifiable? Or, more realistically, can it be assigned a p value? I think the answer is no, because there is no explicitly hypothesized, stochastic data generating process.

I think that this is also a valid criticism: I never really say outright what would count as confirmation, in my mind.

Supposing we had perfectly accurate data, I would say that a necessary condition for considering the data “consistent” with the hypothesis is something like: “If we fit a model of form (dP/dt)/P = a*P^b to population data from 5000BC to 1700AD, and use a noise term that models stochasticity in a plausible way, then the estimated value of b should not be significantly less than .5”

I only ran this regression using normal noise terms, rather than using the more theoretically well-grounded approach you’ve developed, so it’s possible the result would come out different if I reran it. But my concerns about data quality have also had a big influence on my sloppiness tolerance here: if a statistical result concerning (specifically) the pre-modern subset of the data is sufficiently sensitive to model specification, and isn’t showing up in bright neon letters, then I’m not inclined to give it much weight.

(These regression results ultimately don’t have a substantial impact on my views, in either direction.)
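
For concreteness, here is roughly the kind of check I have in mind, as a minimal sketch: made-up placeholder population figures rather than the real estimates, ordinary nonlinear least squares with normal errors on the growth rates, and a test of whether the estimated b is significantly below the 0.5 bar.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data: (year, population in millions) for 5000BC-1700AD.
# Illustrative numbers only, not an actual historical series.
years = np.array([-5000, -4000, -3000, -2000, -1000, 0, 500, 1000, 1500, 1700])
pop = np.array([5., 7., 14., 27., 50., 170., 190., 265., 425., 610.])

# Annualised growth rate over each interval, attributed to the interval's
# starting population (one simple convention among several possible ones).
growth = np.diff(np.log(pop)) / np.diff(years)
P = pop[:-1]

def model(P, a, b):
    # (dP/dt)/P = a * P^b, estimated by nonlinear least squares,
    # which corresponds to additive normal noise on the growth rates.
    return a * np.power(P, b)

(a_hat, b_hat), cov = curve_fit(model, P, growth, p0=[1e-4, 0.5], maxfev=10000)
b_se = np.sqrt(cov[1, 1])

print(f"estimated b = {b_hat:.2f} (standard error {b_se:.2f})")
# The necessary condition sketched above: b should not be significantly below 0.5,
# i.e. the upper end of an approximate 95% interval should reach 0.5.
print("significantly below 0.5:", b_hat + 1.96 * b_se < 0.5)
```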

I believe this sort of fallacy is present in the current draft of Ben's paper, where it says, "Kremer’s primary regression results don’t actually tell us anything that we didn’t already know: all they say is that the population growth rate has increased."

I think this was an unclear statement on my part. I’m referring to the linear and non-linear regressions that Kremer runs on his population dataset (Tables II and IV), showing that population is significantly predictive of population growth rates for subsets that contain the Industrial Revolution. I didn’t mean to include his tests for heteroskedasticity or stability in that comment.

In my first attack on modeling long-term growth, I chose to put a lot of work into the simpler hyperbolic model because I saw an opportunity to improve its statistical expression, in particular by modeling how random growth shocks at each infinitesimal moment feed into the growth process to shape the probability distribution for growth over finite periods such as 10 years. This seemed potentially useful for two reasons. For one, since it was hard to do, it seemed better to do it in a simpler model first.

For another, it allowed a rigorous test of whether second-order effects--the apparently episodic character of growth accelerations--could be parsimoniously viewed as mere noise within a simpler pattern of long-term acceleration. Within the particular structure of my model, the answer was no. For example, after being fit to the GWP data for 10,000 BCE to 1700 CE, my model is surprised at how high GWP was in 1820, assigning that outcome a p value of ~0.1. Ben's paper presents similar findings, graphically.

Just wanted to say that I believe this is useful too! Beyond the reasons you list here, I think that your modeling work also gives a really interesting insight into — and raises really interesting questions about — the potential for path-dependency in the human trajectory. I found it very surprising, for example, that re-rolling out the fitted model from 10,000BC could give such a wide range of potential dates for the growth takeoff.

But, as noted, it's not clear that stipulating an episodic character should in itself shift one's priors on the possibility of singularity-like developments.

I think that it should make a difference, although you’re right to suggest that the difference may not be huge. If we were fully convinced that the episodic model was right, then one natural outside view perspective would be: “OK, the growth rate has jumped up twice over the course of human history. What are the odds it will happen at least once more?”

This particular outside view should spit out a greater than 50% probability, depending on the prior used. It will be lower than the probability that the hyperbolic-trend-extrapolation outside view spits out, but, by any conventional standard, it certainly won’t be low!

Whichever view of economic history we prefer, we should make sure to have our seatbelts buckled.


  1. I’m saying Kremer’s “theory” rather than Kremer’s “model” to avoiding ambiguity: when I mention “models” in this comment I always mean statistical models, rather than growth models. ↩︎

  2. I don’t know, of course, if Kremer would actually frame the empirical part of the paper quite this way. But if all the paper showed is that growth increased around the Industrial Revolution, this wouldn’t really be a very new/informative result. The fact that he’s also saying something about pre-modern growth dynamics (potentially back to 1 million BC) seems like the special thing about the paper — and the thing the paper emphasizes throughout. ↩︎

  3. To stretch his growth theory in an unfair way: If there’s a slight low-hanging fruit effect, then the general theory suggests that — if you kept the world exactly as it was in 10000BC, but bumped its population up to 2020AD levels (potentially by increasing the size of the Earth) — then these hunter-gatherer societies would soon start to experience much higher rates of economic growth/innovation than what we’re experiencing today. ↩︎

Comment by ben-garfinkel on Asking for advice · 2020-09-09T17:53:59.426Z · score: 3 (2 votes) · EA · GW

I would also like to come out of the woodwork as someone who finds Calendly vaguely annoying, for reasons that are entirely opaque to me.

(Although it's also unambiguously more convenient for me when people send me Calendly links -- and, given the choice, I think I'd mostly like people to keep doing this.)

Comment by ben-garfinkel on Does Economic History Point Toward a Singularity? · 2020-09-09T16:25:01.772Z · score: 14 (4 votes) · EA · GW

If one believed the numbers on wikipedia, it seems like Chinese growth was also accelerating a ton and it was not really far behind on the IR, such that I wouldn't expect to be able to easily eyeball the differences.

I believe the population surge is closely related to the European population surge: it's largely attributed to the Columbian exchange + expanded markets/trade. One of the biggest things is that there's an expansion in the land under cultivation, since potatoes and maize can be grown on marginal land that wouldn't otherwise work well for rice or wheat, and (probably) a decline in living standards that's offsetting the rise in population. From the book 1493 (ch. 5):

Neither rice nor wheat, China’s two most important staples, would grow in the shack people’s marginal land. The soil was too thin for wheat; on steep slopes, the irrigation for rice paddies requires building terraces, the sort of costly, hugely laborious capital improvement project unlikely to be undertaken by renters. Almost inevitably, they turned to American crops: maize, sweet potato, and tobacco. Maize (Zea mays) can thrive in amazingly bad land and grows quickly, maturing in less time than barley, wheat, and millet. Brought in from the Portuguese at Macao, it was known as “tribute wheat,” “wrapped grain,” and “jade rice.” Sweet potatoes will grow where even maize cannot, tolerating strongly acid soils with little organic matter and few nutrients....

In their quest for social stability, the Ming had prohibited people from leaving their home regions. Reversing course, the Qing actively promoted a westward movement. Much as the United States encouraged its citizens to move west in the nineteenth century and Brazil provided incentives to occupy the Amazon in the twentieth, China’s new Qing masters believed that filling up empty spaces was essential to the national destiny.... Lured by tax subsidies and cheap land, migrants from the east swarmed into the western hills.... They looked at the weathered, craggy landscape, so unwelcoming to rice—and they, too, planted American crops....

The amount of cropland soared, followed by the amount of food grown on that cropland, and then the population.

There's obviously a major risk of hindsight bias here, but I think there's almost a consensus among economic historians that China wasn't on track toward an industrial revolution anytime soon. There aren't really signs of innovation picking up during this period: "the prosperity engendered by quantitative growth in output masked the lack of significant innovation in productive technologies" (The Economic History of China, p. 336). Estimates seem to vary widely, and I don't know what the error bars are here, but the favored estimates in TECHC (taken from a Chinese-language paper by Liu Ti) also show the industrial sector of the economy actually shrinking by half between 1600 and 1840 and real per-capita incomes shrinking by about a quarter.

It's also a common view that China was entering a period of decline at the start of the nineteenth century (partly due to population pressure and ecological damage from land conversion). From the same book (p. 361):

[T]he economic growth of the nineteenth century could not be sustained indefinitely. There is considerable evidence that the Chinese economy had seriously begun to exhaust its productive capacities by 1800.

Basically, I think the story is that: There was another 2-3 century "efflorescence" in China, but it wasn't really associated with either technological innovation or an expansion of industry. The total population growth numbers were probably unusually big, relative to other efflorescences, but this doesn't imply that this was an unusually innovative period; the unusual size of the surge may just reflect the fact that there was a black-swan-ish ecological event (the sudden transfer of several New World crops) around the start of the period. The growth surge was unsustainable, as all previous growth surges had been, and China was on track to fall back down to a lower level of development.


EDIT: One more quote, from A Culture of Growth (p. 317; emph. mine):

We will never know whether without the rise of the West, the Orient would have been able to replicate something similar, given enough time. It seems unlikely, but there is no way of knowing if they would have stumbled upon steam power or the germ theory of disease. It is true that the consensus of modern scholarship has remained of the opinion that by 1800 the bulk of output in Chinese industry employed a technology very little different from that under the Song (Richardson, 1999, pp. 54–55). At the level of the economy as a whole, this is an overstatement: Chinese agriculture adopted new crops such as peanuts and sweet potatoes, some of which were introduced by the intercontinental ecological arbitrage practiced by European explorers in the sixteenth century. Stagnation is therefore too strong a word, but comparing Chinese technological achievements not only with those of the West but also with its own successes during the Song clearly indicates a decelerating progress. Elvin (1996, p. 93), after studying the missed opportunities of hydraulic technology adoption in China, concludes that there were strong and perceived needs, and few constraints in adopting such techniques. And yet there was minimal advance. China’s technological somnolence was rudely interrupted by the exposure to Western technology in the nineteenth century.

Comment by ben-garfinkel on Does Economic History Point Toward a Singularity? · 2020-09-08T23:41:49.790Z · score: 4 (3 votes) · EA · GW

My sense of that comes from: (i) in growth numbers people usually cite, Europe's growth was absurdly fast from 1000AD - 1700AD (though you may think those numbers are wrong enough to bring growth back to a normal level) (ii) it seems like Europe was technologically quite far ahead of other IR competitors.

I'm curious about your take. Is it that:

  • The world wasn't yet historically exceptional by 1700, there have been other comparable periods of rapid progress. (What are the historical analogies and how analogous do you think they are? Is my impression of technological sophistication wrong?)

  • 1700s Europe is quantitatively exceptional by virtue of being the furthest along example, but nevertheless there is a mystery to be explained about why it became even more exceptional rather than regressing to the mean (as historical exceptional-for-their-times civilizations had in the past). I don't currently see a mystery about this (given the level of noise in Roodman's model, which seems like it's going to be in the same ballpark as other reasonable models), but it may be because I'm not informed enough about those historical analogies.

  • Actually the IR may have been inevitable in 1700s Europe but the exact pace seems contingent. (This doesn't seem like a real tension with a continuous acceleration model.)

  • Actually the contingencies you have in mind were already driving the exceptional situation in 1700.

[Caveat to all of the below is that these are vague impressions, based on scattered reading. I invite anyone with proper economic history knowledge to please correct me.]

I'm reasonably sympathetic to the first possibility. I think it’s somewhat contentious whether Europe or China was more ‘developed’ in 1700. In either case, though, my impression is that the state of Europe in 1700 was not unprecedented along a number of dimensions.

The error bars are still pretty large here, but it’s common to estimate that Europe’s population increased by something like 50% between 1500 and 1700. (There was also probably a surge between something like 1000AD and 1300AD, as Western Europe sort of picked itself back up from a state of collapse, although I think the actual numbers are super unclear. Then the 14th century brought famine and the Black Death, which Europe again needed to recover from.)

Something like a 50% increase over a couple of centuries definitely couldn’t have been normal, but it’s also not clearly unprecedented. It seems like population levels in particular regions tended to evolve through a series of surges and contractions. We don't really know these numbers — although, I think, they’re at least inspired by historical records — but the McEvedy/Jones estimates show a 100% population increase in two centuries during the Song Dynasty (1000AD - 1200AD). We know most of these other numbers even less well, but it seems conceivable that other few-century efflorescences were associated with similar overall growth rates: for example, the Abbasid Caliphate, the Roman Republic/Empire during its rise, the Han dynasty, the Mediterranean in the middle of the first century BCE.

These numbers are also presumably sketchy, but England’s estimated GDP-per-capita in 1700AD was also roughly the same as China’s estimated GDP-per-capita in 1000AD (according to a chart in British Economic Growth, 1270-1870); England is also thought to have been richer than other European states, with the exception of the Netherlands.

My impression is that Northwestern Europe’s growth from 1500 to 1700 also wasn’t super innovation-driven: a lot of it was about stuff like expanded trade networks and better internal markets. The maritime technology that supported global trade was enabled by innovation, but (I think) the technology wasn't obviously better than Chinese maritime technology in previous centuries. (E.g. Zheng He.) I think the technological progress that was happening at this point also wasn’t obviously more impressive than the sort of technological progress that happened in China in previous eras. Vaclav Smil (in Transforming the 20th Century) thinks the most technologically innovative time/place in history before 19th century Britain was early Han Dynasty China (roughly 200BC-1AD). The Song Dynasty (1000AD-1300AD) also often gets brought up. I don’t personally know a lot of details about the innovations produced during these periods, although I believe a number of them were basically early (and sometimes better) versions of later European innovations. One specific claim I've encountered is that the volume of iron/steel production was plausibly about the same in 1000AD Song China and in 1700AD Europe.

Here is one good/classic paper on previous economic efflorescences and their implications for our understanding of the Industrial Revolution. I also pulled out a few different long quotes, to make a 3 page summary version here.

Comment by ben-garfinkel on Does Economic History Point Toward a Singularity? · 2020-09-08T12:14:05.877Z · score: 10 (3 votes) · EA · GW

Thanks for the feedback! I probably ought to have said more in the summary.

Essentially:

  • For the 'old data': I run a non-linear regression on the population growth rate as a function of population, for a dataset starting in 10000BC. The function is (dP/dt)/P = a*P^b, where P represents population. If b = 0, this corresponds to exponential growth. If b = 1, this corresponds to the strict version of the Hyperbolic Growth Hypothesis. If 0 < b < 1, this still corresponds to hyperbolic growth, although the growth rate is less than proportional to the population level. I found that if you start in 10000BC and keep adding datapoints, b is not significantly greater than 0 until roughly 1750 (although it is significantly less than 1). Here's a graph of how the value evolves.

    • Since the datapoints are unevenly spaced, it can make sense to weight them in proportion to the length of the interval used to estimate the growth rate for that datapoint. If you do this, then b is actually significantly greater than 0 (although it is still less than 1) for most of the interval. However, this is mostly driven by a single datapoint for the period from 10,000BC to 5,000BC. If you remove this single datapoint, which roughly corresponds to the initial transition to agriculture, then b again isn't significantly greater than 0 until roughly the Industrial Revolution. (Here are the equivalent graphs, with and without the initial datapoint.)

    • A key point is that, if you fit this kind of function to a dataset that includes a large stable increase in the growth rate, you'll typically find that b > 0. (For example: If you run a regression on a dataset where there's no growth before 1700AD, but steady 2% growth after 1700AD, you'll find that b is significantly greater than zero.) Mainly, it's a test of whether there's been a stable increase in the growth rate. So running the test on the full dataset (including the period around the IR) doesn't help us much to distinguish the hyperbolic growth story from the 'phase change'/'inflection point' story. Kremer's paper mainly emphasizes the fact that b approximately equals 1, when you run the regression on the full dataset; I think too much significance has sometimes been attributed to this finding.

    • If you just do direct curve fitting to the data -- comparing an exponential function and a hyperbolic function for b = 1 -- the exponential function is also a better fit for the period from 5000BC until a couple of centuries before the Industrial Revolution. Both functions are roughly similarly bad if you throw in the 10,000BC datapoint. This comparison is just based on the mean squared errors of the two fits. (A minimal illustrative sketch of this comparison, along with the interval weighting mentioned above, appears after this list.)

    • But I also think this data is really unreliable -- I'd classify a lot of the data points as something close to 'armchair guesses' -- so I don't think we should infer much either way.

  • There are also more recent datasets for particular regions (e.g. China) that estimate historical population growth curves on the basis of the relative number of archeological deposits (such as human remains and charcoal) that have been dated to different time periods. There are various corrections that people do to try to account for things like the tendency of deposits to disappear or be destroyed over time. I found that it was a pain to recreate these population curves, from the available datasets, so I actually didn't do any proper statistical analysis using them. (Alex Lintz is currently doing this.)

    • I went entirely off of the graphs and summary statistics given in papers analyzing these datasets, which tend to be interested in pretty different questions. In short: Most of the graphs show pretty huge and condensed growth spikes, which the authors often attribute to the beginning of intensive agriculture within the region; in many of the graphs, the spikes are followed by roughly flat or even declining population levels. The implied population growth rates for the few-thousand-year-periods containing the spikes are also typically comparable to the (admittedly unreliable) population growth rates that people have estimated for the period from 1AD to 1500AD.
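
As referenced in the list above, here is a minimal sketch of the two checks described for the 'old data': the interval-length weighting and the direct exponential-versus-hyperbolic comparison by mean squared error. The population figures are made-up placeholders (not the McEvedy & Jones estimates), so only the mechanics are meant to carry over.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder (year, population in millions) pairs -- illustrative only.
years = np.array([-10000, -5000, -2000, -1000, -500, 0, 500, 1000, 1500, 1700, 1800])
pop = np.array([4., 5., 27., 50., 100., 170., 190., 265., 425., 610., 900.])

interval = np.diff(years)                  # length of each interval (years)
growth = np.diff(np.log(pop)) / interval   # annualised growth rate per interval
P = pop[:-1]                               # population at the start of each interval

def model(P, a, b):
    # (dP/dt)/P = a * P^b: b = 0 is exponential, b = 1 is strict hyperbolic growth
    return a * np.power(P, b)

# Check 1: unweighted vs interval-length-weighted nonlinear least squares.
# curve_fit downweights points with large sigma, so sigma = 1/sqrt(interval)
# gives longer intervals proportionally more weight.
b_unweighted = curve_fit(model, P, growth, p0=[1e-4, 0.5], maxfev=10000)[0][1]
b_weighted = curve_fit(model, P, growth, p0=[1e-4, 0.5],
                       sigma=1.0 / np.sqrt(interval), maxfev=10000)[0][1]
print(f"b (unweighted) = {b_unweighted:.2f}, b (interval-weighted) = {b_weighted:.2f}")

# Check 2: direct curve comparison by mean squared error, exponential (b = 0)
# vs strict hyperbolic (b = 1), each with its own least-squares constant a.
a_exp = growth.mean()
a_hyp = (growth * P).sum() / (P ** 2).sum()
print(f"MSE exponential: {np.mean((growth - a_exp) ** 2):.2e}, "
      f"MSE hyperbolic (b=1): {np.mean((growth - a_hyp * P) ** 2):.2e}")
```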

Comment by ben-garfinkel on Does Economic History Point Toward a Singularity? · 2020-09-08T11:47:21.740Z · score: 6 (4 votes) · EA · GW

Economic histories do tend to draw causal arrows between several of these differences, sometimes suggesting a sort of chain reaction, although these narrative causal diagrams are admittedly never all that satisfying; there’s still something mysterious here.

Just to make this more concrete:

One example of an IR narrative that links a few of these changes together is Robert Allen's. To the extent that I understand/remember it, the narrative is roughly: The early modern expansion of trade networks caused an economic boom in England, especially in textile manufacturing. As a result, wages in England became unusually high. These high wages created unusually strong incentives to produce labor-saving technology. (One important effect of the Malthusian conditions is that they make labor dirt cheap.) England, compared to a few other countries that had similarly high wages at other points in history, also had access to really unusually cheap energy; they had huge and accessible coal reserves, which they were already burning as a replacement for wood. The unusually high levels of employment in manufacturing and trade also supported higher levels of literacy and numeracy. These conditions came together to support the development of technologies for harnessing fossil fuels, in the 19th century, and the rise of intensive R&D; these may never have been economically rational before. At this point, there was now a virtuous cycle that allowed England's growth -- which was initially an unsustainable form of growth based on trade, rather than technological innovation -- to become both sustained and innovation-driven. The spark then spread to other countries.

This particular tipping point story is mostly a story about why growth rates increased from the 19th century onward, although the growth surge in the previous few centuries, largely caused by the Columbian exchange and expansion of trade networks, still plays an important causal role; the rapid expansion of trade networks drives British wages up and makes it possible for them to profitably employ a large portion of their population in manufacturing.

Comment by ben-garfinkel on Does Economic History Point Toward a Singularity? · 2020-09-08T10:43:40.772Z · score: 6 (4 votes) · EA · GW

I also pretty strongly have this intuition: the Kremer model, and the explanation it gives for the Industrial Revolution, is in tension with the impressions I've formed from reading the great divergence literature.

Although, to echo Max's comment, you can 'believe' the Kremer model without also thinking that an 18th/19th century Industrial Revolution was inevitable. It depends on how much noise you allow.

One of the main contributions in David Roodman's recent report is to improve our understanding of how noise/stochasticity can result in pretty different-looking growth trajectories, if you roll out the same hyperbolic growth model multiple times. For example, he fits a stochastic model to data from 10000BC to the present, then reruns the model using the fitted parameters. In something like a quarter of the cases, the model spits out a growth takeoff before 1AD.

I believe the implied confidence interval, for when the Industrial Revolution will happen, gets smaller and smaller as you move forward through history. I'm actually not sure, then, how inevitable the model says the IR would be by (e.g.) 1000AD. If it suggests a high level of inevitability in the timing, for instance implying the IR basically had to happen by 2000, then that would be cause for suspicion; the model would likely be substantially understating contingency.

(As one particular contingency you mention: It seems super plausible to me, especially, that if the Americas didn't turn out to exist, then the Industrial Revolution would have happened much later. But this seems like a pretty random/out-of-model fact about the world.)

Comment by ben-garfinkel on Does Economic History Point Toward a Singularity? · 2020-09-07T23:53:21.673Z · score: 4 (3 votes) · EA · GW

So to me it feels like as we add random stuff like "yeah there are revolutions but we don't have any prediction about what they will look like" makes the richer model less compelling. It moves me more towards the ignorant perspective of "sometimes acceleration happens, maybe it will happen soon?", which is what you get in the limit of adding infinitely many ex ante unknown bells and whistles to your model.

I agree the richer stories, if true, imply a more ignorant perspective. I just think it's plausible that the more ignorant perspective is the correct perspective.

My general feeling towards the evolution of the economy over the past ten thousand years, reading historical analysis, is something like: “Oh wow, this seems really complex and heterogeneous. It’d be very surprising if we could model these processes well with a single-variable model, a noise term, and a few parameters with stable values.” It seems to me like we may in fact just be very ignorant.

Does a discontinuous change from fossil-fuel use even fit the data? It doesn't seem to add up at all to me (e.g. doesn't match the timing of acceleration, there are lots of industries that seemed to accelerate without reliance on fossil fuels, etc.), but would only consider a deep dive if someone actually wanted to stake something on that.

Fossil fuels wouldn't be the cause of the higher global growth rates, in the 1500-1800 period; coal doesn't really matter much until the 19th century. The story with fossil fuels is typically that there was a pre-existing economic efflorescence that supported England's transition out of an 'organic economy.' So it's typically a sort of tipping point story, where other factors play an important role in getting the economy to the tipping point.

Is "intensive agriculture" a well-defined thing? (Not rhetorical.) It didn't look like "the beginning of intensive agriculture" corresponds to any fixed technological/social/environmental event (e.g. in most cases there was earlier agriculture and no story was given about why this particular moment would be the moment), it just looked like it was drawn based on when output started rising faster.

I'm actually unsure of this. Something that's not clear to me is to what extent the distinction is being drawn in a post-hoc way (i.e. whether intensive agriculture is being implicitly defined as agriculture that kicks off substantial population growth). I don’t know enough about this.

Doing a regression on yearly growth rates seems like a bad way to approach this.

I don't think I agree, although I’m not sure I understand your objection. Supposing we had accurate data, it seems like the best approach is running a regression that can accommodate either hyperbolic or exponential growth — plus noise — and then seeing whether we can reject the exponential hypothesis. Just noting that the growth rate must have been substantially higher than average within one particular millennium doesn’t necessarily tell us enough; there’s still the question of whether this is plausibly noise.

Of course, though, we have very bad data here -- so I suppose this point doesn't matter too much either way.

If you just keep listing things, it stops being a plausible source of a discontinuity---you then need to give some story for why your 7 factors all change at the same time. If they don't, e.g. if they just vary randomly, then you are going to get back to continuous change.

You don’t need a story about why they changed at roughly the same time to believe that they did change at roughly the same time (i.e. over the same few-century period). And my impression is that, empirically, they did change at roughly the same time. At least, this seems to be commonly believed.

I don’t think we can reasonably assume they’re independent. Economic histories do tend to draw causal arrows between several of these differences, sometimes suggesting a sort of chain reaction, although these narrative causal diagrams are admittedly never all that satisfying; there’s still something mysterious here. On the other hand, higher population levels strike me as a fairly unsatisfying underlying cause.

[[EDIT: Just to be clear, I don't think the phase-transition/inflection-point story is necessarily much more plausible than the noisy hyperbolic story. I don't have very resilient credences here. But I think that, in the absence of good long-run growth data, they're at least comparably plausible. I think that economic history narratives, the fairly qualitative differences between modern and pre-modern economies, and evidence from between-country variation in modern times count for at least as much as the simplicity prior.]]

Comment by ben-garfinkel on Does Economic History Point Toward a Singularity? · 2020-09-07T22:24:35.155Z · score: 3 (2 votes) · EA · GW

Also want to second this! (This is a far more extensive response and summary than I've seen on almost any EA forum post.)

Comment by ben-garfinkel on Does Economic History Point Toward a Singularity? · 2020-09-07T22:21:31.700Z · score: 6 (4 votes) · EA · GW

Hi Paul,

Thanks for your super detailed comment (and your comments on the previous version)!

You are basically comparing "Series of 3 exponentials" to a hyperbolic growth model. I think our default simple hyperbolic growth model should be the one in David Roodman's report (blog post), so I'm going to think about this argument as comparing Roodman's model to a series of 3 noisy exponentials.

I think that Hanson's "series of 3 exponentials" is the neatest alternative, although I also think it's possible that pre-modern growth looked pretty different from clean exponentials (even on average / beneath the noise). There's also a semi-common narrative in which the two previous periods exhibited (on average) declining growth rates, until there was some 'breakthrough' that allowed the growth rate to surge: I suppose this would be a "three s-curve" model. Then there's the possibility that the growth pattern in each previous era was basically a hard-to-characterize mess, but was constrained by a rough upper bound on the maximum achievable growth rate. This last possibility is the one I personally find most likely, of the non-hyperbolic possibilities.

(I think the pre-agricultural period is especially likely to be messy, since I would guess that human evolution and climate/environmental change probably explain the majority of the variation in population levels within this period.)

It feels like you think 3 exponentials is the higher prior model. But this model has many more parameters to fit the data, and even ignoring that "X changes in 2 discontinuous jumps" doesn't seem like it has a higher prior than "X goes up continuously but stochastically." I think the only reason we are taking 3 exponentials seriously is because of the same kind of guesswork you are dismissive of, namely that people have a folk sense that the industrial revolution and agricultural revolutions were discrete changes. If we think those folk senses are unreliable, I think that continuous acceleration has the better prior. And at the very least we need to be careful about using all the extra parameters in the 3-exponentials model, since a model with 2x more parameters should fit the data much better.

I think this is a good and fair point. I'm starting out sympathetic toward the breakthrough/phase-change perspective, in large part because this perspective fits well with the kinds of narratives that economic historians and world historians tend to tell. It's reasonable to wonder, though, whether I actually should give much weight to these narratives. Although they rely on much more than just world GDP estimates, their evidence base is also far from great, and they disagree on a ton of issues (there are a bunch of competing economic narratives that only partly overlap).

A lot of my prior comes down to my impression that the dynamics of growth just *seem* very different to me for forager societies, agricultural/organic societies, and industrial/fossil-fuel societies. In the forager era, for example, it's possible that, for the majority of the period, human evolution was the main underlying thing supporting growth. In the farmer era, the main drivers were probably land conversion, the diffusion and further evolution of crops/animals, agricultural capital accumulation (e.g. more people having draft animals), and piecemeal improvements in farming/land-conversion techniques discovered through practice. I don’t find it difficult to imagine that the latter drivers supported higher growth rates. For example: the fact that non-sedentary groups can’t really accumulate capital, in the same way, seems like a pretty fundamental distinction.

The industrial era is, in comparison, less obviously different from the farming era, but it also seems pretty different. My list of pretty distinct features of pre-modern agricultural economies is: (a) the agricultural sector constituted the majority of the economy; (b) production and (to a large extent) transportation were limited by the availability of agricultural or otherwise ‘organic’ sources of energy (plants to power muscles and produce fertiliser); (c) transportation and information transmission speeds were largely limited by windspeed and the speed of animals; (d) nearly everyone was uneducated, poor, and largely unfree; (e) many modern financial, legal, and political institutions did not exist; (f) certain cultural attitudes (such as hatred of commerce and lack of belief in the possibility of progress) were much more common; and (g) scientifically-minded research and development projects played virtually no role in the growth process.

I also don’t find it too hard to believe that some subset of these changes help to explain why modern industrialised economies can grow faster than premodern agricultural economies: here, for example, is a good book chapter on the growth implications of relying entirely on ‘organic’ sources of energy for production. The differences strike me as pretty fundamental and pretty extensive. Although this impression is also pretty subjective and could easily amount to seeing dividing lines where they don’t exist.

Another piece of evidence is that there’s extreme between-state variation in growth rates, in modern times, which isn’t well-explained by factors like population size. We’ve seen that it is possible for something to heavily retard/bottleneck growth (e.g. bad political institutions), then for growth to surge following the removal of the bottleneck. It's not too hard to imagine that pre-modern states had lots of blockers. They were in some ways similar to 20th/21st century growth basket cases, only with some important extra growth retardants -- like a lack of fossil fuels and artificial fertilizer, a lack of knowledge that material progress is possible, etc. -- thrown on top.

There may also be some fundamental meta-prior that matters, here, about the relative weight one ought to give to simple unified models vs. complex qualitative/multifactoral stories.

On top of that, the post-1500 data is fit terribly by the "3 exponentials" model. Given that continuous acceleration very clearly applies in the only regime where we have data you consider reliable, and given that it already seemed simpler and more motivated, it seems pretty clear to me that it should have the higher prior, and the only reason to doubt that is because of growth folklore.

I don’t think the post-1500 data is too helpful for distinguishing between the ‘long run trend’ and ‘few hundred year phase transition’ perspectives.

If there was something like a phase transition, from pre-modern agricultural societies to modern industrial societies, I don’t see any particular reason to expect the growth curve during the transition to look like the sum of two exponentials. (I especially don’t expect this at the global level, since diffusion dynamics are so messy.)

The data is also still pretty bad. While we can, I think, be pretty confident that there was a lot of growth between 1500 and 1800 (way more than between 1200 and 1500), the exact shape of this curve is still really uncertain. The global population estimates are still ‘guesstimates’ for most parts of the world throughout this period. Even the first half of the twentieth century is pretty sketchy; IIRC, as late as the 1970s, there were attempts to estimate the present population of China that differed by up to 15%. (I think the Atlas of World Population History mentions this.) We shouldn’t read too much into the exact curve shape.

A further complication is that there’s a pretty unusual ecological event at the start of the period. Although this is uncertain, the fairly abrupt transfer of species from the New World to the Old World (esp. potatoes and corn) is thought to be a major cause of the population surge. This strikes me as a sort of flukey one-off event that obscures the ‘natural’ growth dynamics for this period; although you could also view it as endogenous to technological progress.

In particular, although standard estimates of growth from 1AD to 1500AD are significantly faster than growth between 10kBC and 1AD, those estimates are sensitive to factor-of-1.5 error in estimates of 1AD population, and real errors could easily be much larger than that.

I wouldn't necessarily say they were significantly faster. It depends a bit on exactly how you run this test, but, when I run a regression for "(dP/dt)/P = a*P^b" (where P is population) on the dataset up until 1700AD, I find that the b parameter is not significantly greater than 0. (The confidence interval is roughly -.2 to .5, with zero corresponding to exponential growth.)

Of course, though, the badness of the data undercuts this finding: it doesn't mean much that the data shows no significant difference if the data itself isn't reliable.
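For concreteness, here is a rough sketch of the kind of regression I have in mind (a simple log-log least-squares version, not necessarily the exact procedure, with made-up round population figures purely to illustrate the mechanics):

```python
# Sketch of fitting (dP/dt)/P = a * P^b: regress log growth rates on log population
# and check whether b is distinguishable from 0 (b = 0 is exponential growth,
# b > 0 is accelerating, hyperbolic-style growth). Population figures below are
# illustrative round numbers in millions, not the estimates discussed above.
import numpy as np
from scipy import stats

years = np.array([-10000, -5000, -1000, 1, 1000, 1500, 1700], dtype=float)
pop = np.array([4, 5, 50, 230, 300, 450, 640], dtype=float)  # placeholder guesstimates

# Average annual growth rate over each interval, attributed to the interval's start.
growth = np.diff(np.log(pop)) / np.diff(years)

res = stats.linregress(np.log(pop[:-1]), np.log(growth))
b, se = res.slope, res.stderr
print(f"b = {b:.2f}, rough 95% CI = ({b - 2*se:.2f}, {b + 2*se:.2f})")
```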

Even taking the radiocarbon data as given I don't agree with the conclusions you are drawing from that data. It feels like in each case you are saying "a 2-exponential model fits fine" but the 2 exponentials are always different. The actual events (either technological developments or climate change or population dynamics) that are being pointed to as pivotal aren't the same across the different time series and so I think we should just be analyzing these without reference to those events (no suggestive dotted lines :) )

The papers typically suggest that the thing kicking off the growth surge, within a particular millennium, is the beginning of intensive agriculture in that region — so I don’t think the pivotal triggering event is really different. Although I haven’t done any investigation into how legit these suggestions are. It’s totally conceivable that we basically don’t know when intensive agriculture began in these different areas, or that the transition was so smeared out that it’s basically arbitrary to single out any particular millennium as special. If the implicit dotted lines are being drawn post-hoc, then that would definitely be cause for suspicion about the story being told.

I currently don't trust the population data coming from the radiocarbon dating. My current expectation is that after a deep dive I would not end up trusting the radiocarbon dating at all for tracking changes in the rate of population growth when the populations in question are changing how they live and what kinds of artifacts they make (from my perspective, that's what happened with the genetics data, which wasn't caveated so aggressively in the initial draft I reviewed). I'd love to hear from someone who actually knows about these techniques or has done a deep dive on these papers though.

I’m also pretty unsure of this. I’d maybe give about a 1/3 probability to them being approximately totally uninformative, for the purposes of distinguishing the two perspectives. (I think the other datasets are probably approximately totally uninformative.) Although the radiocarbon dates are definitely more commonly accepted as proxies for historic human population levels than the genetic data, there are also a number of skeptical papers. I haven’t looked deeply enough into the debate, although I probably ought to have.

Comment by ben-garfinkel on Does Economic History Point Toward a Singularity? · 2020-09-07T10:11:22.728Z · score: 4 (3 votes) · EA · GW

Thanks for the clarifying comment!

I'd hoped that effective population size growth rates might be at-least-not-completely-terrible proxies for absolute population size growth rates. If I remember correctly, some of these papers do present their results as suggesting changes in absolute population size, but I think you're most likely right: the relevant datasets probably can't give us meaningful insight into absolute population growth trends.

Comment by ben-garfinkel on Does Economic History Point Toward a Singularity? · 2020-09-03T22:11:54.465Z · score: 14 (9 votes) · EA · GW

I should have been clearer in the summary: the hypothesis refers to the growth rate of total economic output (GDP) rather than output-per-person (GDP per capita). Output-per-person is typically thought to have been pretty stagnant until roughly the Industrial Revolution, although just how stagnant it was is controversial. Total output definitely did grow substantially, though.

What I'm calling the Hyperbolic Growth Hypothesis is at least pretty mainstream. Michael Kremer's paper is pretty classic (it's been cited about 2000 times) and some growth theory textbooks repeat its main claim. Although I don't have a great sense of exactly how widely accepted it is.

Comment by ben-garfinkel on Ben Garfinkel's Shortform · 2020-09-03T18:19:14.748Z · score: 3 (2 votes) · EA · GW

As an aside, I'm not sure I agree that reducing safety-related externalities is largely an engineering problem, unless we include social engineering. Things like organizational culture, checklists, maintenance policies, risk assessments, etc., also seem quite important to me. (Or in the nuclear policy example even things like arms control, geopolitics, ...)

I think this depends a bit what class of safety issues we're thinking about. For example, a properly functioning nuke is meant to explode and kills loads of people. A lot of nuclear safety issues are then borderline misuse issues: people deciding to use them when really they shouldn't, for instance due to misinterpretations of others' actions. Many other technological 'accident risks' are less social, although never entirely non-social (e.g. even in the case of bridge safety, you still need to trust some organization to do maintenance/testing properly.)

That seems correct all else equal. However, it can be outweighed by actors seeking relative gains or other competitive pressures.

I definitely don't want to deny that actors can sometimes have incentives to use technologies whose safety failures could make the world much worse. But you do need the right balance of conditions to hold: individual units of the technology need to offer their users large enough benefits and small enough personal safety risks, need to create large enough external safety risks, and need to have safety levels that increase slowly enough over time.

Weapons of mass destruction are sort of special in this regard. They can in some cases have exceptionally high value to their users (deterring or preventing invasion), which makes them willing to bear unusually high risks. Since their purpose is to kill huge numbers of people on very short notice, there's naturally a risk of them killing huge numbers of people (but under the wrong circumstances). This risk is also unusually hard to reduce over time, since it's often more about people making bad decisions than it is about the technology 'misbehaving' per se; there is also a natural trade-off between increasing readiness and decreasing the risk of bad usage decisions being made. The risk also naturally falls very heavily on other actors (since the technology is meant to harm other actors).

I do generally find it easiest to understand how AI safety issues could make the world permanently worse when I imagine superweapon/WMD-like systems (of the sort that also seem to be imagined in work like "Racing to the Precipice"). I think existential safety risks become a much harder sell, though, if we're primarily imagining non-superweapon applications and distributed/gradual/what-failure-looks-like-style scenarios.

I also think it's worth noting that, on an annual basis, even nukes don't have a super high chance of producing global catastrophes through accidental use; if you have a high enough discount rate, and you buy the theory that they substantially reduce the risk of great power war, then it's even possible (maybe not likely) that their existence is currently positive EV by non-longtermist lights.

Comment by ben-garfinkel on Ben Garfinkel's Shortform · 2020-09-03T15:55:56.430Z · score: 7 (5 votes) · EA · GW

Some thoughts on risks from unsafe technologies:

It’s hard for the development of an unsafe technology to make the world much worse, in expectation, if safety failures primarily affect the technology’s users.

For example: If the risk of dying in a plane crash outweighs the value of flying, then people won't fly. If the risk of dying doesn't outweigh the benefit, then people will fly, and they'll be (on average) better off despite occasionally dying. Either way, planes don't make the world worse.

For an unsafe technology to make the world much worse, the risk from accidents will typically need to fall primarily on non-users. Unsafe technologies that primarily harm non-users (e.g. viruses that can escape labs) are importantly different than unsafe technologies that primarily harm users (e.g. bridges that might collapse). Negative externalities are essential to the story.

Overall, though, I tend to worry less about negative externalities from safety failures than I do about negative externalities from properly functioning technologies. Externalities from safety failures grow the more unsafe the technology is; but, the more unsafe the technology is, the less incentive anyone has to develop or use it. Eliminating safety-related externalities is also largely an engineering problem, that everyone has some incentive to solve. We therefore shouldn’t expect these externalities to stick around forever — unless we lose our ability to modify the technology (e.g. because we all die) early on. On the other hand, if the technology produces massive negative externalities even when it works perfectly, it's easier to understand how its development could make the world badly and lastingly worse.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-08-06T22:43:02.407Z · score: 1 (1 votes) · EA · GW

Hi Ofer,

Thanks for the comment!

I actually do think that the instrumental convergence thesis, specifically, can be mapped over fine, since it's a fairly abstract principle. For example, this recent paper formalizes the thesis within a standard reinforcement learning framework. I just think that the thesis at most weakly suggests existential doom, unless we add in some other substantive theses. I have some short comments on the paper, explaining my thoughts, here.

Beyond the instrumental convergence thesis, though, I do think that some bits of the classic arguments are awkward to fit onto concrete and plausible ML-based development scenarios: for example, the focus on recursive self-improvement, and the use of thought experiments in which natural language commands, when interpreted literally and single-mindedly, lead to unforeseen bad behaviors. I think that Reframing Superintelligence does a good job of pointing out some of the tensions between classic ways of thinking and talking about AI risk and current/plausible ML engineering practices.

For the sake of concreteness, consider the algorithm that Facebook uses to create the feed that each user sees (which is an example that Stuart Russell has used). Perhaps there's very little public information about that algorithm, but it's reasonable to guess they're using some deep RL algorithm and a reward function that roughly corresponds to user engagement. Conditioned on that, do you agree that in the limit (i.e. when using whatever algorithm and architecture they're currently using, at a sufficiently large scale), the arguments about instrumental convergence seem to apply?

This may not be what you have in mind, but: I would be surprised if the FB newsfeed selection algorithm became existentially damaging (e.g. omnicidal), even in the limit of tremendous amounts of training data and compute. I don't know how the algorithm actually works, but as a simplification: let's imagine that it produces an ordered list of posts to show a user, from the set of recent posts by their friends, and that it's trained using something like the length of the user's FB browsing session as the reward. I think that, if you kept training it, nothing too weird would happen. It might produce some unintended social harms (like addiction, polarization, etc.), but the system wouldn't, in any meaningful sense, have long-run objectives (due to the shortness of sessions). It also probably wouldn't have the ability or inclination to manipulate the external world in the pursuit of complex schemes. Figuring out how to manipulate the external world in precise ways would require a huge amount of very weird exploration, deep in a section of the space of possible policies where most of the policies are terrible at maximizing reward; in the unlikely event that the necessary exploration happened, and the policy started moving in this direction, I think it would be conspicuous before the newsfeed selection algorithm does something like kill everyone to prevent ongoing FB sessions from ending (if this is indeed possible given the system's limited space of possible actions.)
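To make this simplification concrete, here is a toy sketch of that kind of training setup (my own stand-in, with a simulated environment, not Facebook's actual system): the policy scores candidate posts, the action is the resulting ranking, and the only reward signal is a simulated session length.

```python
# Toy stand-in for the setup described above. The 'policy' is a linear scorer over
# per-post features; the action is a ranking of recent posts; the reward is a
# simulated session length; the update is a crude reward-weighted nudge toward the
# features of highly ranked posts (a stand-in for a proper policy-gradient step).
import numpy as np

rng = np.random.default_rng(0)
N_POSTS, N_FEATURES = 20, 5
weights = np.zeros(N_FEATURES)

def simulate_session_length(posts, ranking):
    """Pretend environment: sessions last longer when top-ranked posts are engaging
    (here, 'engagingness' is just feature 0, plus a little noise)."""
    return max(0.0, posts[ranking[:3], 0].sum() + rng.normal(scale=0.1))

baseline = 0.0
for step in range(2000):
    posts = rng.normal(size=(N_POSTS, N_FEATURES))       # recent posts by friends
    scores = posts @ weights + rng.gumbel(size=N_POSTS)  # noisy scores -> exploration
    ranking = np.argsort(-scores)                        # the action: an ordering
    reward = simulate_session_length(posts, ranking)     # the reward: session length
    baseline = 0.99 * baseline + 0.01 * reward           # running-average baseline
    weights += 0.01 * (reward - baseline) * posts[ranking[:3]].mean(axis=0)
```

The point of the sketch is just how narrow the interface is: the policy orders posts, and nothing in the training signal gives it a handle on the wider world.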

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-08-03T13:31:02.862Z · score: 2 (2 votes) · EA · GW

The key difference is that I don't think orthogonality thesis, instrumental convergence or progress being eventually fast are wrong - you just need extra assumptions in addition to them to get to the expectation that AI will cause a catastrophe.

Quick belated follow-up: I just wanted to clarify that I also don't think that the orthogonality thesis or instrumental convergence thesis are incorrect, as they're traditionally formulated. I just think they're not nearly sufficient to establish a high level of risk, even though, historically, many presentations of AI risk seemed to treat them as nearly sufficient. Insofar as there's a mistake here, the mistake concerns the way conclusions have been drawn from these theses; I don't think the mistake is in the theses themselves. (I may not stress this enough in the interview/slides.)

On the other hand, progress/growth eventually becoming much faster might be wrong (this is an open question in economics). The 'classic arguments' also don't just predict that growth/progress will become much faster. In the FOOM debate, for example, both Yudkowsky and Hanson start from the position that growth will become much faster; their disagreement is about how sudden, extreme, and localized the increase will be. If growth is actually unlikely to increase in a sudden, extreme, and localized fashion, then this would be a case of the classic arguments containing a "mistaken" (not just insufficient) premise.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-25T20:05:27.061Z · score: 3 (3 votes) · EA · GW

Because instead of disaster striking only if we can't figure out the right goals to give to the AI, it can also be the case that we know what goals we want to give it, but due to constraints of the development process, we can't give it those goals and can only build AI with unaligned goals. So it seems to me that the latter scenario can also be rightly described as "exogenous deadline of the creep of AI capability progress". (In both cases, we can try to refrain from developing/deploying AGI, but it may be a difficult coordination problem for humanity to stay in a state where we know how to build AGI but chooses not to, and in any case this consideration cuts equally across both scenarios.)

I think that the comment you make above is right. In the podcast, we only discuss this issue in a super cursory way:

(From the transcript) A second related concern, which is a little bit different, is that you could think this is an argument against us naively going ahead and putting this thing out into the world that’s as extremely misaligned as a dust minimizer or a paperclip maximizer, but we could still get to the point where we haven’t worked out alignment techniques.... No sane person would keep running the dust minimizer simulation once it’s clear this is not the thing we want to be making. But maybe not everyone is the same. Maybe someone wants to make a system that pursues some extremely narrow objective like this extremely effectively, even though it would be clear to anyone with normal values that you’re not in the process of making a thing that you want to actually use. Maybe somebody who wants to cause destruction could conceivably plough ahead. So that might be one way of rescuing a deadline picture. The deadline is not when will people have intelligent systems that they naively throw out into the world. It’s when do we reach the point where someone wants to create something that, in some sense, is intuitively pursuing a very narrow objective, has the ability to do that.

Fortunately, I'm not too worried about this possibility. Partly, as background, I expect us to have moved beyond using hand-coded reward functions -- or, more generally, what Stuart Russell calls the "standard model" -- by the time we have the ability to create broadly superintelligent and highly agential/unbounded systems. There are really strong incentives to do this, since there are loads of useful applications that seemingly can't be developed using hand-coded reward functions. This is some of the sense in which, in my view, capabilities and alignment research is mushed up. If progress is sufficiently gradual, I find it hard to imagine that the ability to create things like world-destroying paperclippers comes before (e.g.) the ability to make at least pretty good use of reward modeling techniques.

(To be clear, I recognize that loads of alignment researchers also think that there will be strong economic incentives for alignment research. I believe there's a paragraph in Russell's book arguing this. I think DM's "scalable agent alignment" paper also suggests that reward modeling is necessary to develop systems that can assist us in most "real world domains." Although I don't know how much optimism other people tend to take from this observation. I don't actually know, for example, whether or not Russell is less optimisic than me.)
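As a concrete illustration of the contrast between hand-coded reward functions and the reward modeling techniques mentioned above, here is a minimal sketch (mine, with synthetic data, not the setup from any of the cited papers): rather than writing down a reward formula, fit a simple reward model to pairwise preference judgments and use that as the training signal.

```python
# Minimal reward-modeling sketch: learn a linear reward model from pairwise
# comparisons using a Bradley-Terry-style objective. Everything here (features,
# the 'human' preference oracle) is a synthetic placeholder.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -0.5, 0.0, 2.0])   # hidden 'human preferences' (synthetic)

def human_prefers(a, b):
    """Stand-in for a human comparing two outcomes, described by feature vectors."""
    return (a @ true_w) > (b @ true_w)

# Collect comparisons as (preferred, rejected) pairs.
pairs = []
for _ in range(500):
    a, b = rng.normal(size=4), rng.normal(size=4)
    pairs.append((a, b) if human_prefers(a, b) else (b, a))

# Fit the reward model by gradient ascent on the preference log-likelihood.
w = np.zeros(4)
for _ in range(200):
    grad = np.zeros(4)
    for preferred, rejected in pairs:
        p = 1.0 / (1.0 + np.exp(-(preferred - rejected) @ w))  # P(preferred beats rejected)
        grad += (1.0 - p) * (preferred - rejected)
    w += 0.05 * grad / len(pairs)

# w now points in roughly the same direction as true_w (up to scale), and could be
# used as a training signal in place of a hand-coded reward formula.
```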

If we do end up in a world where people know they can create broadly superintelligent and highly agential/unbounded AI systems, but we still haven't worked out alternatives to Russell's "standard model," then no sane person really has any incentive to create and deploy these kinds of systems. Training up a broadly superintelligent and highly agential system using something like a hand-coded reward function is likely to be an obviously bad idea; if it's not obviously bad, a priori, then it will likely become obviously bad during the training process. There wouldn't be much of a coordination problem, since, at least in normal circumstances, no one has an incentive to knowingly destroy themselves.

If I then try to tell a story where humanity goes extinct, due to a failure to move beyond the standard model in time, two main scenarios come to mind.

Doomsday Machine: States develop paperclipper-like systems, while thinking of them as doomsday machines, to serve as a novel alternative or complement to nuclear deterrents. They end up being used, either accidentally or intentionally.

Apocalyptic Residual: The ability to develop paperclipper-like systems diffuses broadly. Some of the groups that gain this ability have apocalyptic objectives. These groups intentionally develop and deploy the systems, with the active intention of destroying humanity.

The first scenario doesn't seem very likely to me. Although this is obviously very speculative, paperclippers seem much worse than nuclear or even biological deterrents. First, your own probability of survival, if you use a paperclipper, may be much lower than your probability of survival if you use nukes or biological weapons. Second, and somewhat ironically, it may actually be hard to convince people that your paperclipper system can do a ton of damage; it seems hard to know that the result would actually be as bad as feared without real-world experience of using one. States would also likely be slow to switch to this new deterrence strategy, providing even more time for alignment techniques to be worked out. As a further bit of friction/disincentive, these systems might also just be extremely expensive (depending on compute or environment design requirements). Finally, for doomsday to occur, it's actually necessary for a paperclipper system to be used -- and for its effect to be as bad as feared. The history of nuclear weapons suggests that the annual probability of use is probably pretty low.

The second scenario also doesn't seem very likely to me, since: (a) I think there would probably be an initial period where large quantities of resources (e.g. compute and skilled engineers) are required to make world-destroying paperclippers. (b) Only a very small portion of people want to destroy the world. (c) There would be unusually strong incentives for states to prevent apocalyptic groups or individuals from gaining access to the necessary resources.

Although see Asya's "AGI in Vulnerable World" post for a discussion of some conditions under which malicious use concerns might loom larger.

(Apologies for the super long response!)

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-25T20:01:24.394Z · score: 1 (1 votes) · EA · GW

I continue to have a lot of uncertainty about how likely it is that AI development will look like "there’s this separate project of trying to figure out what goals to give these AI systems" vs a development process where capability and goals are necessarily connected. (I didn't find your arguments in favor of the latter very persuasive.) For example it seems GPT-3 can be seen as more like the former than the latter. (See this thread for background on this.)

I don't think I caught the point about GPT-3, although this might just be a matter of using concepts differently.

In my mind: To whatever extent GPT-3 can be said to have a "goal," its goal is to produce text that it would be unsurprising to find on the internet. The training process both imbued it with this goal and made the system good at achieving it.

There are other things we might want spin-offs of GPT-3 to do: For example, compose better-than-human novels. Doing this would involve shifting both what GPT-3 is "capable" of doing and shifting what its "goal" is. (There's not really a clean practical or conceptual distinction between the two.) It would also probably require making progress on some sort of "alignment" technique, since we can't (e.g.) write down a hand-coded reward function that quantifies novel quality.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-23T16:16:25.288Z · score: 9 (6 votes) · EA · GW

Michael Huemer's "Ethical Intuitionism" and David Enoch's "Taking Morality Seriously" are both good; Enoch's book is, I think, better, but Huemer's book is a more quick and engaging read. Part Six of Parfit's "On What Matters" is also good.

I don't exactly think that non-naturalism is "plausible," since I think there are very strong epistemological objections to it. (Since our brain states are determined entirely by natural properties of the world, why would our intuitions about non-natural properties track reality?) It's more that I think the alternative positions are self-undermining or have implications that are unacceptable in other ways.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T14:28:09.089Z · score: 3 (2 votes) · EA · GW

I thought a bit about humans, but I feel that this is much more complicated and needs more nuanced definitions of goals. (is avoiding suffering a terminal goal? It seems that way, but who is doing the thinking in which it is useful to think of one thing or another as a goal? Perhaps the goal is to reduce specific neuronal activity for which avoiding suffering is merely instrumental?)

I'm actually not very optimistic about a more complex or formal definition of goals. In my mind, the concept of a "goal" is often useful, but it's sort of an intrinsically fuzzy or fundamentally pragmatic concept. I also think that, in practice, the distinction between an "intrinsic" and "instrumental" goal is pretty fuzzy in the same way (although I think your definition is a good one).

Ultimately, agents exhibit behaviors. It's often useful to try to summarize these behaviors in terms of what sorts of things the agent is fundamentally "trying" to do and in terms of the "capabilities" that the agent brings to bear. But I think this is just sort of a loose way of speaking. I don't really think, for example, that there are principled/definitive answers to the questions "What are all of my cat's goals?", "Which of my cat's goals are intrinsic?", or "What's my cat's utility function?" Even if we want to move beyond behavioral definitions of goals, to ones that focus on cognitive processes, I think these sorts of questions will probably still remain pretty fuzzy.

(I think that this way of thinking -- in which evolutionary or engineering selection processes ultimately act on "behaviors," which can only somewhat informally or imprecisely be described in terms of "capabilities" and "goals" -- also probably has an influence on my relative optimism about AI alignment.)

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T23:45:42.370Z · score: 2 (2 votes) · EA · GW

Hi Sammy,

Thanks for the links -- both very interesting! (I actually hadn't read your post before.)

I've tended to think of the intuitive core as something like: "If we create AI systems that are, broadly, more powerful than we are, and their goals diverge from ours, this would be bad -- because we couldn't stop them from doing things we don't want. And it might be hard to ensure, as we're developing increasingly sophisticated AI systems, that there aren't actually subtle but extremely important divergences in some of these systems' goals."

At least in my mind, both the classic arguments and the arguments in "What Failure Looks Like" share this common core. Mostly, the challenge is to explain why it would be hard to ensure that there wouldn't be subtle-but-extremely-important divergences; there are different possible ways of doing this. For example: Although an expectation of discontinuous (or at least very fast) progress is a key part of the classic arguments, I don't consider it part of the intuitive core; the "What Failure Looks Like" picture doesn't necessarily rely on it.

I'm not sure if there's actually a good way to take the core intuition and turn it into a more rigorous/detailed/compelling argument that really works. But I do feel that there's something to the intuition; I'll probably still feel like there's something to the intuition, even if I end up feeling like the newer arguments have major issues too.

[[Edit: An alternative intuitive core, which I sort of gesture at in the interview, would simply be: "AI safety and alignment issues exist today. In the future, we'll have crazy powerful AI systems with crazy important responsibilities. At least the potential badness of safety and alignment failures should scale up with these systems' power and responsibility. Maybe it'll actually be very hard to ensure that we avoid the worst-case failures."]]

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T22:45:05.771Z · score: 3 (2 votes) · EA · GW

In brief, I do actually feel pretty positively.

Even if governments aren't doing a lot of important AI research "in house," and private actors continue to be the primary funders of AI R&D, we should expect governments to become much more active if really serious threats to security start to emerge. National governments are unlikely to be passive, for example, if safety/alignment failures become increasingly damaging -- or, especially, if existentially bad safety/alignment failures ever become clearly plausible. If any important institutions, design decisions, etc., regarding AI get "locked in," then I also expect governments to be heavily involved in shaping these institutions, making these decisions, etc. And states are, of course, the most important actors for many concerns having to do with political instability caused by AI. Finally, there are also certain potential solutions to risks -- like creating binding safety regulations, forging international agreements, or plowing absolutely enormous amounts of money into research projects -- that can't be implemented by private actors alone.

Basically, in most scenarios where AI governance work turns out to be really useful from a long-termist perspective -- because there are existential safety/alignment risks, because AI causes major instability, or because there are opportunities to "lock in" key features of the world -- I expect governments to really matter.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T22:10:12.028Z · score: 3 (2 votes) · EA · GW

I don't have a single top pick; I think this will generally depend on a person's particular interests, skills, and "career capital."

I do just want to say, though, that I don't think it's at all necessary to have a strong technical background to do useful AI governance work. For example, if I remember correctly, most of the research topics discussed in the "AI Politics" and "AI Ideal Governance" sections of Allan Dafoe's research agenda don't require a significant technical background. A substantial portion of people doing AI policy/governance/ethics research today also have a primarily social science or humanities background.

Just as one example that's salient to me, because I was a co-author on it, I don't think anything in this long report on distributing the benefits of AI required substantial technical knowledge or skills.

(That being said, I do think it's really important for pretty much anyone in the AI governance space to understand at least the core concepts of machine learning. For example, it's important to know things like the difference between "supervised" and "unsupervised" learning, the idea of stochastic gradient descent, the idea of an "adversarial example," and so on. Fortunately, I think this is pretty do-able even without a STEM background; it's mostly the concepts, rather than the math, that are important. Certain kinds of research or policy work certainly do require more in-depth knowledge, but a lot of useful work doesn't.)

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T21:39:29.911Z · score: 5 (3 votes) · EA · GW

I think that my description of the thesis (and, actually, my own thinking on it) is a bit fuzzy. Nevertheless, here's roughly how I'm thinking about it:

First, let's say that an agent has the "goal" of doing X if it's sometimes useful to think of the system as "trying to do X." For example, it's sometimes useful to think of a person as "trying" to avoid pain, be well-liked, support their family, etc. It's sometimes useful to think of a chess program as "trying" to win games of chess.

Agents are developed through a series of changes. In the case of a "hand-coded" AI system, the changes would involve developers adding, editing, or removing lines of code. In the case of an RL agent, the changes would typically involve a learning algorithm updating the agent's policy. In the case of human evolution, the changes would involve genetic mutations.

If the "process orthogonality thesis" were true, then this would mean that we can draw a pretty clean line between between "changes that affect an agent's capabilities" and "changes that affect an agent's goals." Instead, I want to say that it's really common for changes to affect both capabilities and goals. In practice, we can't draw a clean line between "capability genes" and "goal genes" or between "RL policy updates that change goals" and "RL policy updates that change capabilities." Both goals and capabilities tend to take shape together.

That being said, it is true that some changes do, intuitively, mostly just affect either capabilities or goals. I wouldn't be surprised, for example, if it's possible to introduce a minus sign somewhere into Deep Blue's code and transform it into a system that looks like it's trying to lose at chess; although the system will probably be less good at losing than it was at winning, it may still be pretty capable. So the processes of changing a system's capabilities and changing its goals can still come apart to some degree.
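Here is a tiny, self-contained illustration of that point, using a toy 'game' rather than Deep Blue's actual code: the search machinery supplies the capability, and the goal is concentrated in the sign applied to the evaluation function.

```python
# Toy illustration: a one-ply search over a trivial 'game' in which a position is a
# number, a move adds to it, and the evaluation is just the resulting total.
def choose_move(position, legal_moves, apply_move, evaluate, sign=+1):
    """Pick the move whose resulting position scores best under sign * evaluate.
    The same machinery 'tries to win' with sign=+1 and 'tries to lose' with sign=-1."""
    return max(legal_moves(position),
               key=lambda move: sign * evaluate(apply_move(position, move)))

legal_moves = lambda pos: [1, 2, 3]
apply_move = lambda pos, move: pos + move
evaluate = lambda pos: pos

assert choose_move(0, legal_moves, apply_move, evaluate, sign=+1) == 3  # seeks high scores
assert choose_move(0, legal_moves, apply_move, evaluate, sign=-1) == 1  # same code, flipped goal
```

In a trained system, by contrast, there is usually no single sign to flip, which is part of why I think goals and capabilities tend to take shape together.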

It's also possible to do fundamental research and engineering work that is useful for developing a wide variety of systems. For example, hardware progress has, in general, made it easier to develop highly competent RL agents in all sorts of domains. But, when it comes time to train a new RL agent, its goals and capabilities will still take shape together.

(Hope that clarifies things at least a bit!)

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T23:00:07.127Z · score: 6 (4 votes) · EA · GW

Yes, but they're typically invite-only.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T22:49:31.498Z · score: 4 (3 votes) · EA · GW

Interesting. So you generally expect (well, with 50-75% probability) AI to become a significantly bigger deal, in terms of productivity growth, than it is now? I have not looked into this in detail but my understanding is that the contribution of AI to productivity growth right now is very small (and less than electricity).

If yes, what do you think causes this acceleration? It could simply be that AI is early-stage right now, akin to electricity in 1900 or earlier, and the large productivity gains arise when key innovations diffuse through society on a large scale. (However, many forms of AI are already widespread.) Or it could be that progress in AI itself accelerates, or perhaps linear progress in something like "general intelligence" translates to super-linear impact on productivity.

I mostly have in mind the idea that AI is "early-stage," as you say. The thought is that "general purpose technologies" (GPTs) like electricity, the steam engine, the computer, and (probably) AI tend to have very delayed effects.

For example, there was really major progress in computing in the middle of the 20th century, and lots of really major inventions throughout the 70s and 80s, but computers didn't have a noticeable impact on productivity growth until the 90s. The first serious electric motors were developed in the mid-19th century, but electricity didn't have a big impact on productivity until the early 20th. There was also a big lag associated with steam power; it didn't really matter until the middle of the 19th century, even though the first steam engines were developed centuries earlier.

So if AI takes several decades to have a large economic impact, this would be consistent with analogous cases from history. It can take a long time for the technology to improve, for engineers to get trained up, for complementary inventions to be developed, for useful infrastructure to be built, for organizational structures to get redesigned around the technology, etc. I don't think it'd be very surprising if 80 years was enough for a lot of really major changes to happen, especially since the "time to impact" for GPTs seems to be shrinking over time.

Then I'm also factoring in the additional possibility that there will be some unusually dramatic acceleration, which would distinguish AI from most earlier GPTs.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T22:23:41.122Z · score: 3 (2 votes) · EA · GW

I would strongly consider donating to the long-term investment fund. (But I haven't thought enough about this to be sure.)

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T22:19:24.420Z · score: 8 (4 votes) · EA · GW

Toby's estimate for "unaligned artificial intelligence" is the only one that I meaningfully disagree with.

I would probably give lower numbers for the other anthropogenic risks as well, since it seems really hard to kill virtually everyone, and since the historical record suggests that permanent collapse is unlikely. (Complex civilizations were independently developed multiple times; major collapses, like the Bronze Age Collapse or fall of the Roman Empire, were reversed after a couple thousand years; it didn't take that long to go from the Neolithic Revolution to the Industrial Revolution; etc.) But I haven't thought enough about civilizational recovery or, for example, future biological weapons to feel firm in my higher level of optimism.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T22:09:46.276Z · score: 4 (3 votes) · EA · GW

Those numbers sound pretty reasonable to me, but, since they're roughly my own credences, it's probably unsurprising that I'm describing them as "pretty reasonable" :)

On the other hand, depending on what counts as being "convinced" of the classic arguments, I think it's plausible they actually support a substantially higher probability. I certainly know that some people assign a significantly higher than 10% chance to an AI-based existential catastrophe this century. And I believe that Toby's estimate, for example, involved weighing up different possible views.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T21:53:18.348Z · score: 2 (2 votes) · EA · GW

I don't currently give them very much weight.

It seems unlikely to me that hardware progress -- or, at least, practically achievable hardware progress -- will turn out to be sufficient for automating away all the tasks people can perform. If both hardware progress and research effort instead play similarly fundamental roles, then focusing on only a single factor (hardware) can only give us pretty limited predictive power.

Also, to a lesser extent: Even if it is true that compute growth is the fundamental driver of AI progress, I'm somewhat skeptical that we could predict the necessary/sufficient amount of compute very well.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T21:37:29.365Z · score: 3 (2 votes) · EA · GW

I don't think it's had a significant impact on my views about the absolute likelihood or tractability of other existential risks. I'd be interested if you think it should have, though!

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T21:33:37.957Z · score: 3 (2 votes) · EA · GW

Partly, I had in mind a version of the astronomical waste argument: if you think that we should basically ignore the possibility of preventing extinction or premature stagnation (e.g. for Pascal's mugging reasons), and you're optimistic about where the growth process is bringing us, then maybe we should just try to develop an awesome technologically advanced civilization as quickly as possible so that more people can ultimately live in it. IIRC Tyler Cowen argues for something at least sort of in this ballpark, in Stubborn Attachments. I think you'd need pretty specific assumptions to make this sort of argument work, though.

Jumping the growth process forward can also reduce some existential risks. The risk of humanity getting wiped out by natural disasters, like asteroids, probably gets lower the more technologically sophisticated we become; so, for example, kickstarting the Industrial Revolution earlier would have meant a shorter "time of peril" for natural risks. Leo Aschenbrenner's paper "Existential Risk and Growth" considers a more complicated version of this argument in the context of anthropogenic risks, which takes into account the fact that growth can also contribute to these risks.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T21:01:28.646Z · score: 6 (4 votes) · EA · GW

In brief, I feel positively about these broader attempts!

It seems like some of these broad efforts could be useful, instrumentally, for reducing a number of different risks (by building up the pool of available talent, building connections, etc.) The more unsure we are about which risks matter most, the more valuable broad capacity-building efforts are, as well.

It's also possible that some shifts in values, institutions, or ideas could actually be long-lasting. (This is something that Will MacAskill, for example, is currently interested in.) If this is right, then I think it's at least conceivable that trying to positively influence future values/institutions/ideas is more important than reducing the risk of global catastrophes: the goodness of different possible futures might vary greatly.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T17:12:38.481Z · score: 14 (7 votes) · EA · GW

I actually haven't seen The Boss Baby. A few years back, this ad was on seemingly all of the buses in Oxford for a really long time. Something about them made a lasting impression on me. Maybe it was the smug look on the boss baby's face.

Reviewing it purely on priors, though, I'll give it a 3.5 :)

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T16:29:27.673Z · score: 19 (13 votes) · EA · GW

I'm not sure how unpopular these actually are, but a few at least semi-uncommon views would be:

  • I'm pretty sympathetic to non-naturalism, in the context of both normativity and consciousness

  • Controlling for tractability, I think it's probably more important to improve the future (conditional on humanity not going extinct) than to avoid human extinction. (The gap between a mediocre future or bad future and the best possible future is probably vast.)

  • I don't actually know what my credence is here, since I haven't thought much about the issue, but I'm probably more concerned about growth slowing down and technological progress stagnating than the typical person in the community

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T15:02:58.465Z · score: 3 (2 votes) · EA · GW

Thanks so much for letting me know! I'm really glad to hear :)

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T15:00:21.820Z · score: 10 (4 votes) · EA · GW

What is your overall probability that we will, in this century, see progress in artificial intelligence that is at least as transformative as the industrial revolution?

I think this is a little tricky. The main way in which the Industrial Revolution was unusually transformative is that, over the course of the IR, there were apparently unusually large pivots in several important trendlines. Most notably, GDP-per-capita began to increase at a consistently much higher rate. In more concrete terms, though, the late nineteenth and early twentieth centuries probably included even greater technological transformations.

From David Weil's growth textbook (pg. 265-266):

Given these two observations—that growth during the Industrial Revolution was not particularly fast and that growth did not slow down when the Industrial Revolution ended—what was really so revolutionary about the period? There are two answers. First, the technologies introduced during the Industrial Revolution were indeed revolutionary, but their immediate impact on economic growth was small because they were initially confined to a few industries. More significantly, the Industrial Revolution was a beginning. Rapid technological change, the replacement of old production processes with new ones, the continuous introduction of new goods—all of these processes that we take for granted today got their start during the Industrial Revolution. Although the actual growth rates achieved during this period do not look revolutionary in retrospect, the pattern of continual growth that began then was indeed revolutionary in contrast to what had come before.

I think it's a bit unclear, then, how to think about AI progress that's at least as transformative as the IR. If economic growth rates radically increase in the future, then we might apply the label "transformative AI" to the period where the change in growth rates becomes clear. But it's also possible that growth rates won't ultimately go up that much. Maybe the trend in the labor force participation rate is the one to look at, since there's a good chance it will eventually decline to nearly zero; but it's also possible the decline will be really protracted, without a particularly clean pivot.

None of this is an answer to your question, of course. (I will probably circle back and try to give you a probability later.) But I am sort of wary of "transformative AI" as a forecasting target; if I was somehow given access to a video recording of the future of AI, I think it's possible I would have a lot of trouble labeling the decade where "AI progress as transformative as the Industrial Revolution" has been achieved.

What is your probability for the more modest claim that AI will be at least as transformative as, say, electricity or railroads?

Also a little bit tricky, partly because electricity underlies AI. As one operationalization, then, suppose we were to ask an economist in 2100: "Do you think that the counterfactual contribution of AI to American productivity growth between 2010 and 2100 was at least as large as the counterfactual contribution of electricity to American productivity growth between 1900 and 1940?" I think that the economist would probably agree -- let's say, 50% < p < 75% -- but I don't have a very principled reason for thinking this and might change my mind if I thought a bit more.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T14:30:39.023Z · score: 4 (3 votes) · EA · GW

From a long-termist perspective, I think that the more gradual AI progress is, the more important concerns about "bad attractor states" and "instability" become relative to concerns about AI safety/alignment failures. (See slides).

I think it is probably true, though, that AI safety/alignment risk is more tractable than these other risks. To some extent, the solution to safety risk is for enough researchers to put their heads down and work really hard on technical problems; there's probably some amount of research effort that would be enough, even if this quantity is very large. In contrast, the only way to avoid certain risks associated with "bad attractor states" might be to establish stable international institutions that are far stronger than any that have come before; there might be structural barriers, here, that no amount of research effort or insight would be enough to overcome.

I think it's at least plausible that the most useful thing for AI safety and governance researchers to do is ultimately to focus on brain-in-a-box-ish AI risk scenarios, even if they're not very likely relative to other scenarios. (This would still entail some amount of work that's useful for multiple scenarios; there would also be instrumental reasons, related to skill-building and reputation-building, to work on present-day challenges.) But I have some not-fully-worked-out discomfort with this possibility.

One thing that I do feel comfortable saying is that more effort should go into assessing the tractability of different influence pathways, the likelihood of different kinds of risks beyond the classic version of AI risk, etc.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T14:10:08.973Z · score: 11 (5 votes) · EA · GW

I would be really interested in you writing on that!

It's a bit hard to say what the specific impact would be, but beliefs about the magnitude of AI risk of course play at least an implicit role in lots of career/research-focus/donation decisions within the EA community; these beliefs also affect the extent to which broad EA orgs focus on AI risk relative to other cause areas. And I think that people's beliefs about the Sudden Emergence hypothesis at least should have a large impact on their level of doominess about AI risk; I regard it as one of the biggest cruxes. So I'd at least be hopeful that, if everyone's credences in Sudden Emergence changed by a factor of three, this would have some sort of impact on the portion of EA attention devoted to AI risk. I think that credences in the Sudden Emergence hypothesis should also have an impact on the kinds of risks/scenarios that people within the AI governance and safety communities focus on.

I don't, though, have a much more concrete picture of the influence pathway.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T13:55:58.265Z · score: 5 (3 votes) · EA · GW

I think the work is mainly useful for EA organizations making cause prioritization decisions (how much attention should they devote to AI risk relative to other cause areas?) and young/early-stage people deciding between different career paths. The idea is mostly to help clarify and communicate the state of arguments, so that more fully informed and well-calibrated decisions can be made.

A couple other possible positive impacts:

  • Developing and shifting to improved AI risk arguments -- and publicly acknowledging uncertainties/confusions -- may, at least in the long run, cause other people to take the EA community and existential-risk-oriented AI safety communities more seriously. As one particular point, I think that a lot of vocal critics (e.g. Pinker) are mostly responding to the classic arguments. If the classic arguments actually have significant issues, then it's good to acknowledge this; if other arguments (e.g. these) are more compelling, then it's good to work them out more clearly and communicate them more widely. As another point, I think that sharing this kind of work might reduce perceptions that the EA community is more group-think-y/unreflective than it actually is. I know that people have sometimes pointed to my EAG talk from a couple years back, for example, in response to concerns that the EA community is too uncritical in its acceptance of AI risk arguments.

  • I think that it's probably useful for the AI safety community to have a richer and more broadly shared understanding of different possible "AI risk threat models"; presumably, this would feed into research agendas and individual prioritization decisions to some extent. I think that work that analyzes newer AI risk arguments, especially, would be useful here. For example, it seems important to develop a better understanding of the role that "mesa-optimization" plays in driving existential risk.

(There's also the possibility of negative impact, of course: focusing too much on the weaknesses of various arguments might cause people to downweight or de-prioritize risks more than they actually should.)

I haven't thought very much about the timescales on which this kind of work is useful, but I think it's plausible that the delayed impact on prioritization and perception is more important than the immediate impact.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T12:55:08.069Z · score: 20 (10 votes) · EA · GW

I feel that something went wrong, epistemically, but I'm not entirely sure what it was.

My memory is that, a few years ago, there was a strong feeling within the longtermist portion of the EA community that reducing AI risk was far-and-away the most urgent problem. I remember there being a feeling that the risk was very high, that short timelines were more likely than not, and that the emergence of AGI would likely be a sudden event. I remember it being an open question, for example, whether it made sense to encourage people to get ML PhDs, since, by the time they graduated, it might be too late. There was also, in my memory, a sense that all existing criticisms of the classic AI risk arguments were weak. It seemed plausible that the longtermist EA community would pretty much just become an AI-focused community. Strangely, I'm a bit fuzzy on what my own views were, but I think they were at most only a bit out-of-step.

This might be an exaggerated memory. The community is also, obviously, large enough for my experience to be significantly non-representative. (I'd be interested in whether the above description resonates with anyone else.) But, in any case, I am pretty confident that there's been a real shift in average views over the past three years: credences in discontinuous progress and very short timelines have decreased; people's concerns about AI have become more diverse; a broad portfolio approach to long-termism has become more popular; and, overall, there's less of a doom-y vibe.

One explanation for the shift, if it's real, is that the community has been rationally and rigorously responding to available evidence, and the available evidence has simply changed. I don't think this could be the whole explanation, though. As I wrote in response to another question, many of the arguments for continuous AI progress, which seem to have had a significant impact over the past couple years, could have been published more than a decade ago -- and, in some cases, were. An awareness of the differences between the ML paradigm and the "good-old-fashioned-AI" (GOFAI) paradigm has been another source of optimism, but ML had already largely overtaken GOFAI by the time Superintelligence was published. I also don't think that much novel evidence for long timelines has emerged over the past few years, beyond the fact that we still don’t have AGI.

It's possible that the community's updated views, including my own updated views, are wrong: but even in this case, there needs to have been an epistemic mishap somewhere down the line. (The mishap would just be more recent.) I'm unfortunately pretty unsure of what actually happened. I do think that more energy should have gone into critiquing the classic AI risk arguments, porting them into the ML paradigm, etc., in the few years immediately after Superintelligence was published, and I do think that there's been too much epistemic deference within the community. As Asya pointed out in a comment on this post, I think that misperception has also been an important issue: people have often underestimated how much uncertainty and optimism prominent community members actually have about AI risk. Another explanation -- although this isn’t a very fundamental explanation -- is that, over the past few years, many people with less doom-y views have entered the community and had an influence. But I’m still confused, overall.

I think that studying and explaining the evolution of views within the community would be an interesting and valuable project in its own right.

[[As a side note, partly in response to below comment: It’s possible that the community has still made pretty much the right prioritization decisions over the past few years, even if there have been significant epistemic mistakes. Especially since AI safety/governance were so incredibly neglected in 2017, I’m less confident that the historical allocation of EA attention/talent/money to AI risk has actually substantially overshot the optimal level. We should still be nervous, though, if it turns out that the right decisions were made despite significantly miscalibrated views within the community.]]

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-18T23:34:54.140Z · score: 5 (5 votes) · EA · GW

How entrenched do you think are old ideas about AI risk in the AI safety community? Do you think that it's possible to have a new paradigm quickly given relevant arguments?

I actually don't think they're very entrenched!

I think that, today, most established AI safety researchers have fairly different visions of the risks from AI -- and of the problems that they need to solve -- from the primary vision discussed in Superintelligence and in classic Yudkowsky essays. When I've spoken to AI safety researchers about issues with the "classic" arguments, I've encountered relatively low levels of disagreement. Arguments that heavily emphasize mesa-optimization or arguments that are more in line with this post seem to be more influential now. (The safety researchers I know aren't a random sample, though, so I'd be interested in whether this sounds off to anyone in the community.)

I think that "classic" ways of thinking about AI risk are now more prominent outside the core AI safety community than they are within it. I think that they have an important impact on community beliefs about prioritization, on individual career decisions, etc., but I don't think they're heavily guiding most of the research that the safety community does today.

(Unfortunately, I probably don't make this clear in the podcast.)

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-18T23:15:50.752Z · score: 15 (7 votes) · EA · GW

I agree with Aidan's suggestion that Human Compatible is probably the best introduction to risks from AI (for both non-technical readers and readers with CS backgrounds). It's generally accessible and engagingly written, it's up-to-date, and it covers a number of different risks. Relative to many other accounts, I think it also has the virtue of focusing less on any particular development scenario and expressing greater optimism about the feasibility of alignment. If someone's too pressed for time to read Human Compatible, the AI risk chapter in The Precipice would then be my next best bet. Another very readable option, mainly for non-CS people, would be the AI risk chapters in The AI Does Not Hate You: I think they may actually be the cleanest distillation of the "classic" AI risk argument.

For people with CS backgrounds who are hoping for a more technical understanding of the problems safety/alignment researchers are trying to solve, I think that Concrete Problems in AI Safety, Scalable Agent Alignment Via Reward Modeling, and Rohin Shah's blog post sequence on "value learning" are especially good picks, although none of these resources frames safety/alignment research as something that's intended to reduce existential risks.

I think that AI Governance: A Research Agenda would be the natural starting point for social scientists, especially if they have a substantial interest in risks beyond alignment.

Of course, for anyone interested in digging into arguments around AI risk, I think that Superintelligence is still a really important read. (Even beyond its central AI risk argument, it also has a ton of interesting ideas on the future of intelligent life, ethics, and the strategic landscape that other resources don't.) But it's not where I think people should start.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-18T22:37:24.499Z · score: 3 (3 votes) · EA · GW

You say you disagree with the idea that the day when we create AGI acts as a sort of 'deadline', and if we don't figure out alignment before then we're screwed.

A lot of your argument is about how increasing AI capability and alignment are intertwined processes, so that as we increase an AI's capabilities we're also increasing its alignment. You discuss how it's not like we're going to create a super powerful AI and then give it a module with its goals at the end of the process.

I agree with that, but I don't see it as substantially affecting the Bostrom/Yudkowsky arguments.

Isn't the idea that we would have something that seemed aligned as we were training it (based on this continuous feedback we were giving it), but then only when it became extremely powerful we'd realize it wasn't actually aligned?

I think there are a couple different bits to my thinking here, which I sort of smush together in the interview.

The first bit is that, when developing an individual AI system, its goals and capabilities/intelligence tend to take shape together. This is helpful, since it increases the odds that we'll notice issues with the system's emerging goals before they result in truly destructive behavior. Even if someone doesn't expect a purely dust-minimizing house-cleaning robot to be a bad idea, for example, they'll quickly realize their mistake as they train the system. The mistake will be clear well before the point when the simulated robot learns how to take over the world; it will probably be clear even before the point when the robot learns how to operate door knobs.

The second bit is that there are many contexts in which pretty much any possible hand-coded reward function will either quickly reveal itself as inappropriate or be obviously inappropriate before the training process even begins. This means that sane people won’t proceed in developing and deploying things like house-cleaning robots or city planners until they’ve worked out alignment techniques to some degree; they’ll need to wait until we’ve moved beyond “hand-coding” preferences, toward processes that more heavily involve ML systems learning what behaviors users or developers prefer.
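To make the point about hand-coded reward functions a bit more concrete, here is a minimal, purely hypothetical sketch. Everything in it (the state fields, the reward functions, the penalty weights) is an illustrative invention rather than a description of any real system; the point is just that a reward defined only as "dust removed" is obviously misspecified, and that patching it by hand means enumerating side effects one at a time, which is part of why learning preferences from users or developers looks more promising for open-ended tasks.

```python
# Hypothetical hand-coded rewards for a simulated house-cleaning agent.
# All state fields ("dust_level", "objects_broken_this_step") are made up
# for illustration.

def naive_cleaning_reward(prev_state, state):
    # Rewards only the dust removed this step. An agent can score well by
    # scattering dust and re-collecting it, or by ignoring everything else
    # the designer cares about; failures like these tend to surface early
    # in training, well before the agent is highly capable.
    return prev_state["dust_level"] - state["dust_level"]

def patched_cleaning_reward(prev_state, state):
    # A slightly less naive version has to list side effects explicitly.
    # Each hand-written penalty covers one known failure mode, which does
    # not scale to open-ended tasks like city planning or engineering.
    dust_cleaned = prev_state["dust_level"] - state["dust_level"]
    breakage_penalty = 10.0 * state["objects_broken_this_step"]
    mess_penalty = 5.0 * max(0.0, state["dust_level"] - prev_state["dust_level"])
    return dust_cleaned - breakage_penalty - mess_penalty
```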

It’s still conceivable that, even given these considerations, people will accidentally develop AI systems that commit omnicide (or cause similarly grave harms). But the likelihood at least goes down. First, it needs to be the case that (a) training processes that use apparently promising alignment techniques will still converge on omnicidal systems. Second, it needs to be the case that (b) people won’t notice that these training processes have serious issues until they’ve actually made omnicidal AI systems.

I’m skeptical of both (a) and (b). My intuition, regarding (a), is that some method that involves learning human preferences would need to be really terrible to result in systems that are doing things on the order of mass murder. Although some arguments related to mesa-optimization may push against this intuition.

Then my intuition, regarding (b), is that the techniques would likely display serious issues before anyone creates a system capable of omnicide. For example, if these techniques tend to induce systems to engage in deceptive behaviors, I would expect there to be some signs that this is an issue early on; I would expect some failed or non-catastrophic acts of deception to be observed first. However, again, my intuition is closely tied to my expectation that progress will be pretty continuous. A key thing to keep in mind about highly continuous scenarios is that there’s not just one single consequential ML training run, where the ML system might look benign at the start but turn around and take over the world at the end. We’re instead talking about countless training runs, used to develop a wide variety of different systems of intermediate generality and competency, deployed across a wide variety of domains, over a period of multiple years. We would have many more opportunities to notice issues with available techniques than we would in a “brain in a box” scenario. In a more discontinuous scenario, the risk would presumably be higher.

This seems to be a disagreement about "how hard is AI alignment?".

This might just be a matter of semantics, but I don’t think “how hard is AI alignment?” is the main question I have in mind here. I’m mostly thinking about the question of whether we’ll unwittingly create existentially damaging systems, if we don’t work out alignment techniques first. For example, if we don’t know how to make benign house cleaners, city planners, or engineers by year X, will we unwittingly create omnicidal systems instead? Certainly, the harder it is to work out alignment techniques, the higher the risks become. But it’s possible for accident risk to be low even if alignment techniques are very hard to work out.

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-18T18:51:59.511Z · score: 3 (2 votes) · EA · GW

It seems that even in a relatively slow takeoff, you wouldn't need that big of a discontinuity to result in a singleton AI scenario. If the first AGI that's significantly more generally intelligent than a human is created in a world where lots of powerful narrow AIs exist, wouldn't having a super smart thing at the center of control of a bunch of narrow AI tools plausibly be way more powerful than having human brains at the center of that control?

It seems plausible that in a "smooth" scenario the time between when the first group created AGI and the second group creating an equally powerful one could be months apart. Do you think a months-long discontinuity is not enough for an AGI to pull sufficiently ahead?

I would say that, in a scenario with relatively "smooth" progress, there's not really a clean distinction between "narrow" AI systems and "general" AI systems; the line between "we have AGI" and "we don't have AGI" is either a bit blurry or a bit arbitrarily drawn. Even if the management/control of large collections of AI systems is eventually automated, I would also expect this process of automation to unfold over time rather than happening in a single go.

In general, the smoother things are, the harder it is to tell a story where one group gets out way ahead of others. Although I'm unsure just how "unsmooth" things need to be for this outcome to be plausible.

Even if multiple groups create AGIs within a short time, isn't having a bunch of unaligned AGIs all trying to get power at the same time also an existential risk? It doesn't seem clear that they'd automatically keep each other in check. One might simply be better at growing or better at sabotaging other AIs. Or if they reach a stalemate they might start cooperating with each other to achieve unaligned goals as a compromise.

I think that if there were multiple AGI or AGI-ish systems in the world, and most of them were badly misaligned (e.g. willing to cause human extinction for instrumental reasons), this would present an existential risk. I wouldn't count on them balancing each other out, in the same way that endangered gorilla populations shouldn't count on warring human communities to balance each other out.

I think the main benefits of smoothness have to do with risk awareness (e.g. by observing less catastrophic mishaps) and, especially, with opportunities for trial-and-error learning. At least when the concern is misalignment risk, I don't think of the decentralization of power as a really major benefit in its own right: the systems in this decentralized world still mostly need to be safe.

My model is: if you have a central control unit (a human brain, or group of human brains) who is deciding how to use a bunch of narrow AIs, then if you replace that central control unit with one that is more intelligent / fast acting, the whole system will be more effective.

The only way I can think of where that wouldn't be true would be if the general AI required so many computational resources that the narrow AIs that were acting as tools of the AGI were crippled by lack of resources. Is that what you're imagining?

I think it's plausible that especially general systems would be especially useful for managing the development, deployment, and interaction of other AI systems. I'm not totally sure this is the case, though. For example, at least in principle, I can imagine an AI system that is good at managing the training of other AI systems -- e.g. deciding how much compute to devote to different ongoing training processes -- but otherwise can't do much else.
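As a toy illustration of what such a narrow "training manager" might look like, here is a hypothetical sketch; the function, its inputs, and the epsilon-greedy rule are all illustrative choices of mine, not a description of any real scheduler. The only thing this system can do is redistribute a compute budget toward training runs that have shown recent improvement, which is meant to suggest that managing other AI systems' training need not require much generality.

```python
import random

def allocate_compute(recent_improvement, total_gpu_hours, epsilon=0.1):
    """Split a compute budget across training runs (hypothetical sketch).

    recent_improvement: dict mapping run_id -> recent validation improvement
    total_gpu_hours: budget to distribute in this scheduling round
    """
    runs = list(recent_improvement)
    if random.random() < epsilon:
        # Occasionally explore: spread compute evenly across all runs.
        return {run: total_gpu_hours / len(runs) for run in runs}
    # Otherwise exploit: allocate in proportion to recent improvement.
    total = sum(max(v, 0.0) for v in recent_improvement.values()) or 1.0
    return {run: total_gpu_hours * max(recent_improvement[run], 0.0) / total
            for run in runs}

# Example with three hypothetical training runs:
print(allocate_compute({"run_a": 0.02, "run_b": 0.005, "run_c": 0.0}, 96.0))
```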

Comment by ben-garfinkel on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-18T14:46:31.867Z · score: 6 (4 votes) · EA · GW

Hi Elliot,

Thanks for all the questions and comments! I'll answer this one in stages.

On your first question:

Do you agree that the goodness of this analogy is roughly proportional to how slow our AI takeoff is? For instance if the first AGI ever created becomes more powerful than the rest of the world, then it seems that anyone who influenced the properties of this AGI would have a huge impact on the future.

I agree with this.

To take the fairly extreme case of the Neolithic Revolution, I think that there are at least a few reasons why groups at the time would have had trouble steering the future. One key reason is that the world was highly "anarchic," in the international relations sense of the term: there were many different political communities, with divergent interests and a limited ability to either coerce one another or form credible commitments. One result of anarchy is that, if the adoption of some technology or cultural/institutional practice would give some group an edge, then it's almost bound to be adopted by some group at some point: other groups will need to either lose influence or adopt the technology/innovation to avoid subjugation. This explains why the emergence and gradual spread of agricultural civilization was close to inevitable, even though (there's some evidence) people often preferred the hunter-gatherer way of life. There was an element of technological or economic determinism that put the course of history outside of any individual group's control (at least to a significant degree).

Another issue, in the context of the Neolithic Revolution, is that norms, institutions, etc., tend to shift over time, even if there aren't very strong selection pressures. This was even more true before the advent of writing. So we do have a few examples of religious or philosophical traditions that have stuck around, at least in mutated forms, for a couple thousand years; but this is unlikely, in any individual case, and would have been even more unlikely 10,000 years ago. At least so far, we also don't have examples of more formal political institutions (e.g. constitutions) that have largely stuck around for more than a few thousand years either.

There are a couple reasons why AI could be different. The first reason is that -- under certain scenarios, especially ones with highly discontinuous and centralized progress -- it's perhaps more likely that one political community will become much more powerful than all others and thereby make the world less "anarchic." Another is that, especially if the world is non-anarchic, values and institutions might naturally be more stable in a heavily AI-based world. It seems plausible that humans will eventually step almost completely out of the loop, even if they don't do this immediately after extremely high levels of automation are achieved. At this point, if one particular group has disproportionate influence over the design/use of existing AI systems, then that one group might indeed have a ton of influence over the long-run future.