Posts

Altruistic equity allocation 2019-10-16T05:54:49.426Z · score: 79 (30 votes)
Ought: why it matters and ways to help 2019-07-26T01:56:34.037Z · score: 52 (24 votes)
Donor lottery details 2017-01-11T00:52:21.116Z · score: 22 (22 votes)
Integrity for consequentialists 2016-11-14T20:56:27.585Z · score: 39 (35 votes)
What is up with carbon dioxide and cognition? An offer 2016-04-06T01:18:03.612Z · score: 11 (13 votes)
Final Round of the Impact Purchase 2015-12-16T20:28:45.709Z · score: 4 (6 votes)
Impact purchase round 3 2015-06-16T17:16:12.858Z · score: 3 (3 votes)
Impact purchase: changes and round 2 2015-04-20T20:52:29.894Z · score: 3 (3 votes)
$10k of Experimental EA Funding 2015-02-25T19:54:29.881Z · score: 19 (19 votes)
Economic altruism 2014-12-05T00:51:44.715Z · score: 5 (7 votes)
Certificates of impact 2014-11-11T05:22:42.438Z · score: 29 (16 votes)
On Progress and Prosperity 2014-10-15T07:03:21.055Z · score: 34 (33 votes)
The best reason to give later 2013-06-14T04:00:31.000Z · score: 2 (1 votes)
Giving now vs. later 2013-03-12T04:00:04.000Z · score: 0 (2 votes)
Risk aversion and investment (for altruists) 2013-02-28T05:00:34.000Z · score: 3 (3 votes)
Why might the future be good? 2013-02-27T05:00:49.000Z · score: 2 (2 votes)
Replaceability 2013-01-22T05:00:52.000Z · score: 1 (1 votes)

Comments

Comment by paul_christiano on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-26T01:59:48.704Z · score: 15 (9 votes) · EA · GW
The conclusion I draw from this is that many EAs are probably worried about CC but are afraid to talk about it publicly because in CC you can get canceled for talking about CC, except of course to claim that it doesn't exist. (Maybe they won't be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly (c.f. "preference falsification"). That seems to already be the situation today.

It seems possible to me that many institutions (e.g. EA orgs, academic fields, big employers, all manner of random FB groups...) will become increasingly hostile to speech or (less likely) that they will collapse altogether.

That does seem important. I mostly don't think about this issue because it's not my wheelhouse (and lots of people talk about it already). Overall my attitude towards it is pretty similar to other hypotheses about institutional decline. I think people at EA orgs have way more reasons to think about this issue than I do, but it may be difficult for them to do so productively.

If someone convinced me to get more pessimistic about "cancel culture" then I'd definitely think about it more. I'd be interested in concrete forecasts if you have any. For example, what's the probability that making pro-speech comments would itself be a significant political liability at some point in the future? Will there be a time when a comment like this one would be a problem?

Looking beyond the health of existing institutions, it seems like most people I interact with are still quite liberal about speech, including a majority of people who I'd want to work with, socialize with, or take funding from. So hopefully the endgame boils down to freedom of association. Some people will run a strategy like "Censure those who don't censure others for not censuring others for problematic speech" and take that to its extreme, but the rest of the world will get along fine without them and it's not clear to me that the anti-speech minority has anything to do other than exclude people they dislike (e.g. it doesn't look like they will win elections).

in CC you can get canceled for talking about CC, except of course to claim that it doesn't exist. (Maybe they won't be canceled right away, but it will make them targets when cancel culture gets stronger in the future.)

I don't feel that way. I think that "exclude people who talk openly about the conditions under which we exclude people" is a deeply pernicious norm and I'm happy to keep blithely violating it. If a group excludes me for doing so, then I think it's a good sign that the time had come to jump ship anyway. (Similarly if there was pressure for me to enforce a norm I disagreed with strongly.)

I'm generally supportive of pro-speech arguments and efforts and I was glad to see the Harper's letter. If this is eventually considered cause for exclusion from some communities and institutions then I think enough people will be on the pro-speech side that it will be fine for all of us.

I generally try to state my mind if I believe it's important, don't talk about toxic topics that are unimportant, and am open about the fact that there are plenty of topics I avoid. If eventually there are important topics that I feel I can't discuss in public then my intention is to discuss them.

I would only intend to join an internet discussion about "cancellation" in particularly extreme cases (whether in terms of who is being canceled, severe object-level consequences of the cancellation, or the coercive rather than plausibly-freedom-of-association nature of the cancellation).

Comment by paul_christiano on Does Economic History Point Toward a Singularity? · 2020-09-09T20:03:38.530Z · score: 7 (4 votes) · EA · GW

Thanks, super helpful.

(I don't really buy an overall take like "It seems unlikely" but it doesn't feel that mysterious to me where the difference in take comes from. From the super zoomed out perspective 1200 AD is just yesterday from 1700AD, it seems like random fluctuations over 500 years are super normal and so my money would still be on "in 500 years there's a good chance that China would have again been innovating and growing rapidly, and if not then in another 500 years it's reasonably likely..." It makes sense to describe that situation as "nowhere close to IR" though. And it does sound like the super fast growth is a blip.)

Comment by paul_christiano on Does Economic History Point Toward a Singularity? · 2020-09-09T14:56:59.169Z · score: 7 (4 votes) · EA · GW

I took numbers from Wikipedia but have seen different numbers that seem to tell the same story although their quantitative estimates disagree a ton.

The first two numbers are both higher than growth rates could plausibly have been, in a sustained way, during any previous part of history (and the 0-1000AD one probably is as well), and they seem to be accelerating rather than returning to a lower mean (as must have happened during any historical period of similar growth).

My current view is that China was also historically unprecedented at that time and probably would have had an IR shortly after Europe. I totally agree that there is going to be some mechanistic explanation for why europe caught up with and then overtook china, but from the perspective of the kind of modeling we are discussing I feel super comfortable calling it noise (and expecting similar "random" fluctuations going forward that also have super messy contingent explanations).

Comment by paul_christiano on Does Economic History Point Toward a Singularity? · 2020-09-09T14:47:20.702Z · score: 12 (4 votes) · EA · GW

If one believed the numbers on Wikipedia, it seems like Chinese growth was also accelerating a ton and China was not really far behind on the IR, such that I wouldn't expect to be able to easily eyeball the differences.

If you are trying to model things at the level that Roodman or I are, the difference between 1400 and 1600 just isn't a big deal, the noise terms are on the order of 500 years at that point.

So maybe the interesting question is if and why scholars think that China wouldn't have had an IR shortly after Europe (i.e. within a few centuries, a gap small enough that it feels like you'd have to have an incredibly precise model to be justifiably super surprised).

Maybe particularly relevant: is the claimed population growth from 1700-1800 just catch-up growth to Europe? (More than doubling in 100 years! And over the surrounding time period the observed growth seems very rapid even if there are moderate errors in the numbers.) If it is, how does that work given claims that Europe wasn't so far ahead by 1700? If it isn't, then how does that not very strongly suggest incredible acceleration in China, given that it had very recently had some of the fastest growth in history and was then experiencing even more unprecedented growth? Is it a sequence of measurement problems that just happen to suggest acceleration?

Comment by paul_christiano on Does Economic History Point Toward a Singularity? · 2020-09-08T15:37:53.495Z · score: 15 (6 votes) · EA · GW
My model is that most industries start with fast s-curve like growth, then plateau, then often decline

I don't know exactly what this means, but it seems like most industries in the modern world are characterized by relatively continuous productivity improvements over periods of decades or centuries. The obvious examples to me are semiconductors and AI since I deal most with those. But it also seems true of e.g. manufacturing, agricultural productivity, batteries, construction costs. It seems like industries where the productivity vs time curve is a "fast S-curve" are exceptional, which I assume means we are somehow reading the same data differently. What kind of industries would you characterize this way?

(I agree that e.g. "adoption" is more likely to be an s-curve given that it's bounded, but productivity seems like the analogy for growth rates.)

Comment by paul_christiano on Does Economic History Point Toward a Singularity? · 2020-09-08T15:24:51.649Z · score: 5 (3 votes) · EA · GW

It feels like you are drawing some distinction between "contingent and complicated" and "noise." Here are some possible distinctions that seem relevant to me but don't actually seem like disagreements between us:

  • If something is contingent and complicated, you can expect to learn about it with more reasoning/evidence, whereas if it's noise maybe you should just throw up your hands. Evidently I'm in the "learn about it by reasoning" category since I spend a bunch of time thinking about AI forecasting.
  • If something is contingent and complicated, you shouldn't count on e.g. the long-run statistics matching the noise distribution---there are unmodeled correlations (both real and subjective). I agree with this and think that e.g. the singularity date distributions (and singularity probability) you get out of Roodman's model are not trustworthy in light of that (as does Roodman).

So it's not super clear there's a non-aesthetic difference here.

If I was saying "Growth models imply a very high probability of takeoff soon" then I can see why your doc would affect my forecasts. But where I'm at from historical extrapolations is more like "maybe, maybe not"; it doesn't feel like any of this should change that bottom line (and it's not clear how it would change that bottom line) even if I changed my mind everywhere that we disagree.

"Maybe, maybe not" is still a super important update from the strong "the future will be like the recent past" prior that many people implicitly have and I might otherwise take very seriously. It also leads me to mostly dismiss arguments like "this is obviously not the most important century since most aren't." But it mostly means that I'm actually looking at what is happening technologically.

You may be responding to writing like this short post where I say "We have been in a period of slowing growth for the last forty years. That’s a long time, but looking over the broad sweep of history I still think the smart money is on acceleration eventually continuing, and seeing something like [hyperbolic growth]...". I stand by the claim that this is something like the modal guess---we've had enough acceleration that the smart money is on it continuing, and this seems equally true on the revolutions model. I totally agree that any specific thing is not very likely to happen, though I think it's my subjective mode. I feel fine with that post but totally agree it's imprecise and this is what you get for being short.

The story with fossil fuels is typically that there was a pre-existing economic efflorescence that supported England's transition out of an 'organic economy.' So it's typically a sort of tipping point story, where other factors play an important role in getting the economy to the tipping point.

OK, but if those prior conditions led to a great acceleration before the purported tipping point, then I feel like that's mostly what I want to know about and forecast.

Supposing we had accurate data, it seems like the best approach is running a regression that can accommodate either hyperbolic or exponential growth — plus noise — and then seeing whether we can reject the exponential hypothesis. Just noting that the growth rate must have been substantially higher than average within one particular millennium doesn’t necessarily tell us enough; there’s still the question of whether this is plausibly noise.

I don't think that's what I want to do. My question is, given a moment in history, what's the best way to guess whether and in how long there will be significant acceleration? If I'm testing the hypothesis "The amount of time before significant acceleration tends to be a small multiple of the current doubling time" then I want to look a few doublings ahead and see if things have accelerated, averaging over a doubling (etc. etc.), rather than do a regression that would indirectly test that hypothesis by making additional structural assumptions + would add a ton of sensitivity to noise.
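One minimal way to operationalize the "look a few doublings ahead" test is sketched below. The population series and the choice of n = 2 doublings are purely illustrative guesses at one reasonable version, not a real dataset or anyone's actual procedure.

```python
import math

# Illustrative population series (millions); not a real dataset.
years = [-10000, -5000, -2000, 0, 1000, 1500, 1700, 1900, 2000]
pops = [4, 10, 30, 170, 250, 430, 600, 1600, 6000]

def growth_rate(i, j):
    """Average annual growth rate between data points i and j."""
    return math.log(pops[j] / pops[i]) / (years[j] - years[i])

def doublings_ahead(i, n=2):
    """Index of the first data point at least 2**n times pops[i], or None."""
    target = pops[i] * 2 ** n
    for j in range(i + 1, len(pops)):
        if pops[j] >= target:
            return j
    return None

# For each date, compare the current growth rate to the rate a couple of
# doublings ahead (each rate averaged over one inter-point interval).
for i in range(len(years) - 2):
    j = doublings_ahead(i)
    if j is not None and j + 1 < len(pops):
        accelerated = growth_rate(j, j + 1) > growth_rate(i, i + 1)
        print(years[i], "accelerated" if accelerated else "not yet")
```

The point of structuring the test this way is that it asks directly about acceleration over a doubling, without the structural assumptions (or sensitivity to year-by-year noise) that a regression on annual growth rates would add.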

You don’t need a story about why they changed at roughly the same time to believe that they did change at roughly the same time (i.e. over the same few century period). And my impression is that that, empirically, they did change at roughly the same time. At least, this seems to be commonly believed.
I don’t think we can reasonably assume they’re independent. Economic histories do tend to draw causal arrows between several of these differences, sometimes suggesting a sort of chain reaction, although these narrative causal diagrams are admittedly never all that satisfying; there’s still something mysterious here. On the other hand, higher population levels strike me as a fairly unsatisfying underlying cause.

It looked like you were listing those things to help explain why you have a high prior in favor of discontinuities between industrial and agricultural societies. "We don't know why those things change together discontinuously, they just do" seems super reasonable (though whether that's true is precisely what's at issue). But it does mean that listing out those factors adds nothing to the a priori argument for discontinuity.

Indeed, if you think that all of those are relevant drivers of growth rates then all else equal I'd think you'd expect more continuous progress, since all you've done is rule out one obvious way that you could have had discontinuous progress (namely by having the difference be driven by something that had a good prima facie reason to change discontinuously, as in the case of the agricultural revolution) and now you'll have to posit something mysterious to get to your discontinuous change.

Comment by paul_christiano on Does Economic History Point Toward a Singularity? · 2020-09-08T15:09:35.106Z · score: 6 (4 votes) · EA · GW

I think Roodman's model implies a standard deviation of around 500-1000 years for IR timing starting from 1000AD, but I haven't checked. In general for models of this type it seems like the expected time to singularity is a small multiple of the current doubling time, with noise also being on the order of the doubling time.

The model clearly underestimates correlations and hence the variance here---regardless of whether we go in for "2 revolutions" or "randomly spread out" we can all agree that a stagnant doubling is more likely to be followed by another stagnant doubling and vice versa, but the model treats them as independent.
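A toy stochastic hyperbolic-growth simulation makes both claims concrete: the expected takeoff time is a small multiple of the initial doubling time, and i.i.d. per-step noise produces an unrealistically narrow spread. This is a sketch with made-up parameters and a made-up noise process, not Roodman's fitted model.

```python
import math
import random

def time_to_takeoff(p0=1.0, a=0.02, b=0.5, sigma=0.5, p_max=1000.0, seed=None):
    """Simulate dP/P ~ a * P**b per unit time, with multiplicative noise,
    and return the time at which P first exceeds p_max.
    All parameter values are illustrative, not fitted to anything."""
    rng = random.Random(seed)
    p, t, dt = p0, 0.0, 1.0
    while p < p_max:
        rate = a * p ** b  # hyperbolic: growth rate rises with the level
        # independent noise scaled to the current rate -- a crude stand-in
        # for the serially correlated noise a realistic model would need
        p *= math.exp((rate + sigma * rate * rng.gauss(0, 1)) * dt)
        t += dt
        if t > 1e5:  # guard against a (very unlikely) stalled run
            return None
    return t

times = [t for t in (time_to_takeoff(seed=i) for i in range(200)) if t is not None]
mean = sum(times) / len(times)
sd = math.sqrt(sum((t - mean) ** 2 for t in times) / len(times))
print(f"mean takeoff {mean:.0f}, sd {sd:.0f}, initial doubling time ~{math.log(2) / 0.02:.0f}")
```

With these made-up numbers the mean takeoff is a few initial doubling times away, but the spread across runs comes out much smaller than a doubling time: treating each step's noise as independent washes out the variance, which is exactly the underestimation-of-correlations point above.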

(As one particular contingency you mention: It seems super plausible to me, especially, that if the Americas didn't turn out to exist, then the Industrial Revolution would have happened much later. But this seems like a pretty random/out-of-model fact about the world.)

This seems to suggest there are lots of civilizations like Europe-in-1700. But it seems to me that by this time (and so I believe before the Americas had any real effect) Europe's state of technological development was already pretty unprecedented. This is a lot of what makes many of the claims about "here's why the IR happened" seem dubious to me.

My sense of that comes from: (i) in growth numbers people usually cite, Europe's growth was absurdly fast from 1000AD - 1700AD (though you may think those numbers are wrong enough to bring growth back to a normal level) (ii) it seems like Europe was technologically quite far ahead of other IR competitors.

I'm curious about your take. Is it that:

  • The world wasn't yet historically exceptional by 1700, there have been other comparable periods of rapid progress. (What are the historical analogies and how analogous do you think they are? Is my impression of technological sophistication wrong?)
  • 1700s Europe is quantitatively exceptional by virtue of being the furthest along example, but nevertheless there is a mystery to be explained about why it became even more exceptional rather than regressing to the mean (as historical exceptional-for-their-times civilizations had in the past). I don't currently see a mystery about this (given the level of noise in Roodman's model, which seems like it's going to be in the same ballpark as other reasonable models), but it may be because I'm not informed enough about those historical analogies.
  • Actually the IR may have been inevitable in 1700s Europe but the exact pace seems contingent. (This doesn't seem like a real tension with a continuous acceleration model.)
  • Actually the contingencies you have in mind were already driving the exceptional situation in 1700.
Comment by paul_christiano on Does Economic History Point Toward a Singularity? · 2020-09-07T23:01:09.249Z · score: 13 (5 votes) · EA · GW
I think that Hanson's "series of 3 exponentials" is the neatest alternative, although I also think it's possible that pre-modern growth looked pretty different from clean exponentials (even on average / beneath the noise). There's also a semi-common narrative in which the two previous periods exhibited (on average) declining growth rates, until there was some 'breakthrough' that allowed the growth rate to surge: I suppose this would be a "three s-curve" model. Then there's the possibility that the growth pattern in each previous era was basically a hard-to-characterize mess, but was constrained by a rough upper bound on the maximum achievable growth rate. This last possibility is the one I personally find most likely, of the non-hyperbolic possibilities.

It seems almost guaranteed that the data is a mess; it just seems like the only difference between the perspectives is "is acceleration fundamentally concentrated into big revolutions, or is it just random and we can draw boundaries around periods of high growth and call those revolutions?"

There may also be some fundamental meta-prior that matters, here, about the relative weight one ought to give to simple unified models vs. complex qualitative/multifactoral stories.

Which growth model corresponds to which perspective? I normally think of "'industry' is what changed and is not contiguous with what came before" as the single-factor model, and multifactor growth models tending more towards continuous growth.

A lot of my prior comes down to my impression that the dynamics of growth just *seem* very different to me for forager societies, agricultural/organic, and industrial/fossil-fuel societies.

I'm definitely much more sympathetic to the forager vs agricultural distinction.

Does a discontinuous change from fossil-fuel use even fit the data? It doesn't seem to add up at all to me (e.g. doesn't match the timing of acceleration, there are lots of industries that seemed to accelerate without reliance on fossil fuels, etc.), but I would only consider a deep dive if someone actually wanted to stake something on that.

I don’t think the post-1500 data is too helpful for distinguishing between the ‘long run trend’ and ‘few hundred year phase transition’ perspectives.
If there was something like a phase transition, from pre-modern agricultural societies to modern industrial societies, I don’t see any particular reason to expect the growth curve during the transition to look like the sum of two exponentials. (I especially don’t expect this at the global level, since diffusion dynamics are so messy.)

It feels to me like I'm saying: acceleration happens kind of randomly on a timescale roughly determined by the current growth rate. We should use the base rate of acceleration to make forecasts about the future, i.e. have a significant probability of acceleration during each doubling of output. (Though obviously the real model is more complicated and we can start deviating from that baseline, e.g. sure looks like we should have a higher probability of stagnation now given that we've had decades of it.)

It feels to me like you are saying "No, we can have a richer model of historical acceleration that assigns significantly lower probability to rapid acceleration over the coming decades / singularity."

So to me it feels like adding random stuff like "yeah there are revolutions but we don't have any prediction about what they will look like" makes the richer model less compelling. It moves me more towards the ignorant perspective of "sometimes acceleration happens, maybe it will happen soon?", which is what you get in the limit of adding infinitely many ex ante unknown bells and whistles to your model.

The papers typically suggest that the thing kicking off the growth surge, within a particular millennium, is the beginning of intensive agriculture in that region — so I don’t think the pivotal triggering event is really different.

Is "intensive agriculture" a well-defined thing? (Not rhetorical.) It didn't look like "the beginning of intensive agriculture" corresponds to any fixed technological/social/environmental event (e.g. in most cases there was earlier agriculture and no story was given about why this particular moment would be the moment), it just looked like it was drawn based on when output started rising faster.

I wouldn't necessarily say they were significantly faster. It depends a bit on exactly how you run this test, but, when I run a regression for "(dP/dt)/P = a*P^b" (where P is population) on the dataset up until 1700AD, I find that the b parameter is not significantly greater than 0. (The confidence interval is roughly -.2 to .5, with zero corresponding to exponential growth.)

I mean that if you have 5x growth from 0AD to 1700AD, and growth was at the same rate from 10000BC to 0AD, then you would expect 5^(10,000/1700) = 13,000-fold growth over that period. We have uncertainty about exactly how much growth there was in the prior period, but we don't have anywhere near that much uncertainty.

Doing a regression on yearly growth rates seems like a bad way to approach this. It seems like the key question is: did growth speed up a lot in between the agricultural and industrial revolutions? It seems like the way to test that is to use points that are as spaced out as possible to compare growth rates in the early and late parts of the interval from 10000BC to 1500AD. (The industrial revolution is usually marked much later, but for the purpose of the "2 revolutions" view I think you definitely need it to start by then.)

So almost all of the important measurement error is going to be in the bit of growth in the 0AD to 1500AD phase. If in fact there was only 2x growth in that period (say because the 0AD number was off by 50%) then that would only predict 100-fold growth from 10,000BC to 0AD, which is way more plausible.
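The compound-growth arithmetic in these two paragraphs checks out:

```python
# 5x growth over the 1,700 years from 0AD to 1700AD, extrapolated back
# at the same rate over the 10,000 years from 10,000BC to 0AD:
implied = 5 ** (10_000 / 1_700)
print(round(implied))  # roughly 13,000-fold

# If growth from 0AD to 1500AD was really only 2x (e.g. because the 0AD
# population estimate was off by 50%), the same extrapolation gives:
implied_low = 2 ** (10_000 / 1_500)
print(round(implied_low))  # roughly 100-fold
```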

The industrial era is, in comparison, less obviously different from the farming era, but it also seems pretty different. My list of pretty distinct features of pre-modern agricultural economies is: (a) the agricultural sector constituted the majority of the economy; (b) production and (to a large extent) transportation were limited by the availability of agricultural or otherwise ‘organic’ sources of energy (plants to power muscles and produce fertiliser); (c) transportation and information transmission speeds were largely limited by windspeed and the speed of animals; (d) nearly everyone was uneducated, poor, and largely unfree; (e) many modern financial, legal, and political institutions did not exist; (f) certain cultural attitudes (such as hatred of commerce and lack of belief in the possibility of progress) were much more common; and (g) scientifically-minded research and development projects played virtually no role in the growth process.

If you just keep listing things, it stops being a plausible source of a discontinuity---you then need to give some story for why your 7 factors all change at the same time. If they don't, e.g. if they just vary randomly, then you are going to get back to continuous change.

Comment by paul_christiano on Does Economic History Point Toward a Singularity? · 2020-09-07T20:31:05.913Z · score: 20 (8 votes) · EA · GW
because I have a bunch of very concrete, reasonably compelling sounding stories of specific things that caused the relevant shifts

Be careful that you don't have too many stories, or it starts to get continuous again.

More seriously, I don't know what the small # of factors are for the industrial revolution, and my current sense is that the story can only seem simple for the agricultural revolution because we are so far away and ignoring almost all the details.

It seems like the only factor that looks a priori like it should cause a discontinuity is the transition from hunting+gathering to farming, i.e. if you imagine "total food" as the sum of "food we make" and "food we find" then there could be a discontinuous change in growth rates as "food we make" starts to become large relative to "food we find" (which bounces around randomly but is maybe not really changing). This is blurred because of complementarity between your technology and finding food, but certainly I'm on board with an in-principle argument for a discontinuity as the new mode overtakes the old one.

For the last 10k years my impression is that no one has a very compelling story for discontinuities (put differently: they have way too many stories) and it's mostly a stylized empirical fact that the IR is kind of discontinuous. But I'm provisionally on board with Ben's basic point that we don't really have good enough data to know whether growth had been accelerating a bunch in the run-up to the IR.

To the extent things are discontinuous, I'd guess that it's basically from something similar to the agricultural case---there is continuous growth and random variation, and you see "discontinuities" in the aggregate if a smaller group is significantly outpacing the world, so that by the time they become a large part of the world they are growing significantly faster.

I think this is also reasonably plausible in the AI case (e.g. there is an automated part of the economy doubling every 1-2 years, by the time it gets to be 10% of the economy it's driving +5%/year growth, 1-2 years later it's driving +10% growth). But I think quantitatively given the numbers involved and the actual degree of complementarity, this is still unlikely to give you a fast takeoff as I operationalized it. I think if we're having a serious discussion about "takeoff" that's probably where the action is, not in any of the kinds of arguments that I dismiss in that post.
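The parenthetical's arithmetic can be made concrete with toy numbers (a 2-year doubling, at the slow end of the 1-2 year range given above):

```python
# An automated sector that is 10% of output and doubles every 2 years.
auto_share = 0.10
doubling_years = 2

# Annualized growth the sector contributes to total output, holding the
# rest of the economy fixed:
contribution = auto_share * (2 ** (1 / doubling_years) - 1)
print(f"now: +{contribution:.1%}/year")

# Two years later the sector has doubled to ~20% of a roughly
# similar-sized economy, so its contribution roughly doubles too:
contribution_later = 2 * contribution
print(f"later: +{contribution_later:.1%}/year")
```

With these numbers you get roughly +4%/year rising to +8%/year, in the same ballpark as the +5% to +10% figures in the comment (which assume a somewhat faster doubling).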

I find the "but X has fewer parameters" argument only mildly compelling, because I feel like other evidence about similar systems that we've observed should easily give us enough evidence to overcome the difference in complexity. 

I mean something much more basic. If you have more parameters then you need to have uncertainty about every parameter. So you can't just look at how well the best "3 exponentials" hypothesis fits the data, you need to adjust for the fact that this particular "3 exponentials" model has lower prior probability. That is, even if you thought "3 exponentials" was a priori equally likely to a model with fewer parameters, every particular instance of 3 exponentials needs to be less probable than every particular model with fewer parameters.
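A crude counting version of this point: if each parameter is discretized into k distinguishable settings, a model class's prior mass is split across k**n specific instances. The discretization (k = 10) and the parameter counts are purely illustrative.

```python
def per_instance_prior(class_prior, n_params, k=10):
    """Prior mass of one specific parameter setting, if the model class's
    prior is split evenly across k distinguishable settings per parameter."""
    return class_prior / k ** n_params

# Say "3 exponentials" needs ~6 parameters (a growth rate per era, two
# transition dates, a starting level) vs ~2 for a minimal one-regime
# model; even granting the two classes equal prior, any particular
# 3-exponentials instance starts out far less probable:
print(per_instance_prior(0.5, 6))  # 5e-07
print(per_instance_prior(0.5, 2))  # 0.005
```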

The thing that on the margin would feel most compelling to me for the continuous view is something like a concrete zoomed in story of how you get continuous growth from a bunch of humans talking to each other and working with each other over a few generations, that doesn't immediately abstract things away into high-level concepts like "knowledge" and "capital". 

As far as I can tell this is how basically all industries (and scientific domains) work---people learn by doing and talk to each other and they get continuously better, mostly by using and then improving on technologies inherited from other people.

It's not clear to me whether you are drawing a distinction between modern economic activity and historical cultural accumulation, or whether you feel like you need to see a zoomed-in version of this story for modern economic activity as well, or whether this is a more subtle point about continuous technological progress vs continuous changes in the rate of tech progress, or something else.

Comment by paul_christiano on Does Economic History Point Toward a Singularity? · 2020-09-07T16:27:31.906Z · score: 57 (22 votes) · EA · GW

This would be an important update for me, so I'm excited to see people looking into it and to spend more time thinking about it myself.

High-level summary of my current take on your document:

  • I agree that the 1AD-1500AD population data seems super noisy.
  • Removing that data removes one of the datapoints supporting continuous acceleration (the acceleration between 10kBC - 1AD and 1AD-1500AD) and should make us more uncertain in general.
  • It doesn't have much net effect on my attitude towards continuous acceleration vs discontinuous jumps, this mostly pushes us back towards our prior.
  • I'm not very moved by the other evidence/arguments in your doc.

Here's how I would summarize the evidence in your document:

  • Much historical data is made up (often informed by the author's models of population dynamics), so we can't use it to estimate historical growth. This seems like the key point.
  • In particular, although standard estimates of growth from 1AD to 1500AD are significantly faster than growth between 10kBC and 1AD, those estimates are sensitive to factor-of-1.5 error in estimates of 1AD population, and real errors could easily be much larger than that.
  • Population levels are very noisy (in addition to population measurement being noisy) making it even harder to estimate rates.
  • Radiocarbon data often displays isolated periods of rapid growth from 10,000BC to 1AD and it's possible that average growth rates were something like a 2000-year doubling. So even if 500-2000 year doubling times are accurate from 1AD to 1500, those may not be a deviation from the preceding period.
  • You haven't looked into the claims people have made about growth from 100kya to 10kya, but given what we know about measurement error from 10kya to now, it seems like the 100kya-10kya data is likely to be way too noisy to say anything about.

Here's my take in more detail:

  • You are basically comparing "Series of 3 exponentials" to a hyperbolic growth model. I think our default simple hyperbolic growth model should be the one in David Roodman's report (blog post), so I'm going to think about this argument as comparing Roodman's model to a series of 3 noisy exponentials. In your doc you often dunk on an extremely low-noise version of hyperbolic growth but I'm mostly ignoring that because I absolutely agree that population dynamics are very noisy.
  • It feels like you think 3 exponentials is the higher prior model. But this model has many more parameters to fit the data, and even ignoring that "X changes in 2 discontinuous jumps" doesn't seem like it has a higher prior than "X goes up continuously but stochastically." I think the only reason we are taking 3 exponentials seriously is because of the same kind of guesswork you are dismissive of, namely that people have a folk sense that the industrial revolution and agricultural revolutions were discrete changes. If we think those folk senses are unreliable, I think that continuous acceleration has the better prior. And at the very least we need to be careful about using all the extra parameters in the 3-exponentials model, since a model with 2x more parameters should fit the data much better.
  • On top of that, the post-1500 data is fit terribly by the "3 exponentials" model. Given that continuous acceleration very clearly applies in the only regime where we have data you consider reliable, and given that it already seemed simpler and more motivated, it seems pretty clear to me that it should have the higher prior, and the only reason to doubt that is because of growth folklore. You can't have it both ways in using growth folklore to promote this hypothesis to attention and then dismissing the evidence from growth folklore because it's folklore.
  • On the acceleration model, the periods from 1500-2000, 10kBC-1500, and "the beginning of history to 10kBC" are roughly equally important data (and if that hypothesis has higher prior I don't think you can reject that framing). Changes within 10kBC - 1500 are maybe 1/6th of the evidence, and 1/3 of the relevant evidence for comparing "continuous acceleration" to "3 exponentials." I still think it's great to dig into one of these periods, but I don't think it's misleading to present this period as only 1/3 of the data on a graph.
  • (Enough about priors, onto the data.)
  • I think that the key claim is that the 1AD-1500AD data is mostly unreliable. Without this data, we have very little information about acceleration from 10kBC - 1500AD, since the main thing we actually knew was that 1AD-1500AD must have been faster than the preceding 10k years. I'd like to look into that more, but it looks super plausible to me that the noise is 2x or more for 1AD which is enough to totally kill any inference about growth rates. So provisionally I'm inclined to accept your view there.
  • That basically removes 1 datapoint for the continuous acceleration story and I totally agree it should leave us more uncertain about what's going on. That said, throwing out all the numbers from that period also removes one of the main quantitative datapoints against continuous acceleration [ETA: the other big one being the modern "great stagnation," both of these are in the tails of the continuous acceleration story and are just in the middle of the constant exponentials in the 3-exponential story, though see Robin Hanson's writeup to get a sense for what the series of exponentials view actually ends up looking like---it's still surprised by the great stagnation], and comes much closer to leaving us with our priors + the obvious acceleration over longer periods + the obvious acceleration during the shorter period where we actually have data, which seem to all basically point in the same direction.
  • Even taking the radiocarbon data as given I don't agree with the conclusions you are drawing from that data. It feels like in each case you are saying "a 2-exponential model fits fine" but the 2 exponentials are always different. The actual events (either technological developments or climate change or population dynamics) that are being pointed to as pivotal aren't the same across the different time series and so I think we should just be analyzing these without reference to those events (no suggestive dotted lines :) ). I spent some time doing this kind of curve fitting to various stochastic growth models and this basically looks to me like what individual realizations look like from such models--the extra parameters in "splice together two unrelated curves" let you get fine-looking fits even when we know that the underlying dynamics are continuous+stochastic.
  • I currently don't trust the population data coming from the radiocarbon dating. My current expectation is that after a deep dive I would not end up trusting the radiocarbon dating at all for tracking changes in the rate of population growth when the populations in question are changing how they live and what kinds of artifacts they make (from my perspective, that's what happened with the genetics data, which wasn't caveated so aggressively in the initial draft I reviewed). I'd love to hear from someone who actually knows about these techniques or has done a deep dive on these papers though.
  • I think the only dataset that you should expect to provide evidence on its own is the China population time series. But even there if you just take rolling averages and allow for a reasonable level of noise I think the continuous acceleration story looks fine. E.g. I think if you compare David Roodman's model with the piecewise exponential model (both augmented with measurement noise, and allowing you to choose noisy dynamics however you want for the exponential model), Roodman's model is going to fit the data better despite having fewer free parameters. If that's the case, I don't think this time series can be construed as evidence against that model.
  • I agree with the point that if growth is 0 before the agricultural revolution, rather than "small," then that would undermine the continuous acceleration story. I think prior growth was probably slow but non-zero, and this document didn't really update my view on that question.
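The curve-fitting point above (that individual realizations of a continuous stochastic growth model can look like spliced exponentials) can be illustrated with a toy simulation. This is my own sketch, not Roodman's actual model; the functional form and every parameter are made up for illustration:

```python
import math
import random

def simulate(p0=1.0, r=0.002, b=0.1, rho=0.98, s=0.1, steps=1500, seed=0):
    """Crude stochastic hyperbolic growth: the proportional growth rate is
    r * P**b (so growth accelerates as P rises), scaled by a persistent
    AR(1) shock. Persistent shocks produce long stagnations and bursts
    that can look like discrete "regime changes" even though the
    underlying law is continuous."""
    rng = random.Random(seed)
    p, noise = p0, 0.0
    path = [p]
    for _ in range(steps):
        noise = rho * noise + rng.gauss(0.0, s)   # slowly-wandering shock
        p *= math.exp(r * p**b * math.exp(noise))  # always-positive growth
        path.append(p)
    return path

path = simulate()
mid = len(path) // 2
# Log-growth in each half; the second half tends to be faster on average.
print(math.log(path[mid] / path[0]), math.log(path[-1] / path[mid]))
```

Fitting a two- or three-piece exponential to realizations like these will generally "find" breakpoints, which is the sense in which the extra parameters of the spliced model let it fit fine even when the true dynamics are continuous and stochastic.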
Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-05-16T16:30:18.425Z · score: 2 (1 votes) · EA · GW
This is only 2.4 standard deviations assuming returns follow a normal distribution, which they don't.

No, 2.4 standard deviations is 2.4 standard deviations.

It's possible to have distributions for which a 2.4-standard-deviation shortfall is more or less surprising than it would be under a normal distribution.

For a normal distribution, this happens about one every 200 periods. I totally agree that this isn't a factor of 200 evidence against your view. So maybe saying "falsifies" was too strong.

But no distribution is 2.35 standard deviations below its mean with probability more than 18%. That's literally impossible. And no distribution is 4 standard deviations below its mean with probability >6%. (I'm just adopting your variance estimates here, so I don't think you can really object.)
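The distribution-free claims here are instances of Chebyshev's inequality: for any distribution with finite variance, the probability of landing k or more standard deviations below the mean is at most 1/k^2. A quick check of the two figures cited:

```python
def chebyshev_tail_bound(k):
    # Chebyshev: P(|X - mu| >= k * sigma) <= 1 / k**2 for any distribution
    # with finite variance, so in particular P(X <= mu - k * sigma) <= 1 / k**2.
    return 1 / k**2

print(chebyshev_tail_bound(2.35))  # ~0.181: at most ~18% of mass that far below the mean
print(chebyshev_tail_bound(4))     # 0.0625: at most ~6%
```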

This is not directly relevant to the investment strategies I talked about above, but if you use the really simple (and well-supported) expected return model of earnings growth plus dividends plus P/E mean reversion and plug in the current numbers for emerging markets, you get 9-11% real return (Research Affiliates gives 9%, I've seen other sources give 11%). This is not a highly concentrated investment of 50 stocks—it's an entire asset class. So I don't think expecting a 9% return is insane.

Have you looked at backtests of this kind of reasoning for emerging markets? Not of total return, I agree that is super noisy, but just the basic return model? I was briefly very optimistic about EM when I started investing, based on arguments like this one, but then when I looked at the data it just seems like it doesn't work out, and there are tons of ways that emerging market companies could be less appealing for investors that could explain a failure of the model. So I ended up just following the market portfolio, and using much more pessimistic returns estimates.

I didn't look into it super deeply. Here's some even more superficial discussion using numbers I pulled while writing this comment.

Over the decade before this crisis, it seems like EM earnings yields were roughly flat around 8%. Dividend yield was <2%. Real dividends were basically flat. Real price return was slightly negative. And I think on top of all of that the volatility was significantly higher than US markets.

Why expect P/E mean reversion to rescue future returns in this case? It seems like EM companies have lots of on-paper earnings, but they neither distribute those to investors (whether as buybacks or dividends) nor use them to grow future earnings. So their current P/E ratios seem justified, and expecting +5%/year returns from P/E mean reversion seems pretty optimistic.

Like I said, I haven't looked into this deeply, so I'm totally open to someone pointing out that actually the naive return model has worked OK in emerging markets after correcting for some important non-obvious stuff (or even just walking through the above analysis more carefully), and so we should just take the last 10 years of underperformance as evidence that now is a particularly good time to get in. But right now that's not my best guess, much less strongly supported enough that I want to take a big anti-EMH position on it (not to mention that betting against beta is one of the factors that seems most plausible to me and seems best documented, and EM is on the other side of that trade).

which explain why the authors believe their particular implementations of momentum and value have (slightly) better expected return.

I'm willing to believe that, though I'm skeptical that they get enough to pay for their +2% fees.

I don't overly trust backtests, but I trust the process behind VMOT, which is (part of the) reason to believe the cited backtest is reflective of the strategy's long-term performance.[2] VMOT projected returns were based on a 20-year backtest, but you can find similar numbers by looking at much longer data series

The markets today are a lot different from the markets 20 years ago. The problem isn't just that the backtests are typically underpowered; it's that markets become more sophisticated, and everyone gets to see that data. You write:

RAFI believes the value and momentum premia will work as well in the future as they have in the past, and some of the papers I linked above make similar claims. They offer good support for this claim, but in the interest of conservatism, we could justifiably subtract a couple of percentage points from expected return to account for premium degradation.

Having a good argument is one thing---I haven't seen one but also haven't looked that hard, and I'm totally willing to believe that one exists and I think it's reasonable to invest on the basis of such arguments. I also believe that premia won't completely dry up because smart investors won't want the extra volatility if the returns aren't there (and lots of people chasing a premium will add premium-specific volatility).

But without a good argument, subtracting a few percentage points from backtested return isn't conservative. That's probably what you should do with a good argument.

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-04-25T03:07:22.176Z · score: 4 (2 votes) · EA · GW

I haven't done a deep dive on this but I think futures are better than this analysis makes them look.

Suppose that I'm in the top bracket and pay 23% taxes on futures, and that my ideal position is 2x SPY.

In a tax-free account I could buy SPY and 1x SPY futures, to get (2x SPY - 1x interest).

In a taxable account I can buy 1x SPY and 1.3x SPY futures. Then my after-tax expected return is again (2x SPY - 1x interest).

The catch is that if I lose money, some of my wealth will take the form of taxable losses that I can use to offset gains in future years. This has a small problem and a bigger problem:

  • Small problem: it may be some years before I can use up those taxable losses. So I'll effectively pay interest on the money over those years. If real rates were 2% and I had to wait 5 years on average to return to my high-water mark, then this would be an effective tax rate of (2% * 5 years) * (23%) ~ 2.3%. I think that's conservative, and this is mostly negligible.
  • Large problem: if the market goes down enough, I could be left totally broke, and my taxable losses won't do me any good. In particular, if the market went down 52%, then my 2x leveraged portfolio should be down to around 23% of my original net worth, but that will entirely be in the form of taxable losses (losing $100 is like getting a $23 grant, to be redeemed only once I've made enough taxable gains).

So I can't just treat my taxable losses as wealth for the purpose of computing leverage. I don't know exactly what the right strategy is, it's probably quite complicated.

The simplest solution is to just ignore them when setting my desired level of leverage. If you do that, and are careful about rebalancing, it seems like you shouldn't lose very much to taxes in log-expectation (e.g. if the market is down 50%, I think you'd end up with about half of your desired leverage, which is similar to a 25% tax rate). But I'd like to work it out, since other than this futures seem appealing.
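A toy version of the arithmetic above (my own sketch; it ignores rebalancing details, dividends, interest costs, and the timing of tax payments):

```python
TAX = 0.23  # assumed blended top-bracket rate on futures gains

# Hold 1x SPY directly (gains stay unrealized) plus futures sized so that
# the after-tax futures exposure is 1x, for 2x total after-tax exposure:
futures_notional = 1 / (1 - TAX)  # ~1.30x

market = 0.10  # example: market up 10%
after_tax_return = market + futures_notional * market * (1 - TAX)
print(after_tax_return)  # ~0.20, i.e. 2x the market (ignoring interest)

# The failure mode: market down 52%. A continuously rebalanced 2x
# position is worth roughly (1 - 0.52)**2 of its starting value...
print(0.48 ** 2)  # ~0.23, but in a taxable account much of this residual
                  # "value" is loss carryforwards rather than spendable cash
```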

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-04-23T21:23:24.989Z · score: 4 (3 votes) · EA · GW

I'm surprised by (and suspicious of) the claim about so many more international shares being non-tradeable, but it would change my view.

I would guess the savings rate thing is relatively small compared to the fact that a much larger fraction of US GDP is investable in the stock market---the US is 20-25% of world GDP, but the US is 40% of total stock market capitalization and I think US corporate profits are also ballpark 40% of all publicly traded corporate profits. So if everyone saved the same amount and invested in their home country, US equities would be too cheap.

I agree that under EMH the two bonds A and B are basically the same, so it's neutral. But it's a prima facie reason that A is going to perform worse (not a prima facie reason it will perform better) and it's now pretty murky whether the market is going to err one way or the other.

I'm still pretty skeptical of US equities outperforming, but I'll think about it more.

I haven't thought about the diversification point that much. I don't think that you can just use the empirical daily correlations for the purpose of estimating this, but maybe you can (until you observe them coming apart). It's hard to see how you can be so uncertain about the relative performance of A and B, but still think they are virtually perfectly correlated (but again, that may just be a misleading intuition). I'm going to spend a bit of time with historical data to get a feel for this sometime and will postpone judgment until after doing that.

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-04-23T21:12:58.317Z · score: 4 (2 votes) · EA · GW

I also like GMP, and find the paper kind of surprising. I checked the endpoints stuff a bit and it seems like it can explain a small effect but not a huge one. My best guess is that going from equities to GMP is worth like +1-2% risk-free returns.

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-04-22T23:53:08.039Z · score: 28 (5 votes) · EA · GW

I like the basic point about leverage and think it's quite robust.

But I think the projected returns for VMOT+MF are insane. And as a result the 8x leverage recommendation is insane, someone who does that is definitely just going to go broke. (This is similar to Carl's complaint.)

My biggest problem with this estimate is that it kind of sounds crazy and I don't know of very good evidence in favor. But it seems like these claimed returns are so high that you can also basically falsify them by looking at the data between when VMOT was founded and when you wrote this post.

VMOT is down 20% in the last 3 years. This estimate would expect returns of 27% +- 20% over that period, so you're like 2.4 standard deviations down.

When you wrote this post, before the crisis, VMOT was only like 1.4 standard deviations below your expectations. So maybe we should be more charitable?

But that's just because it was a period of surprisingly high market returns. VMOT lagged VT by more than 35% between its inception and when you wrote this post, whereas this methodology expects it to outperform by more than 12% over that period. VMOT/VT are positively correlated, and based on your numbers it looks like the stdev of excess performance should be <10%. So that's like 4-5 standard deviations of surprising bad performance already.
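For concreteness, the z-scores above can be reproduced directly (using the post's projected 27% +- 20% three-year return, VMOT's realized -20%, and an assumed sub-10% standard deviation of excess performance vs VT):

```python
def z_score(actual, expected, sd):
    return (actual - expected) / sd

# Total VMOT return over ~3 years vs. the post's projection:
print(z_score(-0.20, 0.27, 0.20))  # about -2.35

# Excess return vs. VT since inception: lagged by >35% where ~+12%
# outperformance was projected:
print(z_score(-0.35, 0.12, 0.10))  # about -4.7
```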

Is something wrong with this analysis?

If that's right, I definitely object to the methodology "take an absurd backtest that we've already falsified out of sample, then cut a few percentage points off and call it conservative." In this case it looks like even the "conservative" estimate is basically falsified.

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-04-22T23:22:19.139Z · score: 2 (1 votes) · EA · GW
We could account for this by treating mean return and standard deviation as distributions rather than point estimates, and calculating utility-maximizing leverage across the distribution instead of at a single point. This raises a further concern that we don’t even know what distribution the mean and standard deviation have, but at least this gets us closer to an accurate model.

Why not just take the actual mean and standard deviation, averaging across the whole distribution of models?

What exactly is the "mean" you are quoting, if it's not your subjective expectation of returns?

(Also, I think the costs of choosing leverage wrong are pretty symmetric.)
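One way to make this concrete: under log utility the Merton/Kelly rule puts optimal leverage at roughly mean excess return over variance, and "averaging across the whole distribution of models" just means using the mixture's mean and variance, which automatically penalizes parameter uncertainty. A toy example with two made-up models:

```python
# Two equally likely models of equity returns (made-up numbers):
# (mean excess return, volatility)
models = [(0.02, 0.16), (0.08, 0.16)]

mix_mean = sum(m for m, s in models) / len(models)

# Law of total variance: average within-model variance
# plus the variance of the model means.
mix_var = (sum(s**2 for m, s in models) / len(models)
           + sum((m - mix_mean)**2 for m, s in models) / len(models))

kelly_mixture = mix_mean / mix_var
kelly_point = mix_mean / 0.16**2  # ignoring uncertainty about the mean

print(kelly_mixture, kelly_point)  # mixture leverage is slightly lower
```

Uncertainty about the mean fattens the mixture's variance, so the utility-maximizing leverage comes out a bit below the point-estimate answer without needing any separate adjustment.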

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-04-22T23:19:39.640Z · score: 8 (2 votes) · EA · GW

My understanding is that the sharpe ratio of the global portfolio is quite similar to the equity portfolio (e.g. see here for data on the period from 1960-2017, finding 0.36 for the global market and 0.37 for equities).

I still do expect the broad market to outperform equities alone, but I don't know where the super-high estimates for the benefits of diversification are coming from, and I expect the effect to be much more modest than the one described in the linked post by Ben Todd. Do you know what's up with the discrepancy? It could be about choice of time periods or some technical detail, but it's kind of a big discrepancy. (My best guess is an error in the linked post.)

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-04-22T23:09:35.879Z · score: 5 (3 votes) · EA · GW
To use leverage, you will probably end up having to pay about 1% on top of short-term interest rates

Not a huge deal, but it seems like the typical overhead is about 0.3%:

  • This seems to be the implicit rate I pay if I buy equity futures rather than holding physical equities (a historical survey: http://cdar.berkeley.edu/wp-content/uploads/2016/12/futures-gunther-etal-111616.pdf , though you can also check yourself for a particular future you are considering buying, the main complication is factoring in dividend prices)
  • Wei Dai has recently been looking into box spread financing which were around 0.55% for 3 years, 0.3% above the short-term treasury rate.
  • If you have a large account, interactive brokers charges benchmark+0.3% interest.

I suspect risk-free + 0.3% is basically the going rate, though I also wouldn't be too surprised if a leveraged ETF could get a slightly better rate.

If you are leveraging as much as described in this post, it seems reasonably important to get at least an OK rate. 1% overhead is large enough that it claws back a significant fraction of the value from leverage (at least if you use more realistic return estimates).
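A toy calculation of how financing overhead eats into leveraged returns (the equity premium number here is made up for illustration):

```python
def leveraged_excess_return(leverage, equity_premium, overhead):
    """Expected return over the risk-free rate for a leveraged position,
    where the borrowed (leverage - 1) portion costs risk-free + overhead."""
    return leverage * equity_premium - (leverage - 1) * overhead

premium = 0.04  # assumed equity premium over the risk-free rate
for overhead in (0.003, 0.01):
    print(overhead, leveraged_excess_return(3, premium, overhead))
# At 3x leverage, a 1% financing overhead costs 2%/year of return versus
# 0.6%/year at a 0.3% overhead, a meaningful slice of the extra premium.
```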

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-04-22T22:40:12.316Z · score: 5 (3 votes) · EA · GW

I think it's pretty dangerous to reason "asset X has outperformed recently, so I expect it to outperform in the future." An asset can outperform because it's becoming more expensive, which I think is partly the case here.

This is most obvious in the case of bonds---if 30-year bonds from A are yielding 2%/year and then fall to 1.5%/year over a decade, while 30-year bonds from B are yielding 2%/year and stay at 2%/year, then it will look like the bonds from A are performing about twice as well over the decade. But this is a very bad reason to invest in A. It's anti-inductive not only because of EMH but for the very simple reason that return chasing leads you to buy high and sell low.

This is less straightforward with equities because earnings accounting is (much) less transparent than bond yields, but I think it's a reasonable first pass guess about what's going on (combined with some legitimate update about people becoming more pessimistic about corporate performance/governance/accounting outside of the US). Would be interested in any data contradicting this picture.

I do think that international equities will do worse than US equities after controlling for on-paper earnings. But they have significantly higher on-paper earnings, and I don't really see how to take a bet about which of these effects is larger without getting into way more nitty gritty about exactly what mistake we think which investors are making. If I had to guess I'd bet that US markets are salient to investors in many countries and their recent outperformance has made many people overweight them, so that they will very slightly underperform. But I'd be super interested in good empirical evidence on this front too.

(The RAFI estimates generally look a bit unreasonable to me, and I don't know of an empirical track record or convincing analysis that would make me like them more.)

I personally just hold the market portfolio. So I'm guaranteed to outperform the average of you and Michael Dickens, though I'm not sure which one of you is going to do better than me and which one is going to do worse.

Comment by paul_christiano on How worried should I be about a childless Disneyland? · 2019-10-31T20:44:45.070Z · score: 6 (3 votes) · EA · GW

My main point was that in any case what matters are the degree of alignment of the AI systems, and not their consciousness. But I agree with what you are saying.

If our plan for building AI depends on having clarity about our values, then it's important to achieve such clarity before we build AI---whether that's clarity about consciousness, population ethics, what kinds of experience are actually good, how to handle infinities, weird simulation stuff, or whatever else.

I agree consciousness is a big ? in our axiology, though it's not clear if the value you'd lose from saying "only create creatures physiologically identical to humans" is large compared to all the other value we are losing from the other kinds of uncertainty.

I tend to think that in such worlds we are in very deep trouble anyway and won't realize a meaningful amount of value regardless of how well we understand consciousness. So while I may care about them a bit from the perspective of parochial values (like "is Paul happy?") I don't care about them much from the perspective of impartial moral concerns (which is the main perspective where I care about clarifying concepts like consciousness).

Comment by paul_christiano on How worried should I be about a childless Disneyland? · 2019-10-30T16:35:57.465Z · score: 14 (11 votes) · EA · GW

I don't think it matters that much (for the long-term) if the AI systems we build in the next century are conscious. What matters is how they think about what possible futures they can bring about.

If AI systems are aligned with us, but turned out not to be conscious or not very conscious, then they would continue this project of figuring out what is morally valuable and so bring about a world we'd regard as good (even though it likely contains very few minds that resemble either us or them).

If AI systems are conscious but not at all aligned with us, then why think that they would create conscious and flourishing successors?

So my view is that alignment is the main AI issue here (and reflecting well is the big non-AI issue), with questions about consciousness being in the giant bag of complex questions we should try to punt to tomorrow.

Comment by paul_christiano on Conditional interests, asymmetries and EA priorities · 2019-10-22T16:29:04.735Z · score: 8 (4 votes) · EA · GW
Only Actual Interests: Interests provide reasons for their further satisfaction, but neither an interest nor its satisfaction provides reasons for the existence of that interest over its nonexistence.
It follows from this that a mind with no interests at all is no worse than a mind with interests, regardless of how satisfied its interests might have been. In particular, a joyless mind with no interest in joy is no worse than one with joy. A mind with no interests isn't much of a mind at all, so I would say that this effectively means it's no worse for the mind to not exist.

If you make this argument that "it's no worse for the joyful mind to not exist," you can make an exactly symmetrical argument that "it's not better for the suffering mind to not exist." If there was a suffering mind they'd have an interest in not existing, and if there was a joyful mind they'd have an interest in existing.

In either case, if there is no mind then we have no reason to care about whether the mind exists, and if there is a mind then we have a reason to act---in one case we prefer the mind exist, and in the other case we prefer the mind not exist.

To carry your argument you need an extra principle along the lines of "the existence of unfulfilled interests is bad." Of course that's what's doing all the work of the asymmetry---if unfulfilled interests are bad and fulfilled interests are not good, then existence is bad. But this has nothing to do with actual interests, it's coming from very explicitly setting the zero point at the maximally fulfilled interest.

Comment by paul_christiano on Conditional interests, asymmetries and EA priorities · 2019-10-22T16:20:26.216Z · score: 4 (2 votes) · EA · GW
A question here is whether "interests to not suffer" are analogous to "interests in experiencing joy." I believe that Michael's point is that, while we cannot imagine suffering without some kind of interest to have it stop (at least in the moment itself), we can imagine a mind that does not care for further joy.

I don't think that's the relevant analogy though. We should be comparing "Can we imagine suffering without an interest in not having suffered?" to "Can we imagine joy without an interest in having experienced joy?"

Let's say I see a cute squirrel and it makes me happy. Is it bad that I'm not in virtual reality experiencing the greatest joys imaginable?

I can imagine saying "no" here, but if I do then I'd also say it's not good that I'm not in a virtual reality experiencing great suffering. If you were in a virtual reality experiencing great joy it would be against your interests to prevent that joy, and if you were in a virtual reality experiencing great suffering it would be in your interests to prevent that suffering.

You could say: the actually existing person has an interest in preventing future suffering, while they may have no interest in experiencing future joy. But now the asymmetry is just coming from the actual person's current interests in joy and suffering, so we didn't need to bring in all of this other machinery, we can just directly appeal to the claimed asymmetry in interests.

Comment by paul_christiano on Conditional interests, asymmetries and EA priorities · 2019-10-22T03:59:15.498Z · score: 12 (7 votes) · EA · GW
suffering by its very definition implies an interest in its absence, so there is a reason to prevent it.

If a mind exists and suffers, we'd think it better had it not existed (by virtue of its interest in not suffering). And if a mind exists and experiences joy, we'd think it worse had it not existed (by virtue of its interest in experiencing joy). Prima facie this seem exactly symmetrical, at least as far as the principles laid out here are concerned.

Depending on exactly how you make your view precise, I'd think that we'd either end up not caring at all about whether new minds exist (since if they didn't exist there'd be no relevant interests), or balancing the strength of those interests in some way to end up with a "zero" point where we are indifferent (since minds come with interests in both directions concerning their own existence). I don't yet see how you end up with the asymmetric view here.

Comment by paul_christiano on Altruistic equity allocation · 2019-10-17T15:28:33.017Z · score: 4 (3 votes) · EA · GW
would there be a specific metric (e.g. estimated QALYs saved) or would donors construct individual conversion rates (at least implicitly) based on their evaluations of how effective charities are likely to be over their lifetimes?

It would come down to donor predictions, and different donors will generally have quite different predictions (similar to for-profit investing). I agree there is a further difference where donors will also value different outputs differently.

One other advantage of not quantizing the individual contributions of employees is that they can sum up to more than 100% - all twenty employees of an organisation may each believe that they are responsible for at least 10% of its success, which is mathematically inconsistent but may be a useful fiction (and in some sense it could be true - there may be threshold effects such that if any individual employee left the impact of the organisation would actually be 10% worse) - if impact equity is explicitly parceled out, everyone's fractions will sum to 1.

I mostly consider this an advantage of quantifying :)

(I also think that impacts should sum to 1, not >1---in the sense that a project is worthwhile iff there is a way of allocating its impact that makes everyone happy, modulo the issue where you may need to separate impact into tranches for unaligned employees who value different parts of that impact.)

However, it might also lead to discontent if employees don't consider the impact equity allocations to be fair (whether between different employees, between employees and founders, or between employees and investors).

This seems like a real downside.

Comment by paul_christiano on The Future of Earning to Give · 2019-10-14T15:42:37.837Z · score: 33 (9 votes) · EA · GW
Of course, you could enter a donor lottery and, if you win, just give it all to an EA fund without doing any research yourself. I don't know if this would be better or worse than just donating directly to the EA funds.

It seems to me like this is unlikely to be worse. Is there some mechanism you have in mind? Risk-aversion for the EA fund? (Quantitatively that seems like it should matter very little at the scale of $100,000.)

At a minimum, it seems like the EA funds are healthier if their accountability is to a smaller number of larger donors who are better able to think about what they are doing.

In terms of upside from getting to think longer, I don't think it's at all obvious that most donors would decide on EA funds (or on whichever particular EA fund they initially lean towards). And as a norm, I think it's easy for EAs to argue that donor lotteries are an improvement over what most non-EA donors do, while the argument for EA funds comes down a lot to personal trust.

I don't think the argument for economies of scale really applies here, since the grantmakers are already working full-time on research in the areas they're making grants for.

I don't think all of the funds have grantmakers working full-time on having better views about grantmaking. That said, you can't work full-time if you win a $100,000 lottery either. I agree you are likely to come down to deciding whose advice to trust and doing meta-level reasoning.

Comment by paul_christiano on Are we living at the most influential time in history? · 2019-09-15T22:46:33.132Z · score: 48 (23 votes) · EA · GW

I think the outside view argument for acceleration deserves more weight. Namely:

  • Many measures of "output" track each other reasonably closely: how much energy we can harness, how many people we can feed, GDP in modern times, etc.
  • Output has grown 7-8 orders of magnitude over human history.
  • The rate of growth has itself accelerated by 3-4 orders of magnitude. (And even early human populations would have seemed to grow very fast to an observer watching the prior billion years of life.)
  • It's pretty likely that growth will accelerate by another order of magnitude at some point, given that it's happened 3-4 times before and faster growth seems possible.
  • If growth accelerated by another order of magnitude, a hundred years would be enough time for 9 orders of magnitude of growth (more than has occurred in all of human history).
  • Periods of time with more growth seem to have more economic or technological milestones, even if they span less calendar time.
  • Heuristics like "the next X years are very short relative to history, so probably not much will happen" seem to have a very bad historical track record when X is enough time for lots of growth to occur, and so it seems like a mistake to call them the "outside view."
  • If we go a century without a doubling of growth rates, it will be (by far) the most that output has ever grown without significant acceleration.
  • Data is noisy and data modeling is hard, but it is difficult to construct a model of historical growth that doesn't have a significant probability of massive growth within a century.
  • I think the models that are most conservative about future growth are those where stable growth is punctuated by rapid acceleration during "revolutions" (with the agricultural acceleration around 10,000 years ago and the industrial revolution causing continuous acceleration from 1600-1900).
  • On that model human history has had two revolutions, with about two orders of magnitude of growth between them, each of which led to >10x speedup of growth. It seems like we should have a significant probability (certainly >10%) of another revolution occurring within the next order of magnitude of growth, i.e. within the next century.
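The order-of-magnitude arithmetic in the bullets above is easy to check. A minimal sketch, where the ~3%/yr baseline rate and the 10x acceleration factor are illustrative assumptions rather than figures from the comment:

```python
import math

def century_ooms(annual_rate):
    """Orders of magnitude of output growth over 100 years at a constant annual rate."""
    return 100 * math.log10(1 + annual_rate)

# Illustrative: ~3%/yr recent global growth; a 10x acceleration gives ~30%/yr.
baseline = century_ooms(0.03)     # ~1.3 orders of magnitude per century
accelerated = century_ooms(0.30)  # ~11 orders of magnitude per century
```

At a sustained ~30%/yr, a single century produces more orders of magnitude of growth than the 7-8 that have occurred over all of human history.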
Comment by paul_christiano on Ought: why it matters and ways to help · 2019-07-29T16:35:01.505Z · score: 10 (6 votes) · EA · GW

In-house.

Comment by paul_christiano on Age-Weighted Voting · 2019-07-15T15:45:14.972Z · score: 4 (2 votes) · EA · GW
I suspect many people responding to surveys about events which happened 10-30 years ago would be doing so with the aim of influencing the betting markets which affect near future policy.

It would be good to focus on questions for which that's not so bad, because our goal is to measure some kind of general sentiment in the future---if in the future people feel like "we should now do more/less of X" then that's pretty correlated with feeling like we did too little in the past (obviously not perfectly---we may have done too little 30 years ago but overcorrected 10 years ago---but if you are betting about public opinion in the US I don't think you should ever be thinking about that kind of distinction).

E.g. I think this would be OK for:

  • Did we do too much or too little about climate change?
  • Did we have too much or too little immigration of various kinds?
  • Were we too favorable or too unfavorable to unions?
  • Were taxes too high or too low?
  • Is compensating organ donors at market rates a good idea?

And so forth.

Comment by paul_christiano on Age-Weighted Voting · 2019-07-12T16:37:38.710Z · score: 78 (31 votes) · EA · GW

I like the goal of politically empowering future people. Here's another policy with the same goal:

  • Run periodic surveys with retrospective evaluations of policy. For example, each year I can pick some policy decisions from {10, 20, 30} years ago and ask "Was this policy a mistake?", "Did we do too much, or too little?", and so on.
  • Subsidize liquid prediction markets about the results of these surveys in all future years. For example, we can bet about people in 2045's answers to "Did we do too much or too little about climate change in 2015-2025?"
  • We will get to see market odds on what people in 10, 20, or 30 years will say about our current policy decisions. For example, people arguing against a policy can cite facts like "The market expects that in 20 years we will consider this policy to have been a mistake."

This seems particularly politically feasible; a philanthropist can unilaterally set this up for a few million dollars of surveys and prediction market subsidies. You could start by running this kind of poll a few times; then opening a prediction market on next year's poll about policy decisions from a few decades ago; then lengthening the time horizon.

(I'd personally expect this to have a larger impact on future-orientation of policy, if we imagine it getting a fraction of the public buy-in that would be required for changing voting weights.)
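One standard mechanism for the subsidized markets in the second bullet is a logarithmic market scoring rule (LMSR). This sketch is my own illustration, not something from the comment, and the $10k subsidy parameter is made up; the liquidity parameter b bounds the sponsor's worst-case loss at b * ln(2) for a binary market:

```python
import math

def lmsr_cost(quantities, b):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def price(quantities, i, b):
    """Instantaneous price (implied probability) of outcome i."""
    exps = [math.exp(q / b) for q in quantities]
    return exps[i] / sum(exps)

# Market on "In 2045, will the survey say we did too little about
# climate change in 2015-2025?" (yes/no outcomes).
b = 10_000.0
q = [0.0, 0.0]          # net shares sold of [yes, no]
p_yes = price(q, 0, b)  # a symmetric book starts at implied probability 0.5
```

Traders pay `lmsr_cost(q_new, b) - lmsr_cost(q_old, b)` to move the book, so the market always quotes odds that can be cited in policy arguments, and the subsidy is exactly the sponsor's expected trading loss.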

Comment by paul_christiano on Age-Weighted Voting · 2019-07-12T16:16:14.019Z · score: 31 (13 votes) · EA · GW
It would mitigate intertemporal inconsistency

If different generations have different views, then it seems like we'll have the same inconsistency when we shift power from one generation to the next, regardless of when we do it. Under your proposal the change happens when the next generation turns 18-37, but the inconsistency doesn't seem to be lessened. For example, the Brexit inconsistency would have been between 20 years ago and today rather than between today and 20 years from now, but it would have been just as large.

In fact I'd expect age-weighting to have more temporal inconsistency overall: in the status quo you average out idiosyncratic variation over multiple generations and swap out 1/3 of people every 20 years, while in your proposal you concentrate most power in a single generation which you completely change every 20 years.

Age and wisdom: [...] As a counterargument, crystallised intelligence increases with age and, though fluid intelligence decreases with age, it seems to me that crystallised intelligence is more important than fluid intelligence for informed voting. 

Another counterargument: older people have also seen firsthand the long-run consequences of one generation's policies and have more time to update about what sources of evidence are reliable. It's not clear to me whether this is a larger or smaller impact than "expect to live through the consequences of policies." I think folk wisdom often involves deference to elders specifically on questions about long-term consequences.

(I personally think that I'm better at picking policies at 30 than 20, and expect to be better still at 40.)

Comment by paul_christiano on Confused about AI research as a means of addressing AI risk · 2019-03-17T00:26:18.096Z · score: 6 (3 votes) · EA · GW

Consumers care somewhat about safe cars; if safety is mostly an externality, then legislators may be willing to regulate it; and since there are only so many developers, if the moral case is clear enough and the costs low enough, then the leaders might all make that investment.

At the other extreme, if you have no idea how to build a safe car, then there is no way that anyone is going to use a safe car no matter how much people care. Success is a combination of making safety easy and getting people to care / regulating / etc.

Here is the post I wrote about this.

If you have "competitive" solutions, then the required social coordination may be fairly mild. As a stylized example, if the leaders in the field are willing to invest in safety, then you could imagine surviving a degree of non-competitiveness in line with the size of their lead (though the situation is a bit messier than that).

Comment by paul_christiano on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-31T02:12:50.310Z · score: 16 (5 votes) · EA · GW
The current price of these companies is already determined by cutthroat competition between hyper-informed investors. If Warren Buffett or Goldman Sachs thinks the market is undervaluing these AI companies, then they'll spend billions bidding up the stock price until they're no longer undervalued.

That sounds like a nice world, but unfortunately I don't think that the market is quite that efficient. (Like the parent, I'm not going to offer any evidence, just express my view.)

You could reply, "then why ain'cha rich?" but it doesn't really work quantitatively for mispricings that would take 10+ years to correct. You could instead ask "then why ain'cha several times richer than you otherwise would be?" but lots of people are in fact several times richer than they otherwise would be after a lifetime of investment. It's not anything mind-blowing or even obvious to an external observer.

"Don't try to beat the market" still seems like a good heuristic, I just think this level of confidence in the financial system is misplaced and "hyper-informed" in particular is really overstating it. (As is "incredibly high prior" elsewhere.)

(ETA: I also agree that if you think you have a special insight about AI, there are likely to be better things to do with it.)

Comment by paul_christiano on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-31T02:05:04.328Z · score: 7 (2 votes) · EA · GW

The same neglect that potentially makes AI investments a good deal can also make AI philanthropy a better deal. If there is a huge AI boom, a prescient investment in AI companies might leave you with a larger share of the world economy---but you'll probably still be a much smaller share of total dollars directed at influencing AI.

That said, I do think this is a reasonable default thing to do with dollars if you are interested in the long term but unimpressed with the current menu of long-termist philanthropy (or expect to be better-informed in the future).

Comment by paul_christiano on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-25T18:20:31.614Z · score: 4 (3 votes) · EA · GW

Trusting random.org doesn't seem so bad (probably a bit better than trusting IRIS, since IRIS isn't in the business of claiming to be non-manipulable). I don't know if they support arbitrary winning probabilities for draws, but probably there is some way to make it work.

(That does seem strictly worse than hashing powerball numbers though, which seem more trustworthy than random.org and easier to get.)

Comment by paul_christiano on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-25T18:01:53.688Z · score: 2 (1 votes) · EA · GW

I'm not sure what the myriad of more responsible ways are. If you trust CEA to not mess with the lottery more than you trust IRIS not to change their earthquake reports to mess with the lottery, then just having CEA pick numbers out of a hat could be better.

It definitely seems like free-riding on some other public lottery drawing that people already trust might be better.

Comment by paul_christiano on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-25T17:54:59.160Z · score: 3 (2 votes) · EA · GW

There is plenty of entropy in the API responses, that's not the worst concern.

I think the most serious question is whether a participant can influence the lottery draw (e.g. by getting IRIS to change low order digits of the reported latitude or longitude).
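For concreteness, a draw derived from a public data feed typically looks something like the following sketch (illustrative, not the actual protocol). Hashing makes the winner unpredictable in advance, but it does nothing against an insider who can nudge low-order digits of the input, since each nudge is effectively a fresh re-roll of the draw:

```python
import hashlib

def draw(public_string, total_tickets):
    """Derive a winning ticket number from an agreed-upon public string
    (e.g. a concatenation of lottery numbers and a seismic report).

    For 256-bit digests and small ticket counts the modulo bias is negligible.
    """
    digest = hashlib.sha256(public_string.encode()).digest()
    return int.from_bytes(digest, "big") % total_tickets
```

Changing even one low-order digit of `public_string` produces an independent-looking winner, which is exactly why the trustworthiness of the feed matters more than its entropy.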

Comment by paul_christiano on How to improve EA Funds · 2018-04-14T01:39:28.025Z · score: 4 (4 votes) · EA · GW

In general I feel like donor lotteries should be preferred as a default over small donations to EA funds (winners can ultimately donate to EA funds if they decide that's the best option).

What are the best arguments in favor of EA funds as a recommendation over lotteries? Looking more normal?

(Currently there are no active lotteries, this is not a recommendation for short-term donations.)

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-06T20:23:52.817Z · score: 1 (1 votes) · EA · GW

This standard of betterness is all you need to conclude: "every inefficient outcome is worse than some efficient outcome."

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-06T20:21:44.898Z · score: 2 (2 votes) · EA · GW

If they endorsed the view you say they do with respect to scalping, wouldn't they say "provided there was perfectly equitable distribution of incomes, scalping ensures that goods go to those who value them most". Missing out the first bit gives an extremely misleading impression of their view, doesn't it?

When economists say "how much do you value X" they are usually using the dictionary definition of value as "estimate the monetary worth." Economists understand that valuing something involves an implicit denominator and "who values most" will depend on the choice of denominator. You get approximately the same ordering for any denominator which can be easily transferred between people, and when they say "A values X more than B" they mean in that common ordering. Economists understand that that sense of value isn't synonymous with moral value (which can't be easily transferred between people).

The reason that easily transferable goods serve as a good denominator is that at the optimal outcome they should exactly track whatever the planner cares about (otherwise we could transfer them).

Expressing economists' actual view would take several additional sentences. The quote seems like a reasonable concise simplification.

Your version isn't true: an equitable distribution of incomes doesn't imply that everyone has roughly the same utility per marginal dollar. A closer formulation would be "Supposing that the policy-maker is roughly indifferent between giving a dollar to each person [e.g. as would be the case if the policy-maker has adopted roughly optimal policies in other domains, since dollars can be easily transferred between people] then scalping will ensure that the ticket goes to the person who the policy-maker would most prefer have it."

Immediately before your quote from Mankiw's book, he says "Equity involves normative judgments that go beyond the realm of economics and enter into the realm of political philosophy. We concentrate on efficiency as the social planner's goal. Keep in mind, however, that real policy-makers often care about equity as well." I agree the discussion is offensively simplified because it's a 101 textbook, but don't think this is evidence of fundamental confusion. If we read "equity" as "has the same marginal utility from a dollar" then this seems pretty in line with the utilitarian position.

Comment by Paul_Christiano on [deleted post] 2018-01-05T09:58:00.100Z

It's on my blog. I don't think the scheme works, and in general it seems any scheme introduces incentives to not look like a beneficiary. If I were to do this now, I would just run a prediction market on the total # of donations, have the match success level go from 50% to 100% over the spread, and use a small fraction of proceeds to place N buy and sell orders against the final book.

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-03T18:11:59.212Z · score: 3 (3 votes) · EA · GW

Economists who accept your crucial premise would necessarily think that there should be no redistribution at all, since the net effect of redistribution is to move goods from people who were originally willing to pay more to people who were originally willing to pay less. But "redistribution is always morally bad" is an extreme outlier view amongst economists.

See for example the IGM poll on the minimum wage, where there is significant support for small increases to the minimum wage despite acknowledgment of the allocative inefficiency. The question most economists ask is "is this an efficient way to redistribute wealth? do the benefits justify the costs?" They don't consider the case settled because it decreases allocative efficiency (as it obviously does).

I don't think it would be that hard to find lots of examples of economists defending particular policies on the basis that those willing to pay more should get the good.

People can make that argument as part of a broader principle like "we should give goods to people who are willing to pay most, and redistribute money in the most efficient way we can."

For example, I also often argue that the people willing to pay more should get the good. But I don't accept your crucial premise even a tiny bit. The same is true of the handful of economists I've taken a class from or interacted with at length, and so I'd guess it's the most common view.

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-03T18:04:02.967Z · score: 5 (5 votes) · EA · GW

Obviously what is optimal does depend on what we can compel the producer to do; if we can collect taxes, that will obviously be better. If we can compel the producer to suffer small costs to make the world better, there are better things to compel them to do. If we can create an environment in which certain behaviors are more expensive for the producer because they are socially unacceptable, there are better things to deem unacceptable. And so on.

More broadly, as a society we want to pick the most efficient ways to redistribute wealth, and as altruists we'd like to use our policy influence in the most efficient ways to redistribute wealth. Forcing the tickets to sell below market value is an incredibly inefficient way to redistribute wealth. So it can be a good idea in worlds where there are almost no options, but seems very unlikely to be a good idea in practice.

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-03T09:24:04.046Z · score: 2 (2 votes) · EA · GW

In actual fact, they are appealing to preference utilitarianism. This is a moral theory.

Economists are quite often appealing to a much simpler account of betterness: if everyone prefers option A to option B, then option A is better than option B.

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-03T09:13:52.706Z · score: 6 (6 votes) · EA · GW

Here is a stronger version of the pro-market-price argument:

  • The producer could sell a ticket for $1000 to Rich and then give $950 to Pete. This leaves both Rich and Pete better off, often very substantially.
  • In reality, Pete is not an optimal target for philanthropy, and so the producer could do even better by selling the ticket for $1000 to Rich and then giving to their preferred charity.
  • No matter what the producer wants, they can do better by selling the ticket at market price. And no matter what we want as advocates for a policy, we can do better by allowing them to. (In fact the world is complicated and it's not this clean, but that seems orthogonal to your objection.)

This is still not the strongest argument that can be made, but it's better than the argument from your crucial premise. I think there are few serious economists who accept your crucial premise in the way you mean it, though many might use it as a definition of welfare (but wouldn't consider total welfare synonymous with moral good).

Comment by paul_christiano on Announcing the 2017 donor lottery · 2017-12-22T04:58:47.521Z · score: 2 (2 votes) · EA · GW

What are the biggest upsides of transparency?

The actual value of the information produced seems modest.

Comment by paul_christiano on Announcing the 2017 donor lottery · 2017-12-18T06:41:14.972Z · score: 0 (0 votes) · EA · GW

You have diminishing returns to money, i.e. your utility vs. money curve is curved down. So a gamble with mean 0 has some cost to you, approximately (curvature) * (variance), that I was referring to as the cost-via-risk. This cost is approximately linear in the variance, and hence quadratic in the block size.
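A toy model of the quadratic scaling (the curvature constant is purely illustrative, and the approximation treats each participant's risk cost as curvature times payout variance):

```python
def cost_via_risk(donations, curvature):
    """Approximate aggregate certainty-equivalent cost of a winner-take-all
    lottery: sum over participants of curvature * Var(payout_i), where
    participant i wins the whole pot P with probability d_i / P."""
    pot = sum(donations)
    total = 0.0
    for d in donations:
        p_win = d / pot
        mean = d  # fair lottery: expected payout equals the donation
        variance = p_win * pot**2 - mean**2  # = d*P - d^2, roughly d*P
        total += curvature * variance
    return total

# Doubling the pot (same-size donors) roughly quadruples the aggregate cost,
# since summing d_i * P over participants gives approximately P^2.
k = 1e-10  # illustrative curvature constant
small = cost_via_risk([10_000.0] * 10, k)  # $100k pot
large = cost_via_risk([10_000.0] * 20, k)  # $200k pot
```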

Comment by paul_christiano on Announcing the 2017 donor lottery · 2017-12-17T19:21:15.000Z · score: 6 (8 votes) · EA · GW

A $200k lottery has about 4x as much cost-via-risk as a $100k lottery. Realistically I think that smaller sizes (with the option to lottery up further) are significantly better than bigger pots. As the pot gets bigger you need to do more and more thinking to verify that the risk isn't an issue.

If you were OK with variable pot sizes, I think the thing to do would be:

  • The lottery will be divided up into blocks.
  • Each block will have the same size, which will be something between $75k and $150k.
  • We provide a backstop only if the total donation is < $75k. Otherwise, we just divide the total up into chunks between $75k and $150k, aiming for about $100k each.
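The block rule above can be sketched as follows; the exact rounding rule is my own assumption:

```python
def divide_into_blocks(total):
    """Split the total donated amount into equal blocks of $75k-$150k,
    aiming for ~$100k each; totals under $75k form one backstopped block."""
    if total < 75_000:
        return [total]  # below the threshold: the backstop tops up this block
    n = max(1, round(total / 100_000))
    # Adjust the block count until each block lands in the allowed range.
    while total / n > 150_000:
        n += 1
    while total / n < 75_000 and n > 1:
        n -= 1
    return [total / n] * n
```

For example, a $220k total would split into two $110k blocks, while a $160k total would split into two $80k blocks.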
Comment by paul_christiano on Effective Altruism Grants project update · 2017-10-01T16:33:18.144Z · score: 2 (2 votes) · EA · GW

However, I suspect that this intuition was biased (upward), because I more often think in terms of "non-EA money". In non-EA money, CEA time would have a much higher nominal value. But if you think EA money can be used to buy good outcomes very cost-effectively (even at the margin) then $75 could make sense.

Normally people discuss the value of time by figuring out how many dollars they'd spend to save an hour. It's kind of unusual to ask how many dollars you'd have someone else spend so that you save an hour.

Comment by paul_christiano on Capitalism and Selfishness · 2017-09-16T03:17:54.292Z · score: 3 (3 votes) · EA · GW

Finally, capitalism requires a sufficiently self-interested culture such that it can sustain compounding capital accumulation through the sale of ever-greater commodities.

This is a common claim, but seems completely wrong. An economy of perfectly patient agents will accumulate capital much faster than a community that consumes 50% of its output. The patient agents will invest in infrastructure and technology and machines and so on to increase their future wealth.

The capitalists have to maximise productivity through technological innovation, wage repression, and so forth, or they are run into the ground and bankrupted by market competition

In an efficient market, the capitalists earn rents on their capital whatever they do.