Posts

Hiring engineers and researchers to help align GPT-3 2020-10-01T18:52:21.088Z
Altruistic equity allocation 2019-10-16T05:54:49.426Z
Ought: why it matters and ways to help 2019-07-26T01:56:34.037Z
Donor lottery details 2017-01-11T00:52:21.116Z
Integrity for consequentialists 2016-11-14T20:56:27.585Z
What is up with carbon dioxide and cognition? An offer 2016-04-06T01:18:03.612Z
Final Round of the Impact Purchase 2015-12-16T20:28:45.709Z
Impact purchase round 3 2015-06-16T17:16:12.858Z
Impact purchase: changes and round 2 2015-04-20T20:52:29.894Z
$10k of Experimental EA Funding 2015-02-25T19:54:29.881Z
Economic altruism 2014-12-05T00:51:44.715Z
Certificates of impact 2014-11-11T05:22:42.438Z
On Progress and Prosperity 2014-10-15T07:03:21.055Z
Three Impacts of Machine Intelligence 2013-08-23T10:10:22.937Z
The best reason to give later 2013-06-14T04:00:31.000Z
Giving now vs. later 2013-03-12T04:00:04.000Z
Risk aversion and investment (for altruists) 2013-02-28T05:00:34.000Z
Why might the future be good? 2013-02-27T05:00:49.000Z
Replaceability 2013-01-22T05:00:52.000Z

Comments

Comment by Paul_Christiano on Draft report on existential risk from power-seeking AI · 2021-05-03T19:31:49.566Z · EA · GW

A 5% probability of disaster isn't any more or less confident/extreme/radical than a 95% probability of disaster; in both cases you're sticking your neck out to make a very confident prediction.

"X happens" and "X doesn't happen" are not symmetrical once I know that X is a specific event. Most things at the level of specificity of "humans build an AI that outmaneuvers humans to permanently disempower them" just don't happen.

The reason we are even entertaining this scenario is that there is a special argument that makes it seem very plausible. If that's all you've got---if there's no other source of evidence than the argument---then you've just got to start talking about the probability that the argument is right.

And the argument actually is a brittle and conjunctive thing. (Humans do need to be able to build such an AI by the relevant date, they do need to decide to do so, the AI they build does need to decide to disempower humans notwithstanding a prima facie incentive for humans to avoid that outcome.)

That doesn't mean this is the argument, or that the argument has to be brittle in this way---there might be a different argument that explains in one stroke why several of these things will happen. In that case, it's going to be more productive to talk about that argument.

(For example, in the context of the multi-stage argument undershooting success probabilities, it's that people will be competently trying to achieve X and most of the uncertainty is in estimating how hard and how effectively people are trying---which is correlated across steps. So you would do better by trying to go for the throat and reason about the common cause of each success, and you will always lose if you don't see that structure.)
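
Here's a toy numerical version of that point (arbitrary numbers): if each stage's success is driven partly by a shared factor like how hard and how effectively people are trying, then multiplying the per-stage probabilities as if they were independent undershoots the probability that every stage succeeds.

```python
import numpy as np

rng = np.random.default_rng(0)
n, stages = 100_000, 5

# Shared latent factor: how hard and how effectively people are trying.
effort = rng.normal(size=n)

# Each stage succeeds if shared effort plus stage-specific noise clears a threshold.
noise = rng.normal(size=(n, stages))
success = (0.8 * effort[:, None] + 0.6 * noise) > -0.5

p_marginal = success.mean(axis=0)      # per-stage success probabilities (~0.7 each)
p_independent = p_marginal.prod()      # estimate if you treat the stages as independent
p_joint = success.all(axis=1).mean()   # actual probability of clearing every stage

print(p_independent, p_joint)          # the independence estimate undershoots substantially
```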

And of course some of those steps may really just be quite likely and one shouldn't be deterred from putting high probabilities on highly-probable things. E.g. it does seem like people have a very strong incentive to build powerful AI systems (and moreover the extrapolation suggesting that we will be able to build powerful AI systems is actually about the systems we observe in practice and already goes much of the way to suggesting that we will do so). Though I do think that the median MIRI staff-member's view is overconfident on many of these points.

Comment by Paul_Christiano on Dutch anti-trust regulator bans pro-animal welfare chicken cartel · 2021-02-25T17:41:26.548Z · EA · GW

Is your impression that if customers were willing to pay for it, then that wouldn't be sufficient cause to say that it benefited customers? (Does that mean that e.g. a standard ensuring that children's food doesn't cause discomfort also can't be protected, since it benefits customers' kids rather than customers themselves?)

Comment by Paul_Christiano on Dutch anti-trust regulator bans pro-animal welfare chicken cartel · 2021-02-24T16:37:48.881Z · EA · GW

These cases are also relevant to alignment agreements between AI labs, and it's interesting to see the dynamic playing out in practice. Cullen wrote about this here much better than I will.

Roughly speaking, if individual consumers would prefer to use a riskier AI (because costs are externalized) then it seems like an agreement to make AI safer-but-more-expensive would run afoul of the same principles as this chicken-welfare agreement.

On paper, there are some reasons that the AI alignment case should be easier than the chicken-welfare case: (i) using unsafe AI hurts non-customer humans, and AI customers care more about other humans than they do about chickens, and (ii) deploying unaligned AI actually likely hurts other AI customers in particular (since they will be the main ones competing with the unaligned but more sophisticated AI). So it's likely that every individual AI customer would benefit.

Unfortunately, it seems like the same thing could be true in the chicken case---every individual customer could prefer the world with the welfare agreement---and it wouldn't change the regulator's decision.

For example, suppose that Dutch consumers eat 100 million chickens a year, 10/year for each of 10 million customers. Customer surveys discover that customers would only be willing to pay $0.01 for a chicken to have more space and a slightly longer life, but that these reforms increase chicken prices by $1. So the regulator strikes down the reform.

But with welfare standards in place, each customer pays an extra $10/year for chicken and 100 million chickens have improved lives, at a cost of less than $0.0000001 per improved chicken, thousands of times lower than their WTP. (This is the same dynamic described here.) So every chicken consumer prefers the world where the standards are in place, despite not being willing to pay money to improve the lives of the tiny number of chickens they eat personally. This seems to be a very common reaction to discussions of animal welfare ("what difference does my consumption make? I can't change the way most chickens are treated...")
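
Spelling out the arithmetic in this example:

```python
customers = 10_000_000        # Dutch chicken consumers in the example
chickens = 100_000_000        # chickens eaten per year
price_increase = 1.00         # extra cost per chicken under the welfare standard
wtp = 0.01                    # stated willingness to pay per improved chicken

extra_cost_per_customer = (chickens / customers) * price_increase   # $10/year
cost_per_improved_chicken = extra_cost_per_customer / chickens      # $1e-7
print(extra_cost_per_customer, cost_per_improved_chicken, wtp / cost_per_improved_chicken)
# 10.0 1e-07 100000.0 -- each customer pays $10/year, every one of the 100M chickens is
# covered, so the effective price per improved chicken is a tiny fraction of their WTP.
```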

Because the number of chicken-eaters is so large, the relevant question in the survey should be "Would you prefer that someone else pay $X in order to improve chicken welfare?", making a tradeoff between two strangers. That's the relevant question for them, since the welfare standards mostly affect other people.

Analogously, if you ask AI consumers "Would you prefer to have an aligned AI, or a slightly more sophisticated unaligned AI?" they could easily all say "I want the more sophisticated one," even if every single human would be better off if there were an agreement to make only aligned AI. If an anti-trust regulator used the same standard as in this case, it seems like they would throw out an alignment agreement because of that, even knowing that it would make every single human worse off.

I still think in practice AI alignment agreements would be fine for a variety of reasons. For example, I think if you ran a customer survey it's likely people would say they prefer to use aligned AI even if it would disadvantage them personally, because public sentiment towards AI is very different and the regulatory impulse is stronger. (Though I find it hard to believe that anything would end up hinging on such a survey, and even more strongly I think it would never come to this because there would be much less political pressure to enforce anti-trust.)

Comment by Paul_Christiano on Alternatives to donor lotteries · 2021-02-17T19:45:53.807Z · EA · GW

I guess I wouldn't recommend the donor lottery to people who wouldn't be happy entering a regular lottery for their charitable giving

Strong +1.

If I won a donor lottery, I would consider myself to have no obligation whatsoever towards the other lottery participants, and I think many other lottery participants feel the same way. So it's potentially quite bad if some participants are thinking of me as an "allocator" of their money. To the extent there is ambiguity in the current setup, it seems important to try to eliminate that.

Comment by Paul_Christiano on [Link post] Are we approaching the singularity? · 2021-02-15T18:50:22.427Z · EA · GW

  1. I think that acceleration is autocorrelated---if things are accelerating rapidly at time T they are also more likely to be accelerating rapidly at time T+1. That's intuitively pretty likely, and it seems to show up pretty strongly in the data. Roodman makes no attempt to model it, in the interest of simplicity and analytical tractability. We are currently in a stagnant period, and so I think you should expect continuing stagnation. I'm not sure exactly how large the effect is (obviously it depends on the model), but I think it's at least a 20-40 year delay. (There are two related angles to get a sense for the effect: one is to observe that autocorrelations seem to fade away on the timescale of a few doublings, rather than being driven by some amount of calendar time, and the other is to just look at the fact that we've had something like ~40 years of relative stagnation.) A toy simulation of this point follows after this list.
  2. I think it's plausible that historical acceleration is driven by population growth, and that just won't really happen going forward. So at a minimum we should be uncertain between Roodman's model and one that separates out population explicitly, which will tend to stagnate around the time population is limited by fertility rather than productivity.
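
Here's the toy illustration of the autocorrelation point in 1 (made-up numbers, nothing like Roodman's actual model): if the growth regime is sticky, then conditioning on currently being in a stagnant period pushes out the expected wait until fast growth resumes.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_wait(persistence, p_fast=0.3, sims=20_000):
    """Starting from a stagnant period, count periods until a fast-growth period arrives.
    Each period the regime persists with probability `persistence`; otherwise it is
    redrawn, coming up 'fast' with probability p_fast."""
    waits = []
    for _ in range(sims):
        fast, t = False, 0
        while not fast:
            t += 1
            if rng.random() > persistence:        # regime gets redrawn this period
                fast = rng.random() < p_fast
        waits.append(t)
    return np.mean(waits)

print(expected_wait(persistence=0.0))   # no autocorrelation: wait ~1/0.3 = 3.3 periods
print(expected_wait(persistence=0.9))   # sticky regime: wait ~10x longer
```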

(I agree with Max Daniel below: I don't think Nordhaus' methodology is inherently more trustworthy. I think it's dealing with a relatively small amount of pretty short-term data, and is generally using a much more opinionated model of what technological change would look like.)

Comment by Paul_Christiano on [Link post] Are we approaching the singularity? · 2021-02-13T16:57:34.624Z · EA · GW

The relevant section is VII. Summarizing the six empirical tests:

  1. You'd expect productivity growth to accelerate as you approach the singularity, but it is slowing.
  2. The capital share should approach 100% as you approach the singularity. The share is growing, but at the slow rate of ~0.5%/year. At that rate it would take roughly 100 years to approach 100%.
  3. Capital should get very cheap as you approach the singularity. But capital costs (outside of computers) are falling relatively slowly.
  4. The total stock of capital should get large as you approach the singularity. In fact the stock of capital is slowly falling relative to output.
  5. Information should become an increasingly important part of the capital stock as you approach the singularity. This share is increasing, but will also take >100 years to become dominant.
  6. Wage growth should accelerate as you approach the singularity, but it is slowing.

I would group these into two basic classes of evidence:

  • We aren't getting much more productive, but that's what a singularity is supposed to be all about.
  • Capital and IT extrapolations are potentially compatible with a singularity, but only on a timescale of 100+ years.

I'd agree that these seem like two points of evidence against singularity-soon, and I think that if I were going on outside-view economic arguments I'd probably be <50% singularity by 2100. (Though I'd still have a meaningful probability soon, and even at 100 years the prospect of a singularity would be one of the most important facts about the basic shape of the future.)

There are some more detailed aspects of the model that I don't buy, e.g. the very high share of information capital and persistent slow growth of physical capital. But I don't think they really affect the bottom line.

Comment by Paul_Christiano on Three Impacts of Machine Intelligence · 2021-02-13T02:03:15.499Z · EA · GW

If the market can't price 30-year cashflows, it can't price anything, since for any infinitely-lived asset (eg stocks!), most of the present-discounted value of future cash flows is far in the future. 

If an asset pays me far in the future, then long-term interest rates are one factor affecting its price. But it seems to me that in most cases that factor still explains a minority of variation in prices (and because it's a slowly-varying factor it's quite hard to make money by predicting it).

For example, there is a ton of uncertainty about how much money any given company is going to make next year. We get frequent feedback signals about those predictions, and people who win bets on them immediately get returns that let them show how good they are and invest more, and so that's the kind of case where I'd be more scared of outpredicting the market.

So I guess that's saying that I expect the relative prices of stocks to be much more efficient than the absolute level.

See eg this Ralph Koijen thread and linked paper, "the first 10 years of dividends only make up ~20% of the value of the stock market. 80% is due to value of cash flows beyond 10 years"

Haven't looked at the claim but it looks kind of misleading. Dividend yield for SPY is <2%, which I guess is what they are talking about? But buyback yield is a further 3%, and with a 5% yield you're getting 40% of the value in the first 10 years, which sounds more like it. So that would mean that you've gotten half of the value within 13.5 years instead of 31 years.
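
For reference, here's the back-of-envelope I'm using, assuming a constant total payout yield (Gordon-style, so the exact figures depend on the discounting convention):

```python
import math

def value_fraction(payout_yield, years):
    """Fraction of present value delivered in the first `years`, assuming a constant
    total payout yield (Gordon-style: price = payout / (r - g), so yield = r - g)."""
    return 1 - math.exp(-payout_yield * years)

def half_value_horizon(payout_yield):
    """Years needed to deliver half of the present value."""
    return math.log(2) / payout_yield

for y in (0.02, 0.05):     # dividends only vs. dividends plus buybacks
    print(y, round(value_fraction(y, 10), 2), round(half_value_horizon(y), 1))
# 0.02 -> ~0.18 of the value in the first 10 years, half the value by ~35 years
# 0.05 -> ~0.39 of the value in the first 10 years, half the value by ~14 years
```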

Technically the stock is still valued based on the future dividends, and a buyback is just decreasing outstanding shares and so increasing earnings per share. But for the purpose of pricing the stock it should make no difference whether earnings are distributed as dividends or buybacks, so the fact that buybacks push cashflows to the future can't possibly affect the difficulty of pricing stocks.

Put a different way, the value of a buyback to investors doesn't depend on the actual size of future cashflows, nor on the discount rate. Those are both cancelled out because they are factored into the price at which the company is able to buy back its shares. (E.g. if PepsiCo was making all of its earnings in the next 5 years, and ploughing them into buybacks, after which they made a steady stream of not-much-money, then PepsiCo prices would still be equal to the NPV of dividends, but the current PepsiCo price would just be an estimate of earnings over the next 5 years and would have almost no relationship to long-term interest rates.)

Even if this is right it doesn't affect your overall point too much though, since 10-20 year time horizons are practically as bad as 30-60 year time horizons.

Comment by Paul_Christiano on Three Impacts of Machine Intelligence · 2021-02-12T17:59:22.351Z · EA · GW

I think the market just doesn't put much probability on a crazy AI boom anytime soon. If you expect such a boom then there are plenty of bets you probably want to make. (I am personally short US 30-year debt, though it's a very small part of my AI-boom portfolio.)

I think it's very hard for the market to get 30-year debt prices right because the time horizons are so long and they depend on super hard empirical questions with ~0 feedback. Prices are also determined by supply and demand across a truly huge number of traders, and making this trade locks up your money forever and can't be leveraged too much. So market forecasts are basically just a reflection of broad intellectual consensus about the future of growth (rather than views of the "smart money" or anything), and the mispricing is just a restatement of the fact that AI-boom is a contrarian position.

Comment by Paul_Christiano on AGB's Shortform · 2021-01-06T03:21:46.099Z · EA · GW

Some scattered thoughts (sorry for such a long comment!). Organized in order rather than by importance---I think the most important argument for me is the analogy to computers.

  • It's possible to write "Humanity survives the next billion years" as a conjunction of a billion events (humanity survives year 1, and year 2, and...). It's also possible to write "humanity goes extinct next year" as a conjunction of a billion events (Alice dies, and Bob dies, and...). Both of those are quite weak prima facie justifications for assigning high confidence. You could say that the second conjunction is different, because the billionth person is very likely to die once the others have died (since there has apparently been some kind of catastrophe), but the same is true for survival. In both cases there are conceivable events that would cause every term of the conjunction to be true,  and we need to address the probability of those common causes directly. Being able to write the claim as a conjunction doesn't seem to help you get to extreme probabilities without an argument about independence.
  • I feel you should be very hesitant to assign 99%+ probabilities without a good argument, and I don't think this is about anchoring to percent. The burden of proof gets stronger and stronger as you move closer to 1, and 100 is getting to be a big number. I think this is less likely to be a tractable disagreement than the other bullets but it seems worth mentioning for completeness. I'm curious if  you think there are other natural statements where the kind of heuristic you are describing (or any other similarly abstract heuristic) would justifiably get you to such high confidences. I agree with Max Daniel's point that it doesn't work for realistic versions of claims like "This coin will come up heads 30 times in a row." You say that it's not exclusive to simplified models but I think I'd be similarly skeptical of any application of this principle. (More generally, I think it's not surprising to assign very small probabilities to complex statements based on weak evidence, but that it will happen much more rarely for simple statements. It doesn't seem promising to get into that though.)
  • I think space colonization is probably possible, though getting up to probabilities like 50% for space colonization feasibility would be a much longer discussion. (I personally think >50% probability is much more reasonable than <10%.) If there is a significant probability that we colonize space, and that spreading out makes the survival of different colonists independent (as it appears it would), then it seems like we end up with some significant probability of survival. That said, I would also assign ~1/2 probability to surviving a billion years even if we were confined to Earth. I could imagine being argued down to 1/4 or even 1/8 but each successive factor of 2 seems much harder. So in some sense the disagreement isn't really about colonization.
  • Stepping back, I think the key object-level questions are something like "Is there any way to build a civilization that is very stable?" and "Will people try?" It seems to me you should have a fairly high probability on "yes" to both questions. I don't think you have to invoke super-aligned AI to justify that conclusion---it's easy to imagine organizing society in a way which drives existing extinction risks to negligible levels, and once that's done it's not clear where you'd get to 90%+ probabilities for new risks emerging that are much harder to reduce. (I'm not sure which step of this you get off the boat for---is it that you can't imagine a world that say reduced the risk of an engineered pandemic killing everyone to < 1/billion per year? Or that you think it's very likely other much harder-to-reduce risks would emerge?)
  • A lot of this is about burden of proof arguments. Is the burden of proof on someone to exhibit a risk that's very hard to reduce, or someone to argue that there exists no risk that is hard to reduce? Once we're talking about 10% or 1% probabilities it seems clear to me that the burden of proof is on the confident person. You could try to say "The claim of 'no bad risks' is a conjunction over all possible risks, so it's pretty unlikely" but I could just as well say "The claim about 'the risk is irreducible' is a conjunction over all possible reduction strategies, so it's pretty unlikely" so I don't think this gets us out of the stalemate (and the stalemate is plenty to justify uncertainty).
  • I do furthermore think that we can discuss concrete (kind of crazy) civilizations that are likely to have negligible levels of risk, given that e.g. (i) we have existence proofs for highly reliable machines over billion-year timescales, namely life, (ii) we have existence proofs for computers if you can build reliable machinery of any kind, (iii) it's easy to construct programs that appear to be morally relevant but which would manifestly keep running indefinitely.  We can't get too far with this kind of concrete argument, since any particular future we can imagine is bound to be pretty unlikely. But it's relevant to me that e.g. stable-civilization scenarios seem about as gut-level plausible to me as non-AI extinction scenarios do in the 21st century.
  • Consider the analogous question "Is it possible to build computers that successfully carry out trillions of operations without errors that corrupt the final result?" My understanding is that in the early 20th century this question was seriously debated (though that's not important to my point), and it feels very similar to your question. It's very easy for a computational error to cascade and change the final result of a computation. It's possible to take various precautions to reduce the probability of an uncorrected error, but why think that it's possible to reduce that risk to levels lower than 1 in a trillion, given that all observed computers have had fairly high error rates? Moreover, it seems that error rates are growing as we build bigger and bigger computers, since each element has an independent failure rate, including the machinery designed to correct errors. To really settle this we need to get into engineering details, but until you've gotten into those details I think it's clearly unwise to assign very low probability to building a computer that carries out trillions of steps successfully---the space of possible designs is large and people are going to try to find one that works, so you'd need to have some good argument about why to be confident that they are going to fail. (A toy redundancy calculation along these lines appears after this list.)
  • You could say that computers are an exceptional example I've chosen with hindsight. But I'm left wondering if there are any valid applications of this kind of heuristic---what's the reference class of which "highly reliable computers" are exceptional rather than typical?
  • If someone said: "A billion years is a long time. Any given thing that can plausibly happen should probably be expected to happen over that time period" then I'd ask about why life survived the last billion years.
  • You could say that "a billion years" is a really long time for human civilization (given that important changes tend to happen within decades or centuries) but not a long time for intelligent life (given that important changes takes millions of years). This is similar to what happens if you appeal to current levels of extinction risk being really high. I don't buy this because life on earth is currently at a period of unprecedentedly rapid change. You should have some reasonable probability of returning to more historically typical timescales of hundreds of millions of years, which in turn gives you a reasonable overall probability on surviving for hundreds of millions of years. (Actually I think we should have >50% probabilities for reversion to lower timescales, since we can tell that the current period of rapid growth will soon be over. Over our history rapid change and rapid growth have basically coincided, so it's particularly plausible that returning to slow-growth will also return to slow-change.)
  • Applying the rule of thumb for estimating lifetimes to "the human species" rather than "intelligent life" seems like it's doing a huge amount of work. It might be reasonable to do the extrapolation using some mixture between these reference classes (and others), but in order to get extreme probabilities for extinction you'd need to have an extreme mixture. This is part of the general pattern why you don't usually end up with 99% probabilities for interesting questions without real arguments---you need to not only have a way of estimating that has very high confidence, you need to be very confident in that way of estimating.
  • You could appeal to some similar outside view to say "humanity will undergo changes similar in magnitude to those that have occurred over the last billion years;" I think that's way more plausible (though I still wouldn't believe 99%) but I don't think that it matters for claims about the expected moral value of the future.
  • The doomsday argument can plausibly arrive at very high confidences based on anthropic considerations (if you accept those anthropic principles with very high confidence). I think many long-termists would endorse the conclusion that the vast majority of observers like us do not actually live in a large and colonizable universe---not at 99.999999% but at least at 99%. Personally I would reject the inference that we probably don't live in a large universe because I reject the implicit symmetry principle. At any rate, these lines of argument go in a rather different direction than the rest of your post and I don't feel like it's what you are getting at.
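
Here's the toy redundancy calculation referenced in the computers bullet above. It's idealized---it assumes the error-correction machinery itself is perfect, which is exactly the complication mentioned there---but it shows why error rates far below 1-in-a-trillion aren't obviously out of reach:

```python
from math import comb

def redundant_error_rate(p, k):
    """Probability that a k-fold redundant computation (majority vote) gets the wrong
    answer, if each copy errs independently with probability p and the vote is perfect."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i) for i in range(k // 2 + 1, k + 1))

p, ops = 1e-4, 1e12          # per-operation error rate, operations in the computation
for k in (1, 5, 11, 21):
    per_op = redundant_error_rate(p, k)
    print(k, per_op, per_op * ops)   # expected corrupted operations out of a trillion
# k=1 gives ~1e8 corrupted operations; by k=11 the expected number is ~1e-9, i.e. the
# whole trillion-step computation almost certainly runs clean.
```
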
Comment by Paul_Christiano on Against GDP as a metric for timelines and takeoff speeds · 2020-12-30T00:03:28.544Z · EA · GW

Scaling down all the amounts of time, here's how that situation sounds to me: US output doubles in 15 years (basically the fastest it ever has), then doubles again in 7 years. The end of the 7 year doubling is the first time that your hypothetical observer would say "OK yeah maybe we are transitioning to a new faster growth mode," and stuff started getting clearly crazy during the 7 year doubling. That scenario wouldn't be surprising to me. If that scenario sounds typical to you then it's not clear there's anything we really disagree about.

Moreover, it seems to contradict your claim that 0.14% growth was already high by historical standards.

0.14%/year growth sustained over 500 years is a doubling. If you did that between 5000BC and 1000AD then that would be 4000x growth. I think we have a lot of uncertainty about how much growth actually occurred but we're pretty sure it's not 4000x (e.g. going from 1 million people to 4 billion people). Standard kind of made-up estimates are more like 50x (e.g. those cited in Roodman's report), roughly half that growth rate.
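
Spelling out that arithmetic:

```python
import math

rate = 0.0014                              # 0.14%/year
print(math.log(2) / rate)                  # ~495 years per doubling
print((1 + rate) ** 6000)                  # 5000BC to 1000AD is 6,000 years: ~4,400x
                                           # (12 doublings is ~4,100x, i.e. the ~4000x above)
print(math.exp(math.log(50) / 6000) - 1)   # ~0.00065: a 50x-over-6,000-years estimate
                                           # corresponds to ~0.065%/year, roughly half of 0.14%
```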

There is lots of variance in growth rates, and growth would temporarily be above that level, given that populations grow way faster than that when they have enough resources. That makes it harder to tell what's going on but I think you should still be surprised to see such high growth rates sustained for many centuries.

(assuming you discount 1350 as I do as an artefact of recovering from various disasters

This doesn't seem to work, especially if you look at the UK. Just consider a long enough period of time (like 1000AD to 1500AD) to include both the disasters and the recovery. At that point, disasters should if anything decrease growth rates. Yet this period saw historically atypically fast growth.

Comment by Paul_Christiano on Against GDP as a metric for timelines and takeoff speeds · 2020-12-29T19:42:34.552Z · EA · GW

Some thoughts on the historical analogy:

If you look at the graph at the 1700 mark, GWP is seemingly on the same trend it had been on since antiquity. The industrial revolution is said to have started in 1760, and GWP growth really started to pick up steam around 1850. But by 1700 most of the Americas, the Philippines and the East Indies were directly ruled by European powers

I think European GDP was already pretty crazy by 1700. There's been a lot of recent arguing about the particular numbers and I am definitely open to just being wrong about this, but so far nothing has changed my basic picture.

After a minute of thinking my best guess for finding the most reliable time series was from the Maddison project. I pulled their dataset from here.

Here's UK population:

  • 1000AD: 2 million
  • 1500AD: 3.9 million (0.14%/year growth)
  • 1700AD: 8.6 million (0.39%)
  • 1820AD: 21.2 million (0.76%)

A 0.14%/year growth rate was already very fast by historical standards, and by 1700 things seemed really crazy.
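
For reference, the growth rates in these lists are just annualized rates between the endpoints (the same formula reproduces the Spain and world figures below; small discrepancies come from rounding the population numbers):

```python
import math

def annual_growth(pop_start, pop_end, years):
    """Average annual growth rate between two data points (continuous compounding)."""
    return math.log(pop_end / pop_start) / years

uk = {1000: 2.0, 1500: 3.9, 1700: 8.6, 1820: 21.2}    # population in millions, as above
dates = sorted(uk)
for start, end in zip(dates, dates[1:]):
    print(f"{start}-{end}: {annual_growth(uk[start], uk[end], end - start):.2%}/year")
# 1000-1500: ~0.13%/year, 1500-1700: ~0.40%/year, 1700-1820: ~0.75%/year
```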

Here's population in Spain:

  • 1000AD: 4 million
  • 1500AD: 6.8 million (0.11%)
  • 1700AD: 8.8 million (0.13%)
  • 1820AD: 12.2 million (0.28%)

The 1500-1700 acceleration is less marked here, but it still seems like growth was fast.

Here's the world using the data we've all been using in the past (which I think is much more uncertain):

  • 10000BC: 4 million
  • 3000BC: 14 million (0.02%)
  • 1000BC: 50 million (0.06%)
  • 1000AD: 265 million (0.08%)
  • 1500AD: 425 million (0.09%)
  • 1700AD: 610 million (0.18%)
  • 1820AD: 1 billion (0.41%)

This puts the 0.14%/year growth in the UK in context, and also suggests that things were generally blowing up by 1700AD.

I think that looking at the country-level data is probably better since it's more robust, unless your objection is "GWP isn't what matters because some countries' GDP will be growing much faster."

Comment by Paul_Christiano on Some thoughts on the EA Munich // Robin Hanson incident · 2020-10-17T17:18:33.621Z · EA · GW

I'm not sure what difference in prioritization this would imply or if we have remaining quantitative disagreements. I agree that it is bad for important institutions to become illiberal or collapse and so erosion of liberal norms is worthwhile for some people to think about. I further agree that it is bad for me or my perspective to be pushed out of important institutions (though much less bad to be pushed out of EA than out of Hollywood or academia).

It doesn't currently seem like thinking or working on this issue should be a priority for me (even within EA other people seem to have clear comparative advantage over me). I would feel differently if this was an existential issue or had a high enough impact, and I mostly dropped the conversation when it no longer seemed like that was at issue / it seemed in the quantitative reference class of other kinds of political maneuvering. I generally have a stance of just doing my thing rather than trying to play expensive political games, knowing that this will often involve losing political influence.

It does feel like your estimates for the expected harms are higher than mine, which I'm happy enough to discuss, but I'm not sure there's a big disagreement (and it would have to be quite big to change my bottom line).

I was trying to get at possible quantitative disagreements by asking things like "what's the probability that making pro-speech comments would itself be a significant political liability at some point in the future?" I think I have a probability of perhaps 2-5% on "meta-level pro-speech comments like this one eventually become a big political liability and participating in such discussions causes Paul to miss out on at least one significant opportunity to do good or have influence."

I'm always interested in useful thoughts about cost-effective things to do. I could also imagine someone making the case that "think about it more" is cost-effective for me, but I'm more skeptical of that (I expect they'd instead just actually do that thinking and tell me what they think I should do differently as a result, since the case for them thinking will likely be much better than the case for me doing it). I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues and I didn't intend for the grandparent to be pushing against that.

For me it seems like one easy and probably-worthwhile intervention is to (mostly) behave according to a set of liberal norms that I like (and I think remain very popular) and to be willing to pay costs if some people eventually reject that behavior (confident that there will be other communities that have similar liberal norms). Being happy to talk openly about "cancel culture" is part of that easy approach, and if that led to serious negative consequences then it would be a sign that the issue is much more severe than I currently believe and it's more likely I should do something. In that case I do think it's clear there is going to be a lot of damage, though again I think we differ a bit in that I'm more scared about the health of our institutions than people like me losing influence.

Comment by Paul_Christiano on Hiring engineers and researchers to help align GPT-3 · 2020-10-09T16:37:46.088Z · EA · GW

My process was to check the "About the forum" link on the left hand side, see that there was a section on "What we discourage" that made no mention of hiring, then search for a few job ads posted on the forum and check that no disapproval was expressed in the comments of those posts.

Comment by Paul_Christiano on Hiring engineers and researchers to help align GPT-3 · 2020-10-05T19:43:28.252Z · EA · GW

I think that a scaled up version of GPT-3 can be directly applied to problems like "Here's a situation. Here's the desired result. What action will achieve that result?" (E.g. you can already use it to get answers like "What copy will get the user to subscribe to our newsletter?" and we can improve performance by fine-tuning on data about actual customer behavior or by combining GPT-3 with very simple search algorithms.)
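
As a rough sketch of what I mean by combining a model with very simple search (the function names here are stand-ins, not a real API; the scorer could be fine-tuned on data about actual customer behavior):

```python
def best_of_n(prompt, generate, score, n=16):
    """Sample n candidate completions and keep the one a learned scorer prefers."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

# e.g. (with stand-in functions):
# copy = best_of_n("Write copy that gets the user to subscribe to our newsletter.",
#                  generate=sample_from_language_model,
#                  score=predicted_subscription_rate)
```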

I think that if GPT-3 was more powerful then many people would apply it to problems like that. I'm concerned that such systems will then be much better at steering the future than humans are, and that none of these systems will be actually trying to help people get what they want.

A bunch of people have written about this scenario and whether/how it could be risky. I wish that I had better writing to refer people to. Here's a post I wrote last year to try to communicate what I'm concerned about.

Comment by Paul_Christiano on Hiring engineers and researchers to help align GPT-3 · 2020-10-05T19:34:57.666Z · EA · GW

Hires would need to be able to move to the US.

Comment by Paul_Christiano on Hiring engineers and researchers to help align GPT-3 · 2020-10-05T19:34:29.258Z · EA · GW

No, I'm talking somewhat narrowly about intent alignment, i.e. ensuring that our AI system is "trying" to do what we want. We are a relatively focused technical team, and a minority of the organization's investment in safety and preparedness.

The policy team works on identifying misuses and developing countermeasures, and the applied team thinks about those issues as they arise today.

Comment by Paul_Christiano on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-26T01:59:48.704Z · EA · GW

The conclusion I draw from this is that many EAs are probably worried about CC but are afraid to talk about it publicly because in CC you can get canceled for talking about CC, except of course to claim that it doesn't exist. (Maybe they won't be canceled right away, but it will make them targets when cancel culture gets stronger in the future.) I believe that the social dynamics leading to development of CC do not depend on the balance of opinions favoring CC, and only require that those who are against it are afraid to speak up honestly and publicly (c.f. "preference falsification"). That seems to already be the situation today.

It seems possible to me that many institutions (e.g. EA orgs, academic fields, big employers, all manner of random FB groups...) will become increasingly hostile to speech or (less likely) that they will collapse altogether.

That does seem important. I mostly don't think about this issue because it's not my wheelhouse (and lots of people talk about it already). Overall my attitude towards it is pretty similar to other hypotheses about institutional decline. I think people at EA orgs have way more reasons to think about this issue than I do, but it may be difficult for them to do so productively.

If someone convinced me to get more pessimistic about "cancel culture" then I'd definitely think about it more. I'd be interested in concrete forecasts if you have any. For example, what's the probability that making pro-speech comments would itself be a significant political liability at some point in the future? Will there be a time when a comment like this one would be a problem?

Looking beyond the health of existing institutions, it seems like most people I interact with are still quite liberal about speech, including a majority of people who I'd want to work with, socialize with, or take funding from. So hopefully the endgame boils down to freedom of association. Some people will run a strategy like "Censure those who don't censure others for not censuring others for problematic speech" and take that to its extreme, but the rest of the world will get along fine without them and it's not clear to me that the anti-speech minority has anything to do other than exclude people they dislike (e.g. it doesn't look like they will win elections).

in CC you can get canceled for talking about CC, except of course to claim that it doesn't exist. (Maybe they won't be canceled right away, but it will make them targets when cancel culture gets stronger in the future.)

I don't feel that way. I think that "exclude people who talk openly about the conditions under which we exclude people" is a deeply pernicious norm and I'm happy to keep blithely violating it. If a group excludes me for doing so, then I think it's a good sign that the time had come to jump ship anyway. (Similarly if there was pressure for me to enforce a norm I disagreed with strongly.)

I'm generally supportive of pro-speech arguments and efforts and I was glad to see the Harper's letter. If this is eventually considered cause for exclusion from some communities and institutions then I think enough people will be on the pro-speech side that it will be fine for all of us.

I generally try to state my mind if I believe it's important, don't talk about toxic topics that are unimportant, and am open about the fact that there are plenty of topics I avoid. If eventually there are important topics that I feel I can't discuss in public then my intention is to discuss them.

I would only intend to join an internet discussion about "cancellation" in particularly extreme cases (whether in terms of who is being canceled, severe object-level consequences of the cancellation, or the coercive rather than plausibly-freedom-of-association nature of the cancellation).

Comment by Paul_Christiano on Does Economic History Point Toward a Singularity? · 2020-09-09T20:03:38.530Z · EA · GW

Thanks, super helpful.

(I don't really buy an overall take like "It seems unlikely" but it doesn't feel that mysterious to me where the difference in take comes from. From the super zoomed out perspective 1200 AD is just yesterday from 1700AD, it seems like random fluctuations over 500 years are super normal and so my money would still be on "in 500 years there's a good chance that China would have again been innovating and growing rapidly, and if not then in another 500 years it's reasonably likely..." It makes sense to describe that situation as "nowhere close to IR" though. And it does sound like the super fast growth is a blip.)

Comment by Paul_Christiano on Does Economic History Point Toward a Singularity? · 2020-09-09T14:56:59.169Z · EA · GW

I took numbers from Wikipedia but have seen different numbers that seem to tell the same story although their quantitative estimates disagree a ton.

The first two numbers are both higher than growth rates could have plausibly been in a sustained way during any previous part of history (and the 0-1000AD one probably is as well), and they seem to be accelerating rather than returning to a lower mean (as must have happened during any historical period of similar growth).

My current view is that China was also historically unprecedented at that time and probably would have had an IR shortly after Europe. I totally agree that there is going to be some mechanistic explanation for why Europe caught up with and then overtook China, but from the perspective of the kind of modeling we are discussing I feel super comfortable calling it noise (and expecting similar "random" fluctuations going forward that also have super messy contingent explanations).

Comment by Paul_Christiano on Does Economic History Point Toward a Singularity? · 2020-09-09T14:47:20.702Z · EA · GW

If one believed the numbers on Wikipedia, it seems like Chinese growth was also accelerating a ton and it was not really far behind on the IR, such that I wouldn't expect to be able to easily eyeball the differences.

If you are trying to model things at the level that Roodman or I are, the difference between 1400 and 1600 just isn't a big deal; the noise terms are on the order of 500 years at that point.

So maybe the interesting question is if and why scholars think that China wouldn't have had an IR shortly after Europe (i.e. within a few centuries, a gap small enough that it feels like you'd have to have an incredibly precise model to be justifiably super surprised).

Maybe particularly relevant: is the claimed population growth from 1700-1800 just catch-up growth to Europe? (more than doubling in 100 years! And over the surrounding time period the observed growth seems very rapid even if there are moderate errors in the numbers) If it is, how does that work given claims that Europe wasn't so far ahead by 1700? If it isn't, then how does that not very strongly suggest incredible acceleration in China, given that it had very recently had some of the fastest growth in history and is then experiencing even more unprecedented growth? Is it a sequence of measurement problems that just happen to suggest acceleration?

Comment by Paul_Christiano on Does Economic History Point Toward a Singularity? · 2020-09-08T15:37:53.495Z · EA · GW

My model is that most industries start with fast s-curve like growth, then plateau, then often decline

I don't know exactly what this means, but it seems like most industries in the modern world are characterized by relatively continuous productivity improvements over periods of decades or centuries. The obvious examples to me are semiconductors and AI since I deal most with those. But it also seems true of e.g. manufacturing, agricultural productivity, batteries, construction costs. It seems like industries where the productivity vs time curve is a "fast S-curve" are exceptional, which I assume means we are somehow reading the same data differently. What kind of industries would you characterize this way?

(I agree that e.g. "adoption" is more likely to be an s-curve given that it's bounded, but productivity seems like the analogy for growth rates.)

Comment by Paul_Christiano on Does Economic History Point Toward a Singularity? · 2020-09-08T15:24:51.649Z · EA · GW

It feels like you are drawing some distinction between "contingent and complicated" and "noise." Here are some possible distinctions that seem relevant to me but don't actually seem like disagreements between us:

  • If something is contingent and complicated, you can expect to learn about it with more reasoning/evidence, whereas if it's noise maybe you should just throw up your hands. Evidently I'm in the "learn about it by reasoning" category since I spend a bunch of time thinking about AI forecasting.
  • If something is contingent and complicated, you shouldn't count on e.g. the long-run statistics matching the noise distribution---there are unmodeled correlations (both real and subjective). I agree with this and think that e.g. the singularity date distributions (and singularity probability) you get out of Roodman's model are not trustworthy in light of that (as does Roodman).

So it's not super clear there's a non-aesthetic difference here.

If I was saying "Growth models imply a very high probability of takeoff soon" then I can see why your doc would affect my forecasts. But where I'm at from historical extrapolations is more like "maybe, maybe not"; it doesn't feel like any of this should change that bottom line (and it's not clear how it would change that bottom line) even if I changed my mind everywhere that we disagree.

"Maybe, maybe not" is still a super important update from the strong "the future will be like the recent past" prior that many people implicitly have and I might otherwise take very seriously. It also leads me to mostly dismiss arguments like "this is obviously not the most important century since most aren't." But it mostly means that I'm actually looking at what is happening technologically.

You may be responding to writing like this short post where I say "We have been in a period of slowing growth for the last forty years. That’s a long time, but looking over the broad sweep of history I still think the smart money is on acceleration eventually continuing, and seeing something like [hyperbolic growth]...". I stand by the claim that this is something like the modal guess---we've had enough acceleration that the smart money is on it continuing, and this seems equally true on the revolutions model. I totally agree that any specific thing is not very likely to happen, though I think it's my subjective mode. I feel fine with that post but totally agree it's imprecise and this is what you get for being short.

The story with fossil fuels is typically that there was a pre-existing economic efflorescence that supported England's transition out of an 'organic economy.' So it's typically a sort of tipping point story, where other factors play an important role in getting the economy to the tipping point.

OK, but if those prior conditions led to a great acceleration before the purported tipping point, then I feel like that's mostly what I want to know about and forecast.

Supposing we had accurate data, it seems like the best approach is running a regression that can accommodate either hyperbolic or exponential growth — plus noise — and then seeing whether we can reject the exponential hypothesis. Just noting that the growth rate must have been substantially higher than average within one particular millennium doesn’t necessarily tell us enough; there’s still the question of whether this is plausibly noise.

I don't think that's what I want to do. My question is, given a moment in history, what's the best way to guess whether and in how long there will be significant acceleration? If I'm testing the hypothesis "The amount of time before significant acceleration tends to be a small multiple of the current doubling time" then I want to look a few doublings ahead and see if things have accelerated, averaging over a doubling (etc. etc.), rather than do a regression that would indirectly test that hypothesis by making additional structural assumptions + would add a ton of sensitivity to noise.

You don’t need a story about why they changed at roughly the same time to believe that they did change at roughly the same time (i.e. over the same few century period). And my impression is that that, empirically, they did change at roughly the same time. At least, this seems to be commonly believed.
I don’t think we can reasonably assume they’re independent. Economic histories do tend to draw causal arrows between several of these differences, sometimes suggesting a sort of chain reaction, although these narrative causal diagrams are admittedly never all that satisfying; there’s still something mysterious here. On the other hand, higher population levels strike me as a fairly unsatisfying underlying cause.

It looked like you were listing those things to help explain why you have a high prior in favor of discontinuities between industrial and agricultural societies. "We don't know why those things change together discontinuously, they just do" seems super reasonable (though whether that's true is precisely what's at issue). But it does mean that listing out those factors adds nothing to the a priori argument for discontinuity.

Indeed, if you think that all of those are relevant drivers of growth rates then all else equal I'd think you'd expect more continuous progress, since all you've done is rule out one obvious way that you could have had discontinuous progress (namely by having the difference be driven by something that had a good prima facie reason to change discontinuously, as in the case of the agricultural revolution) and now you'll have to posit something mysterious to get to your discontinuous change.

Comment by Paul_Christiano on Does Economic History Point Toward a Singularity? · 2020-09-08T15:09:35.106Z · EA · GW

I think Roodman's model implies a standard deviation of around 500-1000 years for IR timing starting from 1000AD, but I haven't checked. In general for models of this type it seems like the expected time to singularity is a small multiple of the current doubling time, with noise also being on the order of the doubling time.

The model clearly underestimates correlations and hence the variance here---regardless of whether we go in for "2 revolutions" or "randomly spread out" we can all agree that a stagnant doubling is more likely to be followed by another stagnant doubling and vice versa, but the model treats them as independent.
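
Here's a toy version of that claim (not Roodman's actual model; the noise is made up). If the k-th doubling takes half as long as the previous one on average, as in a hyperbolic model, and each doubling's duration gets an independent shock, then both the expected time to takeoff and its spread scale with the current doubling time:

```python
import numpy as np

rng = np.random.default_rng(0)

def takeoff_time(b=1.0, sigma=0.5, doublings=40):
    """Time (in units of the current doubling time) for output to double `doublings`
    times, if the k-th doubling takes 2**(-k*b) of the current doubling time on average
    (hyperbolic growth) and each doubling's duration gets an independent lognormal shock."""
    mean_durations = 2.0 ** (-b * np.arange(doublings))
    shocks = np.exp(sigma * rng.normal(size=doublings) - sigma**2 / 2)
    return float(np.sum(mean_durations * shocks))

samples = np.array([takeoff_time() for _ in range(10_000)])
print(samples.mean(), samples.std())
# mean ~2.0, std ~0.6: takeoff arrives after a small multiple of the current doubling
# time, with a spread that is itself a sizable fraction of a doubling time. Starting from
# ~1000AD doubling times of many centuries, that spread is centuries; treating the
# doublings as independent (as here) if anything understates it.
```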

(As one particular contingency you mention: It seems super plausible to me, especially, that if the Americas didn't turn out to exist, then the Industrial Revolution would have happened much later. But this seems like a pretty random/out-of-model fact about the world.)

This seems to suggest there are lots of civilizations like Europe-in-1700. But it seems to me that by this time (and so I believe before the Americas had any real effect) Europe's state of technological development was already pretty unprecedented. This is a lot of what makes many of the claims about "here's why the IR happened" seem dubious to me.

My sense of that comes from: (i) in the growth numbers people usually cite, Europe's growth was absurdly fast from 1000AD to 1700AD (though you may think those numbers are wrong enough to bring growth back to a normal level), and (ii) it seems like Europe was technologically quite far ahead of other IR competitors.

I'm curious about your take. Is it that:

  • The world wasn't yet historically exceptional by 1700; there have been other comparable periods of rapid progress. (What are the historical analogies and how analogous do you think they are? Is my impression of technological sophistication wrong?)
  • 1700s Europe is quantitatively exceptional by virtue of being the furthest along example, but nevertheless there is a mystery to be explained about why it became even more exceptional rather than regressing to the mean (as historical exceptional-for-their-times civilizations had in the past). I don't currently see a mystery about this (given the level of noise in Roodman's model, which seems like it's going to be in the same ballpark as other reasonable models), but it may be because I'm not informed enough about those historical analogies.
  • Actually the IR may have been inevitable in 1700s Europe but the exact pace seems contingent. (This doesn't seem like a real tension with a continuous acceleration model.)
  • Actually the contingencies you have in mind were already driving the exceptional situation in 1700.
Comment by Paul_Christiano on Does Economic History Point Toward a Singularity? · 2020-09-07T23:01:09.249Z · EA · GW

I think that Hanson's "series of 3 exponentials" is the neatest alternative, although I also think it's possible that pre-modern growth looked pretty different from clean exponentials (even on average / beneath the noise). There's also a semi-common narrative in which the two previous periods exhibited (on average) declining growth rates, until there was some 'breakthrough' that allowed the growth rate to surge: I suppose this would be a "three s-curve" model. Then there's the possibility that the growth pattern in each previous era was basically a hard-to-characterize mess, but was constrained by a rough upper bound on the maximum achievable growth rate. This last possibility is the one I personally find most likely, of the non-hyperbolic possibilities.

It seems almost guaranteed that the data is a mess; it just seems like the only difference between the perspectives is "is acceleration fundamentally concentrated into big revolutions, or is it just random and we can draw boundaries around periods of high growth and call those revolutions?"

There may also be some fundamental meta-prior that matters, here, about the relative weight one ought to give to simple unified models vs. complex qualitative/multifactoral stories.

Which growth model corresponds to which perspective? I normally think of "'industry' is what changed and is not contiguous with what came before" as the single-factor model, with multifactor growth models tending more towards continuous growth.

A lot of my prior comes down to my impression that the dynamics of growth just *seem* very different to me for forager societies, agricultural/organic, and industrial/fossil-fuel societies.

I'm definitely much more sympathetic to the forager vs agricultural distinction.

Does a discontinuous change from fossil-fuel use even fit the data? It doesn't seem to add up at all to me (e.g. it doesn't match the timing of acceleration, there are lots of industries that seemed to accelerate without reliance on fossil fuels, etc.), but I would only consider a deep dive if someone actually wanted to stake something on that.

I don’t think the post-1500 data is too helpful for distinguishing between the ‘long run trend’ and ‘few hundred year phase transition’ perspectives.
If there was something like a phase transition, from pre-modern agricultural societies to modern industrial societies, I don’t see any particular reason to expect the growth curve during the transition to look like the sum of two exponentials. (I especially don’t expect this at the global level, since diffusion dynamics are so messy.)

It feels to me like I'm saying: acceleration happens kind of randomly on a timescale roughly determined by the current growth rate. We should use the base rate of acceleration to make forecasts about the future, i.e. have a significant probability of acceleration during each doubling of output. (Though obviously the real model is more complicated and we can start deviating from that baseline, e.g. it sure looks like we should have a higher probability of stagnation now given that we've had decades of it.)

It feels to me like you are saying "No, we can have a richer model of historical acceleration that assigns significantly lower probability to rapid acceleration over the coming decades / singularity."

So to me it feels like adding random stuff like "yeah there are revolutions but we don't have any prediction about what they will look like" makes the richer model less compelling. It moves me more towards the ignorant perspective of "sometimes acceleration happens, maybe it will happen soon?", which is what you get in the limit of adding infinitely many ex ante unknown bells and whistles to your model.

The papers typically suggest that the thing kicking off the growth surge, within a particular millennium, is the beginning of intensive agriculture in that region — so I don’t think the pivotal triggering event is really different.

Is "intensive agriculture" a well-defined thing? (Not rhetorical.) It didn't look like "the beginning of intensive agriculture" corresponds to any fixed technological/social/environmental event (e.g. in most cases there was earlier agriculture and no story was given about why this particular moment would be the moment), it just looked like it was drawn based on when output started rising faster.

I wouldn't necessarily say they were significantly faster. It depends a bit on exactly how you run this test, but, when I run a regression for "(dP/dt)/P = a*P^b" (where P is population) on the dataset up until 1700AD, I find that the b parameter is not significantly greater than 0. (The confidence interval is roughly -.2 to .5, with zero corresponding to exponential growth.)

I mean that if you have 5x growth from 0AD to 1700AD, and growth was at the same rate from 10000BC to 0AD, then you would expect 5^(10,000/1700) = 13,000-fold growth over that period. We have uncertainty about exactly how much growth there was in the prior period, but we don't have anywhere near that much uncertainty.

Doing a regression on yearly growth rates seems like a bad way to approach this. It seems like the key question is: did growth speed up a lot in between the agricultural and industrial revolutions? It seems like the way to get at that is to use points that are as spaced out as possible, comparing growth rates in the early and late parts of the interval from 10000BC to 1500AD. (The industrial revolution is usually marked much later, but for the purpose of the "2 revolutions" view I think you definitely need it to start by then.)

So almost all of the important measurement error is going to be in the bit of growth in the 0AD to 1500AD phase. If in fact there was only 2x growth in that period (say because the 0AD number was off by 50%) then that would only predict 100-fold growth from 10,000BC to 0AD, which is way more plausible.
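To make that arithmetic explicit (a minimal sketch; the 5x-over-1700-years and 2x-over-1500-years figures are the ones under discussion, the rest is just exponentiation):

```python
# Constant-growth-rate extrapolation: if output/population grew k-fold over an
# interval of t years, the same per-year rate sustained for T years gives
# k ** (T / t) total growth.

def implied_growth(k_fold, interval_years, horizon_years):
    return k_fold ** (horizon_years / interval_years)

# 5x growth over 0AD-1700AD, extrapolated back over the 10,000 years before 0AD:
print(round(implied_growth(5, 1700, 10_000)))   # ~13,000-fold: implausibly large

# If measurement error means growth over 0AD-1500AD was really only 2x:
print(round(implied_growth(2, 1500, 10_000)))   # ~100-fold: much more plausible
```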

The industrial era is, in comparison, less obviously different from the farming era, but it also seems pretty different. My list of pretty distinct features of pre-modern agricultural economies is: (a) the agricultural sector constituted the majority of the economy; (b) production and (to a large extent) transportation were limited by the availability of agricultural or otherwise ‘organic’ sources of energy (plants to power muscles and produce fertiliser); (c) transportation and information transmission speeds were largely limited by windspeed and the speed of animals; (d) nearly everyone was uneducated, poor, and largely unfree; (e) many modern financial, legal, and political institutions did not exist; (f) certain cultural attitudes (such as hatred of commerce and lack of belief in the possibility of progress) were much more common; and (g) scientifically-minded research and development projects played virtually no role in the growth process.

If you just keep listing things, it stops being a plausible source of a discontinuity---you then need to give some story for why your 7 factors all change at the same time. If they don't, e.g. if they just vary randomly, then you are going to get back to continuous change.

Comment by Paul_Christiano on Does Economic History Point Toward a Singularity? · 2020-09-07T20:31:05.913Z · EA · GW
because I have a bunch of very concrete, reasonably compelling sounding stories of specific things that caused the relevant shifts

Be careful that you don't have too many stories, or it starts to get continuous again.

More seriously, I don't know what the small # of factors are for the industrial revolution, and my current sense is that the story can only seem simple for the agricultural revolution because we are so far away and ignoring almost all the details.

It seems like the only factor that looks a priori like it should cause a discontinuity is the transition from hunting+gathering to farming, i.e. if you imagine "total food" as the sum of "food we make" and "food we find" then there could be a discontinuous change in growth rates as "food we make" starts to become large relative to "food we find" (which bounces around randomly but is maybe not really changing). This is blurred because of complementarity between your technology and finding food, but certainly I'm on board with an in-principle argument for a discontinuity as the new mode overtakes the old one.

For the last 10k years my impression is that no one has a very compelling story for discontinuities (put differently: they have way too many stories) and it's mostly a stylized empirical fact that the IR is kind of discontinuous. But I'm provisionally on board with Ben's basic point that we don't really have good enough data to know whether growth had been accelerating a bunch in the run-up to the IR.

To the extent things are discontinuous, I'd guess that it's basically from something similar to the agricultural case---there is continuous growth and random variation, and you see "discontinuities" in the aggregate if a smaller group is significantly outpacing the world, so that by the time they become a large part of the world they are growing significantly faster.

I think this is also reasonably plausible in the AI case (e.g. there is an automated part of the economy doubling every 1-2 years, by the time it gets to be 10% of the economy it's driving +5%/year growth, 1-2 years later it's driving +10% growth). But I think quantitatively given the numbers involved and the actual degree of complementarity, this is still unlikely to give you a fast takeoff as I operationalized it. I think if we're having a serious discussion about "takeoff" that's probably where the action is, not in any of the kinds of arguments that I dismiss in that post.

I find the "but X has fewer parameters" argument only mildly compelling, because I feel like other evidence about similar systems that we've observed should easily give us enough evidence to overcome the difference in complexity. 

I mean something much more basic. If you have more parameters then you need to have uncertainty about every parameter. So you can't just look at how well the best "3 exponentials" hypothesis fits the data, you need to adjust for the fact that this particular "3 exponentials" model has lower prior probability. That is, even if you thought "3 exponentials" was a priori equally likely to a model with fewer parameters, every particular instance of 3 exponentials needs to be less probable than every particular model with fewer parameters.
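To spell the point out, this is just the standard Bayesian "Occam factor": the evidence for a model integrates the likelihood against the prior over its parameters, so a model with more parameters spreads its prior mass over more specific hypotheses. Schematically:

```latex
P(M \mid D) \;\propto\; P(M)\, P(D \mid M),
\qquad
P(D \mid M) \;=\; \int P(D \mid \theta, M)\, P(\theta \mid M)\, d\theta
```

The integral is where the penalty for the extra parameters comes from.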

The thing that on the margin would feel most compelling to me for the continuous view is something like a concrete zoomed in story of how you get continuous growth from a bunch of humans talking to each other and working with each other over a few generations, that doesn't immediately abstract things away into high-level concepts like "knowledge" and "capital". 

As far as I can tell this is how basically all industries (and scientific domains) work---people learn by doing and talk to each other and they get continuously better, mostly by using and then improving on technologies inherited from other people.

It's not clear to me whether you are drawing a distinction between modern economic activity and historical cultural accumulation, or whether you feel like you need to see a zoomed-in version of this story for modern economic activity as well, or whether this is a more subtle point about continuous technological progress vs continuous changes in the rate of tech progress, or something else.

Comment by Paul_Christiano on Does Economic History Point Toward a Singularity? · 2020-09-07T16:27:31.906Z · EA · GW

This would be an important update for me, so I'm excited to see people looking into it and to spend more time thinking about it myself.

High-level summary of my current take on your document:

  • I agree that the 1AD-1500AD population data seems super noisy.
  • Removing that data removes one of the datapoints supporting continuous acceleration (the acceleration between 10kBC - 1AD and 1AD-1500AD) and should make us more uncertain in general.
  • It doesn't have much net effect on my attitude towards continuous acceleration vs. discontinuous jumps; it mostly pushes us back towards our prior.
  • I'm not very moved by the other evidence/arguments in your doc.

Here's how I would summarize the evidence in your document:

  • Much historical data is made up (often informed by the author's models of population dynamics), so we can't use it to estimate historical growth. This seems like the key point.
  • In particular, although standard estimates of growth from 1AD to 1500AD are significantly faster than growth between 10kBC and 1AD, those estimates are sensitive to factor-of-1.5 error in estimates of 1AD population, and real errors could easily be much larger than that.
  • Population levels are very noisy (in addition to population measurement being noisy) making it even harder to estimate rates.
  • Radiocarbon data often displays isolated periods of rapid growth from 10,000BC to 1AD, and it's possible that average growth rates corresponded to something like a 2,000-year doubling time. So even if 500-2000 year doubling times are accurate from 1AD to 1500, those may not be a deviation from the preceding period.
  • You haven't looked into the claims people have made about growth from 100kya to 10kya, but given what we know about measurement error from 10kya to now, it seems like the 100kya-10kya data is likely to be way too noisy to say anything about.

Here's my take in more detail:

  • You are basically comparing "Series of 3 exponentials" to a hyperbolic growth model. I think our default simple hyperbolic growth model should be the one in David Roodman's report (blog post), so I'm going to think about this argument as comparing Roodman's model to a series of 3 noisy exponentials. In your doc you often dunk on an extremely low-noise version of hyperbolic growth but I'm mostly ignoring that because I absolutely agree that population dynamics are very noisy.
  • It feels like you think 3 exponentials is the higher prior model. But this model has many more parameters to fit the data, and even ignoring that "X changes in 2 discontinuous jumps" doesn't seem like it has a higher prior than "X goes up continuously but stochastically." I think the only reason we are taking 3 exponentials seriously is because of the same kind of guesswork you are dismissive of, namely that people have a folk sense that the industrial revolution and agricultural revolutions were discrete changes. If we think those folk senses are unreliable, I think that continuous acceleration has the better prior. And at the very least we need to be careful about using all the extra parameters in the 3-exponentials model, since a model with 2x more parameters should fit the data much better.
  • On top of that, the post-1500 data is fit terribly by the "3 exponentials" model. Given that continuous acceleration very clearly applies in the only regime where we have data you consider reliable, and given that it already seemed simpler and more motivated, it seems pretty clear to me that it should have the higher prior, and the only reason to doubt that is because of growth folklore. You can't have it both ways in using growth folklore to promote this hypothesis to attention and then dismissing the evidence from growth folklore because it's folklore.
  • On the acceleration model, the periods from 1500-2000, 10kBC-1500, and "the beginning of history to 10kBC" are roughly equally important data (and if that hypothesis has higher prior I don't think you can reject that framing). Changes within 10kBC - 1500 are maybe 1/6th of the evidence, and 1/3 of the relevant evidence for comparing "continuous acceleration" to "3 exponentials." I still think it's great to dig into one of these periods, but I don't think it's misleading to present this period as only 1/3 of the data on a graph.
  • (Enough about priors, onto the data.)
  • I think that the key claim is that the 1AD-1500AD data is mostly unreliable. Without this data, we have very little information about acceleration from 10kBC - 1500AD, since the main thing we actually knew was that 1AD-1500AD must have been faster than the preceding 10k years. I'd like to look into that more, but it looks super plausible to me that the noise is 2x or more for 1AD which is enough to totally kill any inference about growth rates. So provisionally I'm inclined to accept your view there.
  • That basically removes 1 datapoint for the continuous acceleration story and I totally agree it should leave us more uncertain about what's going on. That said, throwing out all the numbers from that period also removes one of the main quantitative datapoints against continuous acceleration [ETA: the other big one being the modern "great stagnation," both of these are in the tails of the continuous acceleration story and are just in the middle of the constant exponentials in the 3-exponential story, though see Robin Hanson's writeup to get a sense for what the series of exponentials view actually ends up looking like---it's still surprised by the great stagnation], and comes much closer to leaving us with our priors + the obvious acceleration over longer periods + the obvious acceleration during the shorter period where we actually have data, which seem to all basically point in the same direction.
  • Even taking the radiocarbon data as given I don't agree with the conclusions you are drawing from that data. It feels like in each case you are saying "a 2-exponential model fits fine" but the 2 exponentials are always different. The actual events (either technological developments or climate change or population dynamics) that are being pointed to as pivotal aren't the same across the different time series and so I think we should just be analyzing these without reference to those events (no suggestive dotted lines :) ). I spent some time doing this kind of curve fitting to various stochastic growth models (see the sketch after this list) and this basically looks to me like what individual realizations from such models look like---the extra parameters in "splice together two unrelated curves" let you get fine-looking fits even when we know that the underlying dynamics are continuous+stochastic.
  • I currently don't trust the population data coming from the radiocarbon dating. My current expectation is that after a deep dive I would not end up trusting the radiocarbon dating at all for tracking changes in the rate of population growth when the populations in question are changing how they live and what kinds of artifacts they make (from my perspective, that's what happened with the genetics data, which wasn't caveated so aggressively in the initial draft I reviewed). I'd love to hear from someone who actually knows about these techniques or has done a deep dive on these papers though.
  • I think the only dataset that you should expect to provide evidence on its own is the China population time series. But even there if you just take rolling averages and allow for a reasonable level of noise I think the continuous acceleration story looks fine. E.g. I think if you compare David Roodman's model with the piecewise exponential model (both augmented with measurement noise, and allowing you to choose noisy dynamics however you want for the exponential model), Roodman's model is going to fit the data better despite having fewer free parameters. If that's the case, I don't think this time series can be construed as evidence against that model.
  • I agree with the point that if growth is 0 before the agricultural revolution, rather than "small," then that would undermine the continuous acceleration story. I think prior growth was probably slow but non-zero, and this document didn't really update my view on that question.
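Here is roughly the kind of curve-fitting exercise mentioned a few bullets up (a toy sketch, not the fits I actually ran: the dynamics are a Roodman-style hyperbolic drift with noise, and every parameter value is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Continuous + stochastic growth: the growth rate rises with the current level
# (hyperbolic drift), plus noise on each step. Illustrative numbers only.
def simulate(p0=1.0, a=0.003, b=0.5, vol=0.02, steps=400):
    p = [p0]
    for _ in range(steps):
        drift = a * p[-1] ** b
        p.append(p[-1] * np.exp(drift + vol * rng.normal()))
    return np.array(p)

log_p = np.log(simulate())
t = np.arange(len(log_p))

def sse_of_line(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    return np.sum((y - (slope * x + intercept)) ** 2)

# One exponential = one straight line in log space.
single = sse_of_line(t, log_p)

# "Spliced exponentials" = separate lines before/after a free breakpoint.
spliced = min(sse_of_line(t[:k], log_p[:k]) + sse_of_line(t[k:], log_p[k:])
              for k in range(20, len(t) - 20))

print(round(single, 1), round(spliced, 1))
# The spliced fit looks much better on a typical realization, even though the true
# dynamics here are continuous acceleration plus noise. That's the sense in which
# the extra parameters buy you a good-looking fit without the model being right.
```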
Comment by Paul_Christiano on How Much Leverage Should Altruists Use? · 2020-05-16T16:30:18.425Z · EA · GW
This is only 2.4 standard deviations assuming returns follow a normal distribution, which they don't.

No, 2.4 standard deviations is 2.4 standard deviations.

It's possible to have distributions for which being that many standard deviations below the mean is more or less surprising.

For a normal distribution, this happens about once every 200 periods. I totally agree that this isn't a factor of 200 evidence against your view. So maybe saying "falsifies" was too strong.

But no distribution is 2.35 standard deviations below its mean with probability more than 18%. That's literally impossible. And no distribution is 4 standard deviations below its mean with probability >6%. (I'm just adopting your variance estimates here, so I don't think you can really object.)
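Both claims are instances of Chebyshev-type bounds, which hold for any distribution with finite variance; the one-sided (Cantelli) version is:

```latex
\Pr\left(X \le \mu - k\sigma\right) \;\le\; \frac{1}{1 + k^{2}}
```

which gives at most about 15% for k = 2.35 and about 5.9% for k = 4, consistent with (and slightly tighter than) the 18% and 6% figures above.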

This is not directly relevant to the investment strategies I talked about above, but if you use the really simple (and well-supported) expected return model of earnings growth plus dividends plus P/E mean reversion and plug in the current numbers for emerging markets, you get 9-11% real return (Research Affiliates gives 9%, I've seen other sources give 11%). This is not a highly concentrated investment of 50 stocks—it's an entire asset class. So I don't think expecting a 9% return is insane.

Have you looked at backtests of this kind of reasoning for emerging markets? Not of total return, I agree that is super noisy, but just the basic return model? I was briefly very optimistic about EM when I started investing, based on arguments like this one, but then when I looked at the data it just seemed like it didn't work out, and there are tons of ways that emerging market companies could be less appealing to investors that could explain a failure of the model. So I ended up just following the market portfolio, and using much more pessimistic return estimates.

I didn't look into it super deeply. Here's some even more superficial discussion using numbers I pulled while writing this comment.

Over the decade before this crisis, it seems like EM earnings yields were roughly flat around 8%. Dividend yield was <2%. Real dividends were basically flat. Real price return was slightly negative. And I think on top of all of that the volatility was significantly higher than US markets.

Why expect P/E mean reversion to rescue future returns in this case? It seems like EM companies have lots of on-paper earnings, but they neither distribute those to investors (whether as buybacks or dividends) nor use them to grow future earnings. So their current P/E ratios seem justified, and expecting +5%/year returns from P/E mean reversion seems pretty optimistic.
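Putting rough numbers on the building-blocks model being debated here (a sketch: the 2% dividend yield, flat real growth, and 8% earnings yield, i.e. a P/E around 12.5, are the approximate figures from this thread, and the "target" P/E in the second line is a made-up re-rating assumption):

```python
# Naive building-blocks model: real return ~= dividend yield + real growth in
# fundamentals + annualized return from the P/E drifting toward some target.

def expected_real_return(dividend_yield, real_growth, current_pe, target_pe, years):
    mean_reversion = (target_pe / current_pe) ** (1 / years) - 1
    return dividend_yield + real_growth + mean_reversion

print(expected_real_return(0.02, 0.00, 12.5, 12.5, 10))  # ~2%: no re-rating, flat fundamentals
print(expected_real_return(0.02, 0.00, 12.5, 20.0, 10))  # ~6.8%: requires a large re-rating
# Getting to the quoted 9-11% needs healthy real growth *and* a big valuation
# re-rating, which the trailing decade of flat dividends/earnings doesn't obviously support.
```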

Like I said, I haven't looked into this deeply, so I'm totally open to someone pointing out that actually the naive return model has worked OK in emerging markets after correcting for some important non-obvious stuff (or even just walking through the above analysis more carefully), and so we should just take the last 10 years of underperformance as evidence that now is a particularly good time to get in. But right now that's not my best guess, much less strongly supported enough that I want to take a big anti-EMH position on it (not to mention that betting against beta is one of the factors that seems most plausible to me and seems best documented, and EM is on the other side of that trade).

which explain why the authors believe their particular implementations of momentum and value have (slightly) better expected return.

I'm willing to believe that, though I'm skeptical that they get enough to pay for their +2% fees.

I don't overly trust backtests, but I trust the process behind VMOT, which is (part of the) reason to believe the cited backtest is reflective of the strategy's long-term performance.[2] VMOT projected returns were based on a 20-year backtest, but you can find similar numbers by looking at much longer data series

The markets today are a lot different from the markets 20 years ago. The problem isn't just that the backtests are typically underpowered, it's that markets become more sophisticated, and everyone gets to see that data. You write:

RAFI believes the value and momentum premia will work as well in the future as they have in the past, and some of the papers I linked above make similar claims. They offer good support for this claim, but in the interest of conservatism, we could justifiably subtract a couple of percentage points from expected return to account for premium degradation.

Having a good argument is one thing---I haven't seen one but also haven't looked that hard, and I'm totally willing to believe that one exists and I think it's reasonable to invest on the basis of such arguments. I also believe that premia won't completely dry up because smart investors won't want the extra volatility if the returns aren't there (and lots of people chasing a premium will add premium-specific volatility).

But without a good argument, subtracting a few percentage points from backtested return isn't conservative. That's probably what you should do with a good argument.

Comment by Paul_Christiano on How Much Leverage Should Altruists Use? · 2020-04-25T03:07:22.176Z · EA · GW

I haven't done a deep dive on this but I think futures are better than this analysis makes them look.

Suppose that I'm in the top bracket and pay 23% taxes on futures, and that my ideal position is 2x SPY.

In a tax-free account I could buy SPY and 1x SPY futures, to get (2x SPY - 1x interest).

In a taxable account I can buy 1x SPY and 1.3x SPY futures. Then my after-tax expected return is again (2x SPY - 1x interest).
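The sizing arithmetic, as a sketch (using the 23% tax rate assumed above and ignoring interest; the point is just that futures exposure is worth roughly (1 - tax rate) of itself after tax, assuming losses are immediately usable):

```python
# To hit a target after-tax exposure with 1x physical SPY plus futures taxed at
# rate t each year, size the futures at (target - 1) / (1 - t).

def futures_size(target_exposure, physical=1.0, tax_rate=0.23):
    return (target_exposure - physical) / (1 - tax_rate)

print(round(futures_size(2.0), 2))   # ~1.3x futures on top of 1x SPY ~= 2x SPY after tax
```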

The catch is that if I lose money, some of my wealth will take the form of taxable losses that I can use to offset gains in future years. This has a small problem and a bigger problem:

  • Small problem: it may be some years before I can use up those taxable losses. So I'll effectively pay interest on the money over those years. If real rates were 2% and I had to wait 5 years on average to return to my high-water mark, then this would be an effective tax rate of (2% * 5 years) * (23%) ~ 2.3%. I think that's conservative, and this is mostly negligible.
  • Large problem: if the market goes down enough, I could be left totally broke, and my taxable losses won't do me any good. In particular, if the market went down 52%, then my 2x leveraged portfolio should be down to around 23% of my original net worth, but that will entirely be in the form of taxable losses (losing $100 is like getting a $23 grant, to be redeemed only once I've made enough taxable gains).
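Rough arithmetic for that scenario (round numbers, ignoring interest and the exact futures sizing):

```python
leverage, drawdown, tax_rate = 2.0, 0.52, 0.23

pretax_loss = leverage * drawdown            # ~104% of starting net worth
liquid_wealth = 1 - pretax_loss              # ~ -4%: essentially broke in cash terms
carryforward_value = pretax_loss * tax_rate  # ~24%: wealth left only as taxable losses

print(round(liquid_wealth, 2), round(carryforward_value, 2))
```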

So I can't just treat my taxable losses as wealth for the purpose of computing leverage. I don't know exactly what the right strategy is; it's probably quite complicated.

The simplest solution is to just ignore them when setting my desired level of leverage. If you do that, and are careful about rebalancing, it seems like you shouldn't lose very much to taxes in log-expectation (e.g. if the market is down 50%, I think you'd end up with about half of your desired leverage, which is similar to a 25% tax rate). But I'd like to work it out, since other than this, futures seem appealing.

Comment by Paul_Christiano on How Much Leverage Should Altruists Use? · 2020-04-23T21:23:24.989Z · EA · GW

I'm surprised by (and suspicious of) the claim about so many more international shares being non-tradeable, but it would change my view.

I would guess the savings rate thing is relatively small compared to the fact that a much larger fraction of US GDP is investable in the stock market---the US is 20-25% of world GDP, but 40% of total stock market capitalization, and I think US corporate profits are also ballpark 40% of all publicly traded corporate profits. So if everyone saved the same amount and invested in their home country, US equities would be too cheap.

I agree that under EMH the two bonds A and B are basically the same, so it's neutral. But it's a prima facie reason that A is going to perform worse (not a prima facie reason it will perform better) and it's now pretty murky whether the market is going to err one way or the other.

I'm still pretty skeptical of US equities outperforming, but I'll think about it more.

I haven't thought about the diversification point that much. I don't think that you can just use the empirical daily correlations for the purpose of estimating this, but maybe you can (until you observe them coming apart). It's hard to see how you can be so uncertain about the relative performance of A and B, but still think they are virtually perfectly correlated (but again, that may just be a misleading intuition). I'm going to spend a bit of time with historical data to get a feel for this sometime and will postpone judgment until after doing that.

Comment by Paul_Christiano on How Much Leverage Should Altruists Use? · 2020-04-23T21:12:58.317Z · EA · GW

I also like GMP, and find the paper kind of surprising. I checked the endpoints stuff a bit and it seems like it can explain a small effect but not a huge one. My best guess is that going from equities to GMP is worth like +1-2% risk-free returns.

Comment by Paul_Christiano on How Much Leverage Should Altruists Use? · 2020-04-22T23:53:08.039Z · EA · GW

I like the basic point about leverage and think it's quite robust.

But I think the projected returns for VMOT+MF are insane. And as a result the 8x leverage recommendation is insane, someone who does that is definitely just going to go broke. (This is similar to Carl's complaint.)

My biggest problem with this estimate is that it kind of sounds crazy and I don't know very good evidence in favor. But it seems like these claimed returns are so high that you can also basically falsify them by looking at the data between when VMOT was founded and when you wrote this post.

VMOT is down 20% in the last 3 years. This estimate would expect returns of 27% +- 20% over that period, so you're like 2.4 standard deviations down.

When you wrote this post, before the crisis, VMOT was only like 1.4 standard deviations below your expectations. So maybe we should be more charitable?

But that's just because it was a period of surprisingly high market returns. VMOT lagged VT by more than 35% between its inception and when you wrote this post, whereas this methodology expects it to outperform by more than 12% over that period. VMOT/VT are positively correlated, and based on your numbers it looks like the stdev of excess performance should be <10%. So that's like 4-5 standard deviations of surprisingly bad performance already.
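The standard-deviation arithmetic here is just the following (the 27%, +-20%, 35%, 12%, and <10% inputs are the figures quoted above and in the post):

```python
def z_score(actual, expected, stdev):
    return (actual - expected) / stdev

# Since the post's estimates: expected ~+27% +- 20% over ~3 years, actual ~-20%.
print(round(z_score(-0.20, 0.27, 0.20), 2))   # about -2.35 standard deviations

# Inception to the post: lagged VT by >35% vs. expected >12% outperformance,
# with the stdev of excess performance looking like <10%.
print(round(z_score(-0.35, 0.12, 0.10), 2))   # about -4.7 standard deviations
```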

Is something wrong with this analysis?

If that's right, I definitely object to the methodology "take an absurd backtest that we've already falsified out of sample, then cut a few percentage points off and call it conservative." In this case it looks like even the "conservative" estimate is basically falsified.

Comment by Paul_Christiano on How Much Leverage Should Altruists Use? · 2020-04-22T23:22:19.139Z · EA · GW
We could account for this by treating mean return and standard deviation as distributions rather than point estimates, and calculating utility-maximizing leverage across the distribution instead of at a single point. This raises a further concern that we don’t even know what distribution the mean and standard deviation have, but at least this gets us closer to an accurate model.

Why not just take the actual mean and standard deviation, averaging across the whole distribution of models?

What exactly is the "mean" you are quoting, if it's not your subjective expectation of returns?

(Also, I think the costs of choosing leverage wrong are pretty symmetric.)

Comment by Paul_Christiano on How Much Leverage Should Altruists Use? · 2020-04-22T23:19:39.640Z · EA · GW

My understanding is that the Sharpe ratio of the global portfolio is quite similar to that of the equity portfolio (e.g. see here for data on the period from 1960-2017, finding 0.36 for the global market and 0.37 for equities).

I still do expect the broad market to outperform equities alone, but I don't know where the super-high estimates for the benefits of diversification are coming from, and I expect the effect to be much more modest than the one described in the linked post by Ben Todd. Do you know what's up with the discrepancy? It could be about choice of time periods or some technical detail, but it's kind of a big discrepancy. (My best guess is an error in the linked post.)

Comment by Paul_Christiano on How Much Leverage Should Altruists Use? · 2020-04-22T23:09:35.879Z · EA · GW
To use leverage, you will probably end up having to pay about 1% on top of short-term interest rates

Not a huge deal, but it seems like the typical overhead is about 0.3%:

  • This seems to be the implicit rate I pay if I buy equity futures rather than holding physical equities (a historical survey: http://cdar.berkeley.edu/wp-content/uploads/2016/12/futures-gunther-etal-111616.pdf ; you can also check yourself for a particular future you are considering buying, though the main complication is factoring in dividends)
  • Wei Dai has recently been looking into box spread financing, which was around 0.55% for 3 years, about 0.3% above the short-term Treasury rate.
  • If you have a large account, interactive brokers charges benchmark+0.3% interest.

I suspect risk-free + 0.3% is basically the going rate, though I also wouldn't be too surprised if a leveraged ETF could get a slightly better rate.

If you are leveraging as much as described in this post, it seems reasonably important to get at least an OK rate. 1% overhead is large enough that it claws back a significant fraction of the value from leverage (at least if you use more realistic return estimates).
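To see how much a 1% spread claws back, here's a toy Kelly-style calculation (all inputs are made-up but plausible assumptions, e.g. a 5% equity premium over cash and 16% volatility, not numbers from the post):

```python
# Expected log growth (relative to cash) of an L-times leveraged portfolio, when
# the borrowed (L - 1) is financed at the risk-free rate plus `spread`:
#   L * premium - (L - 1) * spread - 0.5 * (L * vol) ** 2

def log_growth_vs_cash(leverage, premium=0.05, vol=0.16, spread=0.0):
    return leverage * premium - (leverage - 1) * spread - 0.5 * (leverage * vol) ** 2

print(round(log_growth_vs_cash(1.0), 4))   # ~0.037: unlevered baseline
for spread in (0.0, 0.003, 0.01):
    grid = [l / 100 for l in range(100, 301)]
    best = max(grid, key=lambda l: log_growth_vs_cash(l, spread=spread))
    print(spread, best, round(log_growth_vs_cash(best, spread=spread), 4))
# With these inputs the extra log growth from optimal leverage is ~1.2 points at a
# 0% spread, ~0.9 points at 0.3%, and ~0.4 points at 1%; i.e. a 1% spread gives
# back most of the benefit of levering up.
```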

Comment by Paul_Christiano on How Much Leverage Should Altruists Use? · 2020-04-22T22:40:12.316Z · EA · GW

I think it's pretty dangerous to reason "asset X has outperformed recently, so I expect it to outperform in the future." An asset can outperform because it's becoming more expensive, which I think is partly the case here.

This is most obvious in the case of bonds---if 30-year bonds from A are yielding 2%/year and then fall to 1.5%/year over a decade, while 30-year bonds from B are yielding 2%/year and stay at 2%/year, then it will look like the bonds from A are performing about twice as well over the decade. But this is a very bad reason to invest in A. It's anti-inductive not only because of EMH but for the very simple reason that return chasing leads you to buy high and sell low.

This is less straightforward with equities because earnings accounting is (much) less transparent than bond yields, but I think it's a reasonable first pass guess about what's going on (combined with some legitimate update about people becoming more pessimistic about corporate performance/governance/accounting outside of the US). Would be interested in any data contradicting this picture.

I do think that international equities will do worse than US equities after controlling for on-paper earnings. But they have significantly higher on-paper earnings, and I don't really see how to take a bet about which of these effects is larger without getting into way more nitty gritty about exactly what mistake we think which investors are making. If I had to guess I'd bet that US markets are salient to investors in many countries and their recent outperformance has made many people overweight them, so that they will very slightly underperform. But I'd be super interested in good empirical evidence on this front too.

(The RAFI estimates generally look a bit unreasonable to me, and I don't know of an empirical track record or convincing analysis that would make me like them more.)

I personally just hold the market portfolio. So I'm guaranteed to outperform the average of you and Michael Dickens, though I'm not sure which one of you is going to do better than me and which one is going to do worse.

Comment by Paul_Christiano on How worried should I be about a childless Disneyland? · 2019-10-31T20:44:45.070Z · EA · GW

My main point was that in any case what matters are the degree of alignment of the AI systems, and not their consciousness. But I agree with what you are saying.

If our plan for building AI depends on having clarity about our values, then it's important to achieve such clarity before we build AI---whether that's clarity about consciousness, population ethics, what kinds of experience are actually good, how to handle infinities, weird simulation stuff, or whatever else.

I agree consciousness is a big ? in our axiology, though it's not clear if the value you'd lose from saying "only create creatures physiologically identical to humans" is large compared to all the other value we are losing from the other kinds of uncertainty.

I tend to think that in such worlds we are in very deep trouble anyway and won't realize a meaningful amount of value regardless of how well we understand consciousness. So while I may care about them a bit from the perspective of parochial values (like "is Paul happy?") I don't care about them much from the perspective of impartial moral concerns (which is the main perspective where I care about clarifying concepts like consciousness).

Comment by Paul_Christiano on How worried should I be about a childless Disneyland? · 2019-10-30T16:35:57.465Z · EA · GW

I don't think it matters that much (for the long-term) if the AI systems we build in the next century are conscious. What matters is how they think about what possible futures they can bring about.

If AI systems are aligned with us, but turned out not to be conscious or not very conscious, then they would continue this project of figuring out what is morally valuable and so bring about a world we'd regard as good (even though it likely contains very few minds that resemble either us or them).

If AI systems are conscious but not at all aligned with us, then why think that they would create conscious and flourishing successors?

So my view is that alignment is the main AI issue here (and reflecting well is the big non-AI issue), with questions about consciousness being in the giant bag of complex questions we should try to punt to tomorrow.

Comment by Paul_Christiano on Conditional interests, asymmetries and EA priorities · 2019-10-22T16:29:04.735Z · EA · GW
Only Actual Interests: Interests provide reasons for their further satisfaction, but neither an interest nor its satisfaction provides reasons for the existence of that interest over its nonexistence.
It follows from this that a mind with no interests at all is no worse than a mind with interests, regardless of how satisfied its interests might have been. In particular, a joyless mind with no interest in joy is no worse than one with joy. A mind with no interests isn't much of a mind at all, so I would say that this effectively means it's no worse for the mind to not exist.

If you make this argument that "it's no worse for the joyful mind to not exist," you can make an exactly symmetrical argument that "it's not better for the suffering mind to not exist." If there was a suffering mind they'd have an interest in not existing, and if there was a joyful mind they'd have an interest in existing.

In either case, if there is no mind then we have no reason to care about whether the mind exists, and if there is a mind then we have a reason to act---in one case we prefer the mind exist, and in the other case we prefer the mind not exist.

To carry your argument you need an extra principle along the lines of "the existence of unfulfilled interests is bad." Of course that's what's doing all the work of the asymmetry---if unfulfilled interests are bad and fulfilled interests are not good, then existence is bad. But this has nothing to do with actual interests, it's coming from very explicitly setting the zero point at the maximally fulfilled interest.

Comment by Paul_Christiano on Conditional interests, asymmetries and EA priorities · 2019-10-22T16:20:26.216Z · EA · GW
A question here is whether "interests to not suffer" are analogous to "interests in experiencing joy." I believe that Michael's point is that, while we cannot imagine suffering without some kind of interest to have it stop (at least in the moment itself), we can imagine a mind that does not care for further joy.

I don't think that's the relevant analogy though. We should be comparing "Can we imagine suffering without an interest in not having suffered?" to "Can we imagine joy without an interest in having experienced joy?"

Let's say I see a cute squirrel and it makes me happy. Is it bad that I'm not in virtual reality experiencing the greatest joys imaginable?

I can imagine saying "no" here, but if I do then I'd also say it's not good that you are not in a virtual reality experiencing great suffering. If you were in a virtual reality experiencing great joy it would be against your interests to prevent that joy, and if you were in a virtual reality experiencing great suffering it would be in your interests to prevent that suffering.

You could say: the actually existing person has an interest in preventing future suffering, while they may have no interest in experiencing future joy. But now the asymmetry is just coming from the actual person's current interests in joy and suffering, so we didn't need to bring in all of this other machinery, we can just directly appeal to the claimed asymmetry in interests.

Comment by Paul_Christiano on Conditional interests, asymmetries and EA priorities · 2019-10-22T03:59:15.498Z · EA · GW
suffering by its very definition implies an interest in its absence, so there is a reason to prevent it.

If a mind exists and suffers, we'd think it better had it not existed (by virtue of its interest in not suffering). And if a mind exists and experiences joy, we'd think it worse had it not existed (by virtue of its interest in experiencing joy). Prima facie this seems exactly symmetrical, at least as far as the principles laid out here are concerned.

Depending on exactly how you make your view precise, I'd think that we'd either end up not caring at all about whether new minds exist (since if they didn't exist there'd be no relevant interests), or balancing the strength of those interests in some way to end up with a "zero" point where we are indifferent (since minds come with interests in both directions concerning their own existence). I don't yet see how you end up with the asymmetric view here.

Comment by Paul_Christiano on Altruistic equity allocation · 2019-10-17T15:28:33.017Z · EA · GW
would there be a specific metric (e.g. estimated QALYs saved) or would donors construct individual conversion rates (at least implicitly) based on their evaluations of how effective charities are likely to be over their lifetimes?

It would come down to donor predictions, and different donors will generally have quite different predictions (similar to for-profit investing). I agree there is a further difference where donors will also value different outputs differently.

One other advantage of not quantizing the individual contributions of employees is that they can sum up to more than 100% - all twenty employees of an organisation may each believe that they are responsible for at least 10% of its success, which is mathematically inconsistent but may be a useful fiction (and in some sense it could be true - there may be threshold effects such that if any individual employee left the impact of the organisation would actually be 10% worse) - if impact equity is explicitly parceled out, everyone's fractions will sum to 1.

I mostly consider this an advantage of quantifying :)

(I also think that impacts should sum to 1, not >1---in the sense that a project is worthwhile iff there is a way of allocating its impact that makes everyone happy, modulo the issue where you may need to separate impact into tranches for unaligned employees who value different parts of that impact.)

However, it might also lead to discontent if employees don't consider the impact equity allocations to be fair (whether between different employees, between employees and founders, or between employees and investors).

This seems like a real downside.

Comment by Paul_Christiano on The Future of Earning to Give · 2019-10-14T15:42:37.837Z · EA · GW
Of course, you could enter a donor lottery and, if you win, just give it all to an EA fund without doing any research yourself. I don't know if this would be better or worse than just donating directly to the EA funds.

It seems to me like this is unlikely to be worse. Is there some mechanism you have in mind? Risk-aversion for the EA fund? (Quantitatively that seems like it should matter very little at the scale of $100,000.)

At a minimum, it seems like the EA funds are healthier if their accountability is to a smaller number of larger donors who are better able to think about what they are doing.

In terms of upside from getting to think longer, I don't think it's at all obvious that most donors would decide on EA funds (or on whichever particular EA fund they initially lean towards). And as a norm, I think it's easy for EAs to argue that donor lotteries are an improvement over what most non-EA donors do, while the argument for EA funds comes down a lot to personal trust.

I don't think the argument for economies of scale really applies here, since the grantmakers are already working full-time on research in the areas they're making grants for.

I don't think all of the funds have grantmakers working full-time on having better views about grantmaking. That said, you can't work full-time if you win a $100,000 lottery either. I agree you are likely to come down to deciding whose advice to trust and doing meta-level reasoning.

Comment by Paul_Christiano on Are we living at the most influential time in history? · 2019-09-15T22:46:33.132Z · EA · GW

I think the outside view argument for acceleration deserves more weight. Namely:

  • Many measures of "output" track each other reasonably closely: how much energy we can harness, how many people we can feed, GDP in modern times, etc.
  • Output has grown 7-8 orders of magnitude over human history.
  • The rate of growth has itself accelerated by 3-4 orders of magnitude. (And even early human populations would have seemed to grow very fast to an observer watching the prior billion years of life.)
  • It's pretty likely that growth will accelerate by another order of magnitude at some point, given that it's happened 3-4 times before and faster growth seems possible.
  • If growth accelerated by another order of magnitude, a hundred years would be enough time for 9 orders of magnitude of growth (more than has occurred in all of human history); see the arithmetic sketched after this list.
  • Periods of time with more growth seem to have more economic or technological milestones, even if they are less calendar time.
  • Heuristics like "the next X years are very short relative to history, so probably not much will happen" seem to have a very bad historical track record when X is enough time for lots of growth to occur, and so it seems like a mistake to call them the "outside view."
  • If we go a century without doubling of growth rates, it will be (by far) the most that output has ever grown without significant acceleration.
  • Data is noisy and data modeling is hard, but it is difficult to construct a model of historical growth that doesn't have a significant probability of massive growth within a century.
  • I think the models that are most conservative about future growth are those where stable growth is punctuated by rapid acceleration during "revolutions" (with the agricultural acceleration around 10,000 years ago and the industrial revolution causing continuous acceleration from 1600-1900).
  • On that model human history has had two revolutions, with about two orders of magnitude of growth between them, each of which led to >10x speedup of growth. It seems like we should have a significant probability (certainly >10%) of another revolution occurring within the next order of magnitude of growth, i.e. within the next century.
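A quick check on the arithmetic behind the "9 orders of magnitude" point above (assuming current growth of roughly 2-3% per year; the exact baseline doesn't change the order-of-magnitude conclusion):

```python
import math

# Orders of magnitude of total growth over `years` at a constant annual rate.
def orders_of_magnitude(annual_growth, years=100):
    return years * math.log10(1 + annual_growth)

print(round(orders_of_magnitude(0.025), 1))   # ~1.1 OOM: a century at today's ~2.5% growth
print(round(orders_of_magnitude(0.25), 1))    # ~9.7 OOM: a century after a 10x speedup
```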
Comment by Paul_Christiano on Ought: why it matters and ways to help · 2019-07-29T16:35:01.505Z · EA · GW

In-house.

Comment by Paul_Christiano on Age-Weighted Voting · 2019-07-15T15:45:14.972Z · EA · GW
I suspect many people responding to surveys about events which happened 10-30 years ago would be doing so with the aim of influencing the betting markets which affect near future policy.

It would be good to focus on questions for which that's not so bad, because our goal is to measure some kind of general sentiment in the future---if in the future people feel like "we should now do more (or less) of X" then that's pretty correlated with feeling like we did too little (or too much) in the past (obviously not perfectly---we may have done too little 30 years ago but overcorrected 10 years ago---but if you are betting about public opinion in the US I don't think you should ever be thinking about that kind of distinction).

E.g. I think this would be OK for:

  • Did we do too much or too little about climate change?
  • Did we have too much or too little immigration of various kinds?
  • Were we too favorable or too unfavorable to unions?
  • Were taxes too high or too low?
  • Is compensating organ donors at market rates a good idea?

And so forth.

Comment by Paul_Christiano on Age-Weighted Voting · 2019-07-12T16:37:38.710Z · EA · GW

I like the goal of politically empowering future people. Here's another policy with the same goal:

  • Run periodic surveys with retrospective evaluations of policy. For example, each year I can pick some policy decisions from {10, 20, 30} years ago and ask "Was this policy a mistake?", "Did we do too much, or too little?", and so on.
  • Subsidize liquid prediction markets about the results of these surveys in all future years. For example, we can bet about people in 2045's answers to "Did we do too much or too little about climate change in 2015-2025?"
  • We will get to see market odds on what people in 10, 20, or 30 years will say about our current policy decisions. For example, people arguing against a policy can cite facts like "The market expects that in 20 years we will consider this policy to have been a mistake."

This seems particularly politically feasible; a philanthropist can unilaterally set this up for a few million dollars of surveys and prediction market subsidies. You could start by running this kind of poll a few times; then opening a prediction market on next year's poll about policy decisions from a few decades ago; then lengthening the time horizon.

(I'd personally expect this to have a larger impact on future-orientation of policy, if we imagine it getting a fraction of the public buy-in that would be required for changing voting weights.)

Comment by Paul_Christiano on Age-Weighted Voting · 2019-07-12T16:16:14.019Z · EA · GW
It would mitigate intertemporal inconsistency

If different generations have different views, then it seems like we'll have the same inconsistency when we shift power from one generation to the next, regardless of when we do it. Under your proposal the change happens when the next generation turns 18-37, but the inconsistency doesn't seem to be lessened. For example, the Brexit inconsistency would have been between 20 years ago and today rather than between today and 20 years from now, but it would have been just as large.

In fact I'd expect age-weighting to have more temporal inconsistency overall: in the status quo you average out idiosyncratic variation over multiple generations and swap out 1/3 of people every 20 years, while in your proposal you concentrate most power in a single generation which you completely change every 20 years.

Age and wisdom: [...] As a counterargument, crystallised intelligence increases with age and, though fluid intelligence decreases with age, it seems to me that crystallised intelligence is more important than fluid intelligence for informed voting. 

Another counterargument: older people have also seen firsthand the long-run consequences of one generation's policies and have more time to update about what sources of evidence are reliable. It's not clear to me whether this is a larger or smaller impact than "expect to live through the consequences of policies." I think folk wisdom often involves deference to elders specifically on questions about long-term consequences.

(I personally think that I'm better at picking policies at 30 than 20, and expect to be better still at 40.)

Comment by Paul_Christiano on Confused about AI research as a means of addressing AI risk · 2019-03-17T00:26:18.096Z · EA · GW

Consumers care somewhat about safe cars, and if safety is mostly an externality then legislators may be willing to regulate it, and there are only so many developers and if the moral case is clear enough and the costs low enough then the leaders might all make that investment.

At the other extreme, if you have no idea how to build a safe car, then there is no way that anyone is going to use a safe car no matter how much people care. Success is a combination of making safety easy and getting people to care / regulating / etc.

Here is the post I wrote about this.

If you have "competitive" solutions, then the required social coordination may be fairly mild. As a stylized example, if the leaders in the field are willing to invest in safety, then you could imagine surviving a degree of non-competitiveness in line with the size of their lead (though the situation is a bit messier than that).

Comment by Paul_Christiano on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-31T02:12:50.310Z · EA · GW
The current price of these companies is already determined by cutthroat competition between hyper-informed investors. If Warren Buffett or Goldman Sachs thinks the market is undervaluing these AI companies, then they'll spend billions bidding up the stock price until they're no longer undervalued.

That sounds like a nice world, but unfortunately I don't think that the market is quite that efficient. (Like the parent, I'm not going to offer any evidence, just express my view.)

You could reply, "then why ain'cha rich?" but it doesn't really work quantitatively for mispricings that would take 10+ years to correct. You could instead ask "then why ain'cha several times richer than you otherwise would be?" but lots of people are in fact several times richer than they otherwise would be after a lifetime of investment. It's not anything mind-blowing or even obvious to an external observer.

"Don't try to beat the market" still seems like a good heuristic, I just think this level of confidence in the financial system is misplaced and "hyper-informed" in particular is really overstating it. (As is "incredibly high prior" elsewhere.)

(ETA: I also agree that if you think you have a special insight about AI, there are likely to be better things to do with it.)

Comment by Paul_Christiano on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-31T02:05:04.328Z · EA · GW

The same neglect that potentially makes AI investments a good deal can also make AI philanthropy a better deal. If there is a huge AI boom, a prescient investment in AI companies might leave you with a larger share of the world economy---but you'll probably still be a much smaller share of total dollars directed at influencing AI.

That said, I do think this is a reasonable default thing to do with dollars if you are interested in the long term but unimpressed with the current menu of long-termist philanthropy (or expect to be better-informed in the future).