Posts

Altruistic equity allocation 2019-10-16T05:54:49.426Z · score: 79 (30 votes)
Ought: why it matters and ways to help 2019-07-26T01:56:34.037Z · score: 52 (24 votes)
Donor lottery details 2017-01-11T00:52:21.116Z · score: 21 (21 votes)
Integrity for consequentialists 2016-11-14T20:56:27.585Z · score: 38 (34 votes)
What is up with carbon dioxide and cognition? An offer 2016-04-06T01:18:03.612Z · score: 11 (13 votes)
Final Round of the Impact Purchase 2015-12-16T20:28:45.709Z · score: 4 (6 votes)
Impact purchase round 3 2015-06-16T17:16:12.858Z · score: 3 (3 votes)
Impact purchase: changes and round 2 2015-04-20T20:52:29.894Z · score: 3 (3 votes)
$10k of Experimental EA Funding 2015-02-25T19:54:29.881Z · score: 19 (19 votes)
Economic altruism 2014-12-05T00:51:44.715Z · score: 5 (7 votes)
Certificates of impact 2014-11-11T05:22:42.438Z · score: 24 (15 votes)
On Progress and Prosperity 2014-10-15T07:03:21.055Z · score: 30 (30 votes)
The best reason to give later 2013-06-14T04:00:31.000Z · score: 0 (0 votes)
Giving now vs. later 2013-03-12T04:00:04.000Z · score: 0 (0 votes)
Risk aversion and investment (for altruists) 2013-02-28T05:00:34.000Z · score: 3 (3 votes)
Why might the future be good? 2013-02-27T05:00:49.000Z · score: 1 (1 votes)
Replaceability 2013-01-22T05:00:52.000Z · score: 1 (1 votes)

Comments

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-05-16T16:30:18.425Z · score: 2 (1 votes) · EA · GW
This is only 2.4 standard deviations assuming returns follow a normal distribution, which they don't.

No, 2.4 standard deviations is 2.4 standard deviations.

It's possible to have distributions for which that's more or less surprising.

For a normal distribution, this happens about once every 200 periods. I totally agree that this isn't a factor of 200 evidence against your view. So maybe saying "falsifies" was too strong.

But no distribution is 2.35 standard deviations below its mean with probability more than 18%. That's literally impossible. And no distribution is 4 standard deviations below its mean with probability >6%. (I'm just adopting your variance estimates here, so I don't think you can really object.)
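For reference, the distribution-free bounds being invoked are Chebyshev's inequality and its one-sided (Cantelli) version; here is a quick numerical check of the quoted figures, as a minimal sketch (the normal tail is included only for comparison):

```python
import math

# Distribution-free tail bounds (only assume a finite mean and variance):
#   two-sided Chebyshev: P(|X - mu| >= k*sigma) <= 1/k^2
#   one-sided Cantelli:  P(X - mu <= -k*sigma) <= 1/(1 + k^2)
for k in (2.35, 4.0):
    chebyshev = 1 / k**2
    cantelli = 1 / (1 + k**2)
    normal_tail = 0.5 * math.erfc(k / math.sqrt(2))  # assumes normality
    print(f"k={k}: Chebyshev <= {chebyshev:.1%}, Cantelli <= {cantelli:.1%}, "
          f"normal tail ~ {normal_tail:.2%}")

# k=2.35: Chebyshev <= 18.1%, Cantelli <= 15.3%  (so "more than 18%" is impossible)
# k=4.0:  Chebyshev <= 6.25%, Cantelli <= 5.9%   (so "more than 6%" is impossible)
```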

This is not directly relevant to the investment strategies I talked about above, but if you use the really simple (and well-supported) expected return model of earnings growth plus dividends plus P/E mean reversion and plug in the current numbers for emerging markets, you get 9-11% real return (Research Affiliates gives 9%, I've seen other sources give 11%). This is not a highly concentrated investment of 50 stocks—it's an entire asset class. So I don't think expecting a 9% return is insane.

Have you looked at backtests of this kind of reasoning for emerging markets? Not of total return, I agree that is super noisy, but just the basic return model? I was briefly very optimistic about EM when I started investing, based on arguments like this one, but then when I looked at the data it just seems like it doesn't work out, and there are tons of ways that emerging market companies could be less appealing for investors that could explain a failure of the model. So I ended up just following the market portfolio, and using much more pessimistic returns estimates.

I didn't look into it super deeply. Here's some even more superficial discussion using numbers I pulled while writing this comment.

Over the decade before this crisis, it seems like EM earnings yields were roughly flat around 8%. Dividend yield was <2%. Real dividends were basically flat. Real price return was slightly negative. And I think on top of all of that the volatility was significantly higher than US markets.

Why expect P/E mean reversion to rescue future returns in this case? It seems like EM companies have lots of on-paper earnings, but they neither distribute those to investors (whether as buybacks or dividends) nor use them to grow future earnings. So their current P/E ratios seem justified, and expecting +5%/year returns from P/E mean reversion seems pretty optimistic.
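To make the building-block model concrete, here is a minimal sketch of the "dividends + growth + valuation mean reversion" arithmetic. The inputs are illustrative placeholders of the rough magnitude discussed above (an ~8% earnings yield, i.e. P/E around 12.5), not RAFI's actual inputs or methodology:

```python
# Naive expected-real-return decomposition:
#   dividend yield + real earnings growth + annualized effect of the P/E
#   reverting to some long-run level over `horizon` years.
def expected_real_return(div_yield, real_growth, current_pe, target_pe, horizon=10):
    valuation_effect = (target_pe / current_pe) ** (1 / horizon) - 1
    return div_yield + real_growth + valuation_effect

# ~2% dividend yield, flat real growth, P/E of 12.5 reverting to 20 over a decade
# (a reversion path worth roughly +5%/year, i.e. the optimistic assumption above)
print(expected_real_return(0.02, 0.00, 12.5, 20.0))   # ~0.068, i.e. ~7%/year
# Add a few points of assumed real earnings growth and you reach the 9-11% headline;
# the question above is whether that growth or that reversion actually shows up.
```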

Like I said, I haven't looked into this deeply, so I'm totally open to someone pointing out that actually the naive return model has worked OK in emerging markets after correcting for some important non-obvious stuff (or even just walking through the above analysis more carefully), and so we should just take the last 10 years of underperformance as evidence that now is a particularly good time to get in. But right now that's not my best guess, much less strongly supported enough that I want to take a big anti-EMH position on it (not to mention that betting against beta is one of the factors that seems most plausible to me and seems best documented, and EM is on the other side of that trade).

which explain why the authors believe their particular implementations of momentum and value have (slightly) better expected return.

I'm willing to believe that, though I'm skeptical that they get enough to pay for their +2% fees.

I don't overly trust backtests, but I trust the process behind VMOT, which is (part of the) reason to believe the cited backtest is reflective of the strategy's long-term performance.[2] VMOT projected returns were based on a 20-year backtest, but you can find similar numbers by looking at much longer data series

The markets today are a lot different from the markets 20 years ago. The problem isn't just that the backtests are typically underpowered, it's that markets become more sophisticated, and everyone gets to see that data. You write:

RAFI believes the value and momentum premia will work as well in the future as they have in the past, and some of the papers I linked above make similar claims. They offer good support for this claim, but in the interest of conservatism, we could justifiably subtract a couple of percentage points from expected return to account for premium degradation.

Having a good argument is one thing---I haven't seen one but also haven't looked that hard, and I'm totally willing to believe that one exists and I think it's reasonable to invest on the basis of such arguments. I also believe that premia won't completely dry up because smart investors won't want the extra volatility if the returns aren't there (and lots of people chasing a premium will add premium-specific volatility).

But without a good argument, subtracting a few percentage points from backtested return isn't conservative. That's probably what you should do with a good argument.

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-04-25T03:07:22.176Z · score: 4 (2 votes) · EA · GW

I haven't done a deep dive on this but I think futures are better than this analysis makes them look.

Suppose that I'm in the top bracket and pay 23% taxes on futures, and that my ideal position is 2x SPY.

In a tax-free account I could buy SPY and 1x SPY futures, to get (2x SPY - 1x interest).

In a taxable account I can buy 1x SPY and 1.3x SPY futures. Then my after-tax expected return is again (2x SPY - 1x interest).
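A minimal sketch of that after-tax exposure arithmetic, under the stylized assumption that futures P&L is taxed (and losses rebated) at a flat 23%:

```python
# "1x physical SPY + 1.3x SPY futures" in a taxable account, with futures gains
# and losses taxed at a flat 23% (stylized; ignores the loss-carryforward issues below).
tax_rate = 0.23
physical_exposure = 1.0                # physical SPY: gains deferred, full 1x for now
futures_notional = 1.3                 # futures P&L taxed each year
after_tax_futures = futures_notional * (1 - tax_rate)    # ~1.0x effective exposure
print(physical_exposure + after_tax_futures)             # ~2.0x, i.e. "2x SPY - interest"
```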

The catch is that if I lose money, some of my wealth will take the form of taxable losses that I can use to offset gains in future years. This has a small problem and a bigger problem:

  • Small problem: it may be some years before I can use up those taxable losses. So I'll effectively pay interest on the money over those years. If real rates were 2% and I had to wait 5 years on average to return to my high-water mark, then this would be an effective tax rate of (2% * 5 years) * (23%) ~ 2.3%. I think that's conservative, and this is mostly negligible.
  • Large problem: if the market goes down enough, I could be left totally broke, and my taxable losses won't do me any good. In particular, if the market went down 52%, then my 2x leveraged portfolio should be down to around 23% of my original net worth, but that will entirely be in the form of taxable losses (losing $100 is like getting a $23 grant, to be redeemed only once I've made enough taxable gains).

So I can't just treat my taxable losses as wealth for the purpose of computing leverage. I don't know exactly what the right strategy is, it's probably quite complicated.

The simplest solution is to just ignore them when setting my desired level of leverage. If you do that, and are careful about rebalancing, it seems like you shouldn't lose very much to taxes in log-expectation (e.g. if the market is down 50%, I think you'd end up with about half of your desired leverage, which is similar to a 25% tax rate). But I'd like to work it out, since apart from this issue futures seem appealing.

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-04-23T21:23:24.989Z · score: 4 (3 votes) · EA · GW

I'm surprised by (and suspicious of) the claim about so many more international shares being non-tradeable, but it would change my view.

I would guess the savings rate thing is relatively small compared to the fact that a much larger fraction of US GDP is investable in the stock market---the US is 20-25% of world GDP, but about 40% of total stock market capitalization, and I think US corporate profits are also ballpark 40% of all publicly traded corporate profits. So if everyone saved the same amount and invested in their home country, US equities would be too cheap.

I agree that under EMH the two bonds A and B are basically the same, so it's neutral. But it's a prima facie reason that A is going to perform worse (not a prima facie reason it will perform better) and it's now pretty murky whether the market is going to err one way or the other.

I'm still pretty skeptical of US equities outperforming, but I'll think about it more.

I haven't thought about the diversification point that much. I don't think that you can just use the empirical daily correlations for the purpose of estimating this, but maybe you can (until you observe them coming apart). It's hard to see how you can be so uncertain about the relative performance of A and B, but still think they are virtually perfectly correlated (but again, that may just be a misleading intuition). I'm going to spend a bit of time with historical data to get a feel for this sometime and will postpone judgment until after doing that.

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-04-23T21:12:58.317Z · score: 4 (2 votes) · EA · GW

I also like GMP, and find the paper kind of surprising. I checked the endpoints stuff a bit and it seems like it can explain a small effect but not a huge one. My best guess is that going from equities to GMP is worth like +1-2% risk-free returns.

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-04-22T23:53:08.039Z · score: 21 (4 votes) · EA · GW

I like the basic point about leverage and think it's quite robust.

But I think the projected returns for VMOT+MF are insane. And as a result the 8x leverage recommendation is insane, someone who does that is definitely just going to go broke. (This is similar to Carl's complaint.)

My biggest problem with this estimate is that it kind of sounds crazy and I don't know of very good evidence in its favor. But it seems like these claimed returns are so high that you can also basically falsify them by looking at the data between when VMOT was founded and when you wrote this post.

VMOT is down 20% in the last 3 years. This estimate would expect returns of 27% +- 20% over that period, so you're like 2.4 standard deviations down.

When you wrote this post, before the crisis, VMOT was only like 1.4 standard deviations below your expectations. So maybe we should be more charitable?

But that's just because it was a period of surprisingly high market returns. VMOT lagged VT by more than 35% between its inception and when you wrote this post, whereas this methodology expects it to outperform by more than 12% over that period. VMOT/VT are positively correlated, and based on your numbers it looks like the stdev of excess performance should be <10%. So that's like 4-5 standard deviations of surprising bad performance already.
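Here is the arithmetic behind those standard-deviation counts, using only the round numbers quoted in this comment (so treat it as illustrative rather than a careful backtest):

```python
# z-score = (actual - expected) / stdev, with the round numbers quoted above
def z(actual, expected, stdev):
    return (actual - expected) / stdev

# Absolute: VMOT down ~20% over ~3 years vs. a projected +27% +/- 20%
print(z(-0.20, 0.27, 0.20))   # ~ -2.35 standard deviations

# Relative: VMOT lagged VT by >35% vs. ~12% expected outperformance,
# with the stdev of excess performance taken to be ~10%
print(z(-0.35, 0.12, 0.10))   # ~ -4.7 standard deviations
```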

Is something wrong with this analysis?

If that's right, I definitely object to the methodology "take an absurd backtest that we've already falsified out of sample, then cut a few percentage points off and call it conservative." In this case it looks like even the "conservative" estimate is basically falsified.

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-04-22T23:22:19.139Z · score: 2 (1 votes) · EA · GW
We could account for this by treating mean return and standard deviation as distributions rather than point estimates, and calculating utility-maximizing leverage across the distribution instead of at a single point. This raises a further concern that we don’t even know what distribution the mean and standard deviation have, but at least this gets us closer to an accurate model.

Why not just take the actual mean and standard deviation, averaging across the whole distribution of models?

What exactly is the "mean" you are quoting, if it's not your subjective expectation of returns?

(Also, I think the costs of choosing leverage wrong are pretty symmetric.)

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-04-22T23:19:39.640Z · score: 8 (2 votes) · EA · GW

My understanding is that the Sharpe ratio of the global portfolio is quite similar to that of the equity portfolio (e.g. see here for data on the period from 1960-2017, finding 0.36 for the global market and 0.37 for equities).

I still do expect the broad market to outperform equities alone, but I don't know where the super-high estimates for the benefits of diversification are coming from, and I expect the effect to be much more modest than the one described in the linked post by Ben Todd. Do you know what's up with the discrepancy? It could be about choice of time periods or some technical detail, but it's kind of a big discrepancy. (My best guess is an error in the linked post.)

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-04-22T23:09:35.879Z · score: 5 (3 votes) · EA · GW
To use leverage, you will probably end up having to pay about 1% on top of short-term interest rates

Not a huge deal, but it seems like the typical overhead is about 0.3%:

  • This seems to be the implicit rate I pay if I buy equity futures rather than holding physical equities (a historical survey: http://cdar.berkeley.edu/wp-content/uploads/2016/12/futures-gunther-etal-111616.pdf , though you can also check yourself for a particular future you are considering buying, the main complication is factoring in dividend prices)
  • Wei Dai has recently been looking into box spread financing, with rates around 0.55% for 3 years, about 0.3% above the short-term treasury rate.
  • If you have a large account, Interactive Brokers charges benchmark+0.3% interest.

I suspect risk-free + 0.3% is basically the going rate, though I also wouldn't be too surprised if a leveraged ETF could get a slightly better rate.

If you are leveraging as much as described in this post, it seems reasonably important to get at least an OK rate. 1% overhead is large enough that it claws back a significant fraction of the value from leverage (at least if you use more realistic return estimates).
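As a rough illustration of that claw-back, here is a minimal sketch using the standard Kelly/Merton growth-rate approximation. The return assumptions (a ~5% equity risk premium and ~16% volatility) are generic placeholders, not estimates from this post:

```python
# Approximate expected log growth of a portfolio levered k-to-1, where the
# borrowed fraction (k - 1) is financed at the risk-free rate plus a spread:
#   g(k) ~= r + k*(mu - r) - (k - 1)*spread - 0.5 * k**2 * sigma**2
mu_excess, sigma, r = 0.05, 0.16, 0.0   # illustrative placeholders

def growth(k, spread):
    return r + k * mu_excess - (k - 1) * spread - 0.5 * k**2 * sigma**2

for spread in (0.003, 0.01):
    gain_over_unlevered = growth(2.0, spread) - growth(1.0, 0.0)
    print(f"spread {spread:.1%}: 2x leverage adds ~{gain_over_unlevered:.2%}/yr")
# spread 0.3%: ~ +0.9%/yr of expected log growth
# spread 1.0%: ~ +0.2%/yr -- most of the benefit of leverage is gone
```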

Comment by paul_christiano on How Much Leverage Should Altruists Use? · 2020-04-22T22:40:12.316Z · score: 5 (3 votes) · EA · GW

I think it's pretty dangerous to reason "asset X has outperformed recently, so I expect it to outperform in the future." An asset can outperform because it's becoming more expensive, which I think is partly the case here.

This is most obvious in the case of bonds---if 30-year bonds from A are yielding 2%/year and then fall to 1.5%/year over a decade, while 30-year bonds from B are yielding 2%/year and stay at 2%/year, then it will look like the bonds from A are performing about twice as well over the decade. But this is a very bad reason to invest in A. It's anti-inductive not only because of EMH but for the very simple reason that return chasing leads you to buy high and sell low.

This is less straightforward with equities because earnings accounting is (much) less transparent than bond yields, but I think it's a reasonable first pass guess about what's going on (combined with some legitimate update about people becoming more pessimistic about corporate performance/governance/accounting outside of the US). Would be interested in any data contradicting this picture.

I do think that international equities will do worse than US equities after controlling for on-paper earnings. But they have significantly higher on-paper earnings, and I don't really see how to take a bet about which of these effects is larger without getting into way more nitty gritty about exactly what mistake we think which investors are making. If I had to guess I'd bet that US markets are salient to investors in many countries and their recent outperformance has made many people overweight them, so that they will very slightly underperform. But I'd be super interested in good empirical evidence on this front too.

(The RAFI estimates generally look a bit unreasonable to me, and I don't know of an empirical track record or convincing analysis that would make me like them more.)

I personally just hold the market portfolio. So I'm guaranteed to outperform the average of you and Michael Dickens, though I'm not sure which one of you is going to do better than me and which one is going to do worse.

Comment by paul_christiano on How worried should I be about a childless Disneyland? · 2019-10-31T20:44:45.070Z · score: 6 (3 votes) · EA · GW

My main point was that in any case what matters is the degree of alignment of the AI systems, and not their consciousness. But I agree with what you are saying.

If our plan for building AI depends on having clarity about our values, then it's important to achieve such clarity before we build AI---whether that's clarity about consciousness, population ethics, what kinds of experience are actually good, how to handle infinities, weird simulation stuff, or whatever else.

I agree consciousness is a big ? in our axiology, though it's not clear if the value you'd lose from saying "only create creatures physiologically identical to humans" is large compared to all the other value we are losing from the other kinds of uncertainty.

I tend to think that in such worlds we are in very deep trouble anyway and won't realize a meaningful amount of value regardless of how well we understand consciousness. So while I may care about them a bit from the perspective of parochial values (like "is Paul happy?") I don't care about them much from the perspective of impartial moral concerns (which is the main perspective where I care about clarifying concepts like consciousness).

Comment by paul_christiano on How worried should I be about a childless Disneyland? · 2019-10-30T16:35:57.465Z · score: 14 (11 votes) · EA · GW

I don't think it matters that much (for the long-term) if the AI systems we build in the next century are conscious. What matters is how they think about what possible futures they can bring about.

If AI systems are aligned with us, but turned out not to be conscious or not very conscious, then they would continue this project of figuring out what is morally valuable and so bring about a world we'd regard as good (even though it likely contains very few minds that resemble either us or them).

If AI systems are conscious but not at all aligned with us, then why think that they would create conscious and flourishing successors?

So my view is that alignment is the main AI issue here (and reflecting well is the big non-AI issue), with questions about consciousness being in the giant bag of complex questions we should try to punt to tomorrow.

Comment by paul_christiano on Conditional interests, asymmetries and EA priorities · 2019-10-22T16:29:04.735Z · score: 8 (4 votes) · EA · GW
Only Actual Interests: Interests provide reasons for their further satisfaction, but neither an interest nor its satisfaction provides reasons for the existence of that interest over its nonexistence.
It follows from this that a mind with no interests at all is no worse than a mind with interests, regardless of how satisfied its interests might have been. In particular, a joyless mind with no interest in joy is no worse than one with joy. A mind with no interests isn't much of a mind at all, so I would say that this effectively means it's no worse for the mind to not exist.

If you make this argument that "it's no worse for the joyful mind to not exist," you can make an exactly symmetrical argument that "it's not better for the suffering mind to not exist." If there was a suffering mind they'd have an interest in not existing, and if there was a joyful mind they'd have an interest in existing.

In either case, if there is no mind then we have no reason to care about whether the mind exists, and if there is a mind then we have a reason to act---in one case we prefer the mind exist, and in the other case we prefer the mind not exist.

To carry your argument you need an extra principle along the lines of "the existence of unfulfilled interests is bad." Of course that's what's doing all the work of the asymmetry---if unfulfilled interests are bad and fulfilled interests are not good, then existence is bad. But this has nothing to do with actual interests, it's coming from very explicitly setting the zero point at the maximally fulfilled interest.

Comment by paul_christiano on Conditional interests, asymmetries and EA priorities · 2019-10-22T16:20:26.216Z · score: 4 (2 votes) · EA · GW
A question here is whether "interests to not suffer" are analogous to "interests in experiencing joy." I believe that Michael's point is that, while we cannot imagine suffering without some kind of interest to have it stop (at least in the moment itself), we can imagine a mind that does not care for further joy.

I don't think that's the relevant analogy though. We should be comparing "Can we imagine suffering without an interest in not having suffered?" to "Can we imagine joy without an interest in having experienced joy?"

Let's say I see a cute squirrel and it makes me happy. Is it bad that I'm not in virtual reality experiencing the greatest joys imaginable?

I can imagine saying "no" here, but if I do then I'd also say it's not good that you are not in a virtual reality experiencing great suffering. If you were in a virtual reality experiencing great joy it would be against your interests to prevent that joy, and if you were in a virtual reality experiencing great suffering it would be in your interests to prevent that suffering.

You could say: the actually existing person has an interest in preventing future suffering, while they may have no interest in experiencing future joy. But now the asymmetry is just coming from the actual person's current interests in joy and suffering, so we didn't need to bring in all of this other machinery, we can just directly appeal to the claimed asymmetry in interests.

Comment by paul_christiano on Conditional interests, asymmetries and EA priorities · 2019-10-22T03:59:15.498Z · score: 12 (7 votes) · EA · GW
suffering by its very definition implies an interest in its absence, so there is a reason to prevent it.

If a mind exists and suffers, we'd think it better had it not existed (by virtue of its interest in not suffering). And if a mind exists and experiences joy, we'd think it worse had it not existed (by virtue of its interest in experiencing joy). Prima facie this seems exactly symmetrical, at least as far as the principles laid out here are concerned.

Depending on exactly how you make your view precise, I'd think that we'd either end up not caring at all about whether new minds exist (since if they didn't exist there'd be no relevant interests), or balancing the strength of those interests in some way to end up with a "zero" point where we are indifferent (since minds come with interests in both directions concerning their own existence). I don't yet see how you end up with the asymmetric view here.

Comment by paul_christiano on Altruistic equity allocation · 2019-10-17T15:28:33.017Z · score: 4 (3 votes) · EA · GW
would there be a specific metric (e.g. estimated QALYs saved) or would donors construct individual conversion rates (at least implicitly) based on their evaluations of how effective charities are likely to be over their lifetimes?

It would come down to donor predictions, and different donors will generally have quite different predictions (similar to for-profit investing). I agree there is a further difference where donors will also value different outputs differently.

One other advantage of not quantizing the individual contributions of employees is that they can sum up to more than 100% - all twenty employees of an organisation may each believe that they are responsible for at least 10% of its success, which is mathematically inconsistent but may be a useful fiction (and in some sense it could be true - there may be threshold effects such that if any individual employee left the impact of the organisation would actually be 10% worse) - if impact equity is explicitly parceled out, everyone's fractions will sum to 1.

I mostly consider this an advantage of quantifying :)

(I also think that impacts should sum to 1, not >1---in the sense that a project is worthwhile iff there is a way of allocating its impact that makes everyone happy, modulo the issue where you may need to separate impact into tranches for unaligned employees who value different parts of that impact.)

However, it might also lead to discontent if employees don't consider the impact equity allocations to be fair (whether between different employees, between employees and founders, or between employees and investors).

This seems like a real downside.

Comment by paul_christiano on The Future of Earning to Give · 2019-10-14T15:42:37.837Z · score: 33 (9 votes) · EA · GW
Of course, you could enter a donor lottery and, if you win, just give it all to an EA fund without doing any research yourself. I don't know if this would be better or worse than just donating directly to the EA funds.

It seems to me like this is unlikely to be worse. Is there some mechanism you have in mind? Risk-aversion for the EA fund? (Quantitatively that seems like it should matter very little at the scale of $100,000.)

At a minimum, it seems like the EA funds are healthier if their accountability is to a smaller number of larger donors who are better able to think about what they are doing.

In terms of upside from getting to think longer, I don't think it's at all obvious that most donors would decide on EA funds (or on whichever particular EA fund they initially lean towards). And as a norm, I think it's easy for EAs to argue that donor lotteries are an improvement over what most non-EA donors do, while the argument for EA funds comes down a lot to personal trust.

I don't think the argument for economies of scale really applies here, since the grantmakers are already working full-time on research in the areas they're making grants for.

I don't think all of the funds have grantmakers working full-time on having better views about grantmaking. That said, you can't work full-time if you win a $100,000 lottery either. I agree you are likely to come down to deciding whose advice to trust and doing meta-level reasoning.

Comment by paul_christiano on Are we living at the most influential time in history? · 2019-09-15T22:46:33.132Z · score: 45 (21 votes) · EA · GW

I think the outside view argument for acceleration deserves more weight. Namely:

  • Many measures of "output" track each other reasonably closely: how much energy we can harness, how many people we can feed, GDP in modern times, etc.
  • Output has grown 7-8 orders of magnitude over human history.
  • The rate of growth has itself accelerated by 3-4 orders of magnitude. (And even early human populations would have seemed to grow very fast to an observer watching the prior billion years of life.)
  • It's pretty likely that growth will accelerate by another order of magnitude at some point, given that it's happened 3-4 times before and faster growth seems possible.
  • If growth accelerated by another order of magnitude, a hundred years would be enough time for 9 orders of magnitude of growth (more than has occurred in all of human history); see the sketch after this list.
  • Periods of time with more growth seem to have more economic or technological milestones, even if they are less calendar time.
  • Heuristics like "the next X years are very short relative to history, so probably not much will happen" seem to have a very bad historical track record when X is enough time for lots of growth to occur, and so it seems like a mistake to call them the "outside view."
  • If we go a century without doubling of growth rates, it will be (by far) the most that output has ever grown without significant acceleration.
  • Data is noisy and data modeling is hard, but it is difficult to construct a model of historical growth that doesn't have a significant probability of massive growth within a century.
  • I think the models that are most conservative about future growth are those where stable growth is punctuated by rapid acceleration during "revolutions" (with the agricultural acceleration around 10,000 years ago and the industrial revolution causing continuous acceleration from 1600-1900).
  • On that model human history has had two revolutions, with about two orders of magnitude of growth between them, each of which led to >10x speedup of growth. It seems like we should have a significant probability (certainly >10%) of another revolution occurring within the next order of magnitude of growth, i.e. within the next century.
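A minimal sketch of the growth arithmetic behind the "another order of magnitude" bullet, using ~3%/year as a ballpark for current gross world product growth (both rates are illustrative):

```python
import math

# Orders of magnitude of output growth in a century, as a function of growth rate.
for rate in (0.03, 0.30):   # ~current growth vs. "another order of magnitude" faster
    ooms = 100 * math.log10(1 + rate)
    print(f"{rate:.0%}/yr for 100 years -> ~{ooms:.1f} orders of magnitude")
# 3%/yr  -> ~1.3 orders of magnitude per century
# 30%/yr -> ~11 orders of magnitude per century, comfortably more than the
#           ~7-8 orders of magnitude of all prior human history
```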
Comment by paul_christiano on Ought: why it matters and ways to help · 2019-07-29T16:35:01.505Z · score: 10 (6 votes) · EA · GW

In-house.

Comment by paul_christiano on Age-Weighted Voting · 2019-07-15T15:45:14.972Z · score: 4 (2 votes) · EA · GW
I suspect many people responding to surveys about events which happened 10-30 years ago would be doing so with the aim of influencing the betting markets which affect near future policy.

It would be good to focus on questions for which that's not so bad, because our goal is to measure some kind of general sentiment in the future---if in the future people feel like "we should now do more/less of X" then that's pretty correlated with feeling like we did too little in the past (obviously not perfectly---we may have done too little 30 years ago but overcorrected 10 years ago---but if you are betting about public opinion in the US I don't think you should ever be thinking about that kind of distinction).

E.g. I think this would be OK for:

  • Did we do too much or too little about climate change?
  • Did we have too much or too little immigration of various kinds?
  • Were we too favorable or too unfavorable to unions?
  • Were taxes too high or too low?
  • Is compensating organ donors at market rates a good idea?

And so forth.

Comment by paul_christiano on Age-Weighted Voting · 2019-07-12T16:37:38.710Z · score: 77 (30 votes) · EA · GW

I like the goal of politically empowering future people. Here's another policy with the same goal:

  • Run periodic surveys with retrospective evaluations of policy. For example, each year I can pick some policy decisions from {10, 20, 30} years ago and ask "Was this policy a mistake?", "Did we do too much, or too little?", and so on.
  • Subsidize liquid prediction markets about the results of these surveys in all future years. For example, we can bet about people in 2045's answers to "Did we do too much or too little about climate change in 2015-2025?"
  • We will get to see market odds on what people in 10, 20, or 30 years will say about our current policy decisions. For example, people arguing against a policy can cite facts like "The market expects that in 20 years we will consider this policy to have been a mistake."

This seems particularly politically feasible; a philanthropist can unilaterally set this up for a few million dollars of surveys and prediction market subsidies. You could start by running this kind of poll a few times; then opening a prediction market on next year's poll about policy decisions from a few decades ago; then lengthening the time horizon.

(I'd personally expect this to have a larger impact on future-orientation of policy, if we imagine it getting a fraction of the public buy-in that would be required for changing voting weights.)

Comment by paul_christiano on Age-Weighted Voting · 2019-07-12T16:16:14.019Z · score: 31 (13 votes) · EA · GW
It would mitigate intertemporal inconsistency

If different generations have different views, then it seems like we'll have the same inconsistency when we shift power from one generation to the next, regardless of when we do it. Under your proposal the change happens when the next generation turns 18-37, but the inconsistency doesn't seem to be lessened. For example, the Brexit inconsistency would have been between 20 years ago and today rather than between today and 20 years from now, but it would have been just as large.

In fact I'd expect age-weighting to have more temporal inconsistency overall: in the status quo you average out idiosyncratic variation over multiple generations and swap out 1/3 of people every 20 years, while in your proposal you concentrate most power in a single generation which you completely change every 20 years.

Age and wisdom: [...] As a counterargument, crystallised intelligence increases with age and, though fluid intelligence decreases with age, it seems to me that crystallised intelligence is more important than fluid intelligence for informed voting. 

Another counterargument: older people have also seen firsthand the long-run consequences of one generation's policies and have more time to update about what sources of evidence are reliable. It's not clear to me whether this is a larger or smaller impact than "expect to live through the consequences of policies." I think folk wisdom often involves deference to elders specifically on questions about long-term consequences.

(I personally think that I'm better at picking policies at 30 than 20, and expect to be better still at 40.)

Comment by paul_christiano on Confused about AI research as a means of addressing AI risk · 2019-03-17T00:26:18.096Z · score: 6 (3 votes) · EA · GW

Consumers care somewhat about safe cars; if safety is mostly an externality, then legislators may be willing to regulate it; and there are only so many developers, so if the moral case is clear enough and the costs low enough, the leaders might all make that investment.

At the other extreme, if you have no idea how to build a safe car, then there is no way that anyone is going to use a safe car no matter how much people care. Success is a combination of making safety easy and getting people to care / regulating / etc.

Here is the post I wrote about this.

If you have "competitive" solutions, then the required social coordination may be fairly mild. As a stylized example, if the leaders in the field are willing to invest in safety, then you could imagine surviving a degree of non-competitiveness in line with the size of their lead (though the situation is a bit messier than that).

Comment by paul_christiano on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-31T02:12:50.310Z · score: 16 (5 votes) · EA · GW
The current price of these companies is already determined by cutthroat competition between hyper-informed investors. If Warren Buffett or Goldman Sachs thinks the market is undervaluing these AI companies, then they'll spend billions bidding up the stock price until they're no longer undervalued.

That sounds like a nice world, but unfortunately I don't think that the market is quite that efficient. (Like the parent, I'm not going to offer any evidence, just express my view.)

You could reply, "then why ain'cha rich?" but it doesn't really work quantitatively for mispricings that would take 10+ years to correct. You could instead ask "then why ain'cha several times richer than you otherwise would be?" but lots of people are in fact several times richer than they otherwise would be after a lifetime of investment. It's not anything mind-blowing or even obvious to an external observer.

"Don't try to beat the market" still seems like a good heuristic, I just think this level of confidence in the financial system is misplaced and "hyper-informed" in particular is really overstating it. (As is "incredibly high prior" elsewhere.)

(ETA: I also agree that if you think you have a special insight about AI, there are likely to be better things to do with it.)

Comment by paul_christiano on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-31T02:05:04.328Z · score: 7 (2 votes) · EA · GW

The same neglect that potentially makes AI investments a good deal can also make AI philanthropy a better deal. If there is a huge AI boom, a prescient investment in AI companies might leave you with a larger share of the world economy---but you'll probably still be a much smaller share of total dollars directed at influencing AI.

That said, I do think this is a reasonable default thing to do with dollars if you are interested in the long term but unimpressed with the current menu of long-termist philanthropy (or expect to be better-informed in the future).

Comment by paul_christiano on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-25T18:20:31.614Z · score: 4 (3 votes) · EA · GW

Trusting random.org doesn't seem so bad (probably a bit better than trusting IRIS, since IRIS isn't in the business of claiming to be non-manipulable). I don't know if they support arbitrary winning probabilities for draws, but probably there is some way to make it work.

(That does seem strictly worse than hashing powerball numbers though, which seem more trustworthy than random.org and easier to get.)

Comment by paul_christiano on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-25T18:01:53.688Z · score: 2 (1 votes) · EA · GW

I'm not sure what the myriad of more responsible ways are. If you trust CEA to not mess with the lottery more than you trust IRIS not to change their earthquake reports to mess with the lottery, then just having CEA pick numbers out of a hat could be better.

It definitely seems like free-riding on some other public lottery drawing that people already trust might be better.

Comment by paul_christiano on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-25T17:54:59.160Z · score: 3 (2 votes) · EA · GW

There is plenty of entropy in the API responses, that's not the worst concern.

I think the most serious question is whether a participant can influence the lottery draw (e.g. by getting IRIS to change low order digits of the reported latitude or longitude).

Comment by paul_christiano on How to improve EA Funds · 2018-04-14T01:39:28.025Z · score: 4 (4 votes) · EA · GW

In general I feel like donor lotteries should be preferred as a default over small donations to EA funds (winners can ultimately donate to EA funds if they decide that's the best option).

What are the best arguments in favor of EA funds as a recommendation over lotteries? Looking more normal?

(Currently there are no active lotteries, this is not a recommendation for short-term donations.)

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-06T20:23:52.817Z · score: 1 (1 votes) · EA · GW

This standard of betterness is all you need to conclude: "every inefficient outcome is worse than some efficient outcome."

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-06T20:21:44.898Z · score: 2 (2 votes) · EA · GW

If they endorsed the view you say they do with respect to scalping, wouldn't they say "provided there was perfectly equitable distribution of incomes, scalping ensures that goods go to those who value them most". Missing out the first bit gives an extremely misleading impression of their view, doesn't it?

When economists say "how much do you value X" they are usually using the dictionary definition of value as "estimate the monetary worth." Economists understand that valuing something involves an implicit denominator and "who values most" will depend on the choice of denominator. You get approximately the same ordering for any denominator which can be easily transferred between people, and when they say "A values X more than B" they mean in that common ordering. Economists understand that that sense of value isn't synonymous with moral value (which can't be easily transferred between people).

The reason that easily transferrable goods serve as a good denominator is because at the optimal outcome they should exactly track whatever the planner cares about (otherwise we could transfer them).

Expressing economists' actual view would take several additional sentences. The quote seems like a reasonable concise simplification.

Your version isn't true: an equitable distribution of incomes doesn't imply that everyone has roughly the same utility per marginal dollar. A closer formulation would be "Supposing that the policy-maker is roughly indifferent between giving a dollar to each person [e.g. as would be the case if the policy-maker has adopted roughly optimal policies in other domains, since dollars can be easily transferred between people] then scalping will ensure that the ticket goes to the person who the policy-maker would most prefer have it."

Immediately before your quote from Mankiw's book, he says "Equity involves normative judgments that go beyond the realm of economics and enter into the realm of political philosophy. We concentrate on efficiency as the social planner's goal. Keep in mind, however, that real policy-makers often care about equity as well." I agree the discussion is offensively simplified because it's a 101 textbook, but don't think this is evidence of fundamental confusion. If we read "equity" as "has the same marginal utility from a dollar" then this seems pretty in line with the utilitarian position.

Comment by Paul_Christiano on [deleted post] 2018-01-05T09:58:00.100Z

It's on my blog. I don't think the scheme works, and in general it seems like any scheme introduces incentives to not look like a beneficiary. If I were to do this now, I would just run a prediction market on the total # of donations, have the match success level go from 50% to 100% over the spread, and use a small fraction of proceeds to place N buy and sell orders against the final book.

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-03T18:11:59.212Z · score: 3 (3 votes) · EA · GW

Economists who accept your crucial premise would necessarily think that there should be no redistribution at all, since the net effect of redistribution is to move goods from people who were originally willing to pay more to people who were originally willing to pay less. But "redistribution is always morally bad" is an extreme outlier view amongst economists.

See for example the IGM poll on the minimum wage, where there is significant support for small increases to the minimum wage despite acknowledgment of the allocative inefficiency. The question most economists ask is "is this an efficient way to redistribute wealth? do the benefits justify the costs?" They don't consider the case settled because it decreases allocative efficiency (as it obviously does).

I don't think it would be that hard to find lots of examples of economists defending particular policies on the basis that those willing to pay more should get the good.

People can make that argument as part of a broader principle like "we should give goods to people who are willing to pay most, and redistribute money in the most efficient way we can."

For example, I also often argue that the people willing to pay more should get the good. But I don't accept your crucial premise even a tiny bit. The same is true of the handful of economists I've taken a class from or interacted with at length, and so I'd guess it's the most common view.

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-03T18:04:02.967Z · score: 5 (5 votes) · EA · GW

Obviously what is optimal does depend on what we can compel the producer to do; if we can collect taxes, that will obviously be better. If we can compel the producer to suffer small costs to make the world better, there are better things to compel them to do. If we can create an environment in which certain behaviors are more expensive for the producer because they are socially unacceptable, there are better things to deem unacceptable. And so on.

More broadly, as a society we want to pick the most efficient ways to redistribute wealth, and as altruists we'd like to use our policy influence in the most efficient ways to redistribute wealth. Forcing the tickets to sell below market value is an incredibly inefficient way to redistribute wealth. So it can be a good idea in worlds where there are almost no options, but seems very unlikely to be a good idea in practice.

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-03T09:24:04.046Z · score: 2 (2 votes) · EA · GW

In actual fact, they are appealing to preference utilitarianism. This is a moral theory.

Economists are quite often appealing to a much simpler account of betterness: if everyone prefers option A to option B, then option A is better than option B.

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-03T09:13:52.706Z · score: 6 (6 votes) · EA · GW

Here is a stronger version of the pro-market-price argument:

  • The producer could sell a ticket for $1000 to Rich and then give $950 to Pete. This leaves both Rich and Pete better off, often very substantially.
  • In reality, Pete is not an optimal target for philanthropy, and so the producer could do even better by selling the ticket for $1000 to Rich and then giving to their preferred charity.
  • No matter what the producer wants, they can do better by selling the ticket at market price. And no matter what we want as advocates for a policy, we can do better by allowing them to. (In fact the world is complicated and it's not this clean, but that seems orthogonal to your objection.)

This is still not the strongest argument that can be made, but it's better than the argument from your crucial premise. I think there are few serious economists who accept your crucial premise in the way you mean it, though many might use it as a definition of welfare (but wouldn't consider total welfare synonymous with moral good).

Comment by paul_christiano on Announcing the 2017 donor lottery · 2017-12-22T04:58:47.521Z · score: 2 (2 votes) · EA · GW

What are the biggest upsides of transparency?

The actual value of the information produced seems modest.

Comment by paul_christiano on Announcing the 2017 donor lottery · 2017-12-18T06:41:14.972Z · score: 0 (0 votes) · EA · GW

You have diminishing returns to money, i.e. your utility vs. money curve is curved down. So a gamble with mean 0 has some cost to you, approximately (curvature) * (variance), that I was referring to as the cost-via-risk. This cost is approximately linear in the variance, and hence quadratic in the block size.
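A minimal sketch of that approximation. The bankroll and win probability below are invented purely for illustration (they are not figures from the post); the point is just the scaling:

```python
# Certainty-equivalent cost of a mean-zero gamble ~= 0.5 * curvature * variance.
# For log utility over a giving "bankroll" W, curvature ~= 1/W, so cost ~= Var / (2W).
# If your donation is a fixed fraction p of the pot P, payout variance is
# p*(1-p)*P**2 -- quadratic in pot size, hence ~4x the cost for a 2x pot.
W = 1_000_000   # hypothetical present value of future giving (illustrative only)
p = 0.01        # hypothetical win probability, i.e. donation = p * P

def risk_cost(pot):
    variance = p * (1 - p) * pot**2
    return variance / (2 * W)

print(risk_cost(100_000), risk_cost(200_000))   # ~$50 vs ~$198
```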

Comment by paul_christiano on Announcing the 2017 donor lottery · 2017-12-17T19:21:15.000Z · score: 6 (8 votes) · EA · GW

A $200k lottery has about 4x as much cost-via-risk as a $100k lottery. Realistically I think that smaller sizes (with the option to lottery up further) are significantly better than bigger pots. As the pot gets bigger you need to do more and more thinking to verify that the risk isn't an issue.

If you were OK with variable pot sizes, I think the thing to do would be:

  • The lottery will be divided up into blocks.
  • Each block will have the same size, which will be something between $75k and $150k.
  • We provide a backstop only if the total donation is < $75k. Otherwise, we just divide the total up into chunks between $75k and $150k, aiming for about $100k each.
Comment by paul_christiano on Effective Altruism Grants project update · 2017-10-01T16:33:18.144Z · score: 2 (2 votes) · EA · GW

However, I suspect that this intuition was biased (upward), because I more often think in terms of "non-EA money". In non-EA money, CEA time would have a much higher nominal value. But if you think EA money can be used to buy good outcomes very cost-effectively (even at the margin) then $75 could make sense.

Normally people discuss the value of time by figuring out how many dollars they'd spend to save an hour. It's kind of unusual to ask how many dollars you'd have someone else spend so that you save an hour.

Comment by paul_christiano on Capitalism and Selfishness · 2017-09-16T03:17:54.292Z · score: 3 (3 votes) · EA · GW

Finally, capitalism requires a sufficiently self-interested culture such that it can sustain compounding capital accumulation through the sale of ever-greater commodities.

This is a common claim, but seems completely wrong. An economy of perfectly patient agents will accumulate capital much faster than a community that consumes 50% of its output. The patient agents will invest in infrastructure and technology and machines and so on to increase their future wealth.

The capitalists have to maximise productivity through technological innovation, wage repression, and so forth, or they are run into the ground and bankrupted by market competition

In an efficient market, the capitalists earn rents on their capital whatever they do.

Comment by paul_christiano on Nothing Wrong With AI Weapons · 2017-08-29T17:16:46.418Z · score: 9 (7 votes) · EA · GW

That sounds a lot more expensive than bullets. You can already kill someone for a quarter.

The main cost of killing someone with a bullet is labor. The point is that autonomous weapons reduce the labor required.

alter the balance of power between different types of groups in a specific way.

New technologies do often decrease the cost of killing people and increase the number of civilians who can be killed by a group of fixed size (see: guns, explosives, nuclear weapons).

Comment by paul_christiano on Nothing Wrong With AI Weapons · 2017-08-29T04:50:34.939Z · score: 10 (8 votes) · EA · GW

The two arguments I most often hear are:

  • Cheap autonomous weapons could greatly decrease the cost of ending life---within a decade they could easily be the cheapest form of terrorism by far, and may eventually be the cheapest form of mass destruction in general. Think insect-sized drones carrying toxins or explosive charges that are lethal if detonated inside the skull.

  • The greater the military significance of AI, the more difficult it becomes for states to share information and coordinate regarding its development. This might be bad news for safety.

Comment by paul_christiano on Blood Donation: (Generally) Not That Effective on the Margin · 2017-08-06T18:38:41.910Z · score: 8 (8 votes) · EA · GW

This seems to confuse costs and benefits, I don't understand the analysis. (ETA: the guesstimate makes more sense.)

I'm going to assume that a unit of blood is the amount that a single donor gives in a single session. (ETA: apparently a donation is 0.5 units of red blood cells. The analysis below is correct only if red blood cells are 50% of the value of a donation. I have no idea what the real ratio is. If red blood cells are most of the value, adjust all the values downwards by a factor of 2.)

The cost of donating a unit is perhaps 30 minutes (YMMV), and has nothing to do with 120 pounds. (The cost from having less blood for a while might easily dwarf the time cost, I'm not sure. When I've donated the time cost was significantly below 30 minutes.)

Under the efficient-NHS hypothesis, the value of marginal blood to the healthcare system is 120 pounds. We can convert this to QALYs using the marginal rate of (20,000 pounds / QALY), to get 0.6% of a QALY.

If you value all QALYs equally and think that marginal AMF donations buy them at 130 pounds / QALY, then your value for QALYs should be at most 130 pounds / QALY (otherwise you should just donate more). It should be exactly 130 pounds / QALY if you are an AMF donor (otherwise you should just donate less).

So 0.6% of a QALY should be worth about 0.8 pounds. If it takes 30 minutes to produce a unit of blood which is worth 0.6% of a QALY, then it should be producing value at 1.6 pounds / hour.

If the healthcare system was undervaluing blood by one order of magnitude, this would be 16 pounds / hour. So I think "would have to be undervaluing the effectiveness of blood donations by 2 orders of magnitude" is off by about an order of magnitude.
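Collecting that chain of conversions in one place, as a minimal sketch using the post's figures and the ~30-minute time estimate:

```python
# Implied value of donor time under the "efficient-NHS" assumptions above.
nhs_value_per_donation = 120    # pounds: marginal value of one donation to the NHS
nhs_cost_per_qaly = 20_000      # pounds per QALY at the NHS margin
donor_price_per_qaly = 130      # pounds per QALY via marginal AMF donations
hours_per_donation = 0.5

qalys = nhs_value_per_donation / nhs_cost_per_qaly      # 0.006 QALY per donation
pounds = qalys * donor_price_per_qaly                   # ~0.78 pounds per donation
rate = pounds / hours_per_donation                      # ~1.6 pounds per hour
print(qalys, pounds, rate)
# If the NHS undervalued blood by 10x, multiply by 10: ~16 pounds per hour.
```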

The reason this seems so inefficient has little to do with EA's quantitative mindset, and everything to do with the utilitarian perspective that all QALYs are equal. The revealed preferences of most EA's imply that they value their QALYs much more highly than those of AMF beneficiaries. Conventional morality suggests that people extend some of their concern for themselves to their peers, which probably leads to much higher values for marginal UK QALYs than for AMF beneficiary QALYs.

I think that for most EAs donating blood is still not worthwhile even according to (suitably quantitatively refined) common-sense morality. But for those who value their time at less than 20 pounds / hour and take the numbers in the OP seriously, I think that "common-sense" morality does strongly endorse donating blood. (Obviously this cutoff is based on my other quantitative views, which I'm not going to get into here).

(Note: I would not be surprised if the numbers in the post are wrong in one way or another, so don't really endorse taking any quantitative conclusions literally rather than as a prompt to investigate the issue more closely. That said, if you are able to investigate this question usefully I suspect you should be earning more than 20 pounds / hour.)

I'm very hesitant about EA's giving up on common-sense morality based on naive utilitarian calculations. In the first place, I don't think that most EA's moral reasoning is sufficiently sophisticated to outweigh simple heuristics like "when there are really big gains from trade, take them" (if society is willing to pay 240 pounds / hour for your time, and you value it at 16 pounds per hour, those are pretty big gains from trade). In the second place, even a naive utilitarian should be concerned that the rest of the world will be uncooperative with and unhappy with utilitarians if we are less altruistic than normal people in the ways that matter to our communities.

Comment by paul_christiano on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-11T16:04:41.197Z · score: 3 (3 votes) · EA · GW

On capability amplification:

MIRI's traditional goal would allow you to break cognition down into steps that we can describe explicitly and implement on transistors, things like "perform a step of logical deduction," "adjust the probability of this hypothesis," "do a step of backwards chaining," etc. This division does not need to be competitive, but it needs to be reasonably close (close enough to obtain a decisive advantage).

Capability amplification requires breaking cognition down into steps that humans can implement. This decomposition does not need to be competitive, but it needs to be efficient enough that it can be implemented during training. Humans can obviously implement more than transistors, the main difference is that in the agent foundations case you need to figure out every response in advance (but then can have a correspondingly greater reason to think that the decomposition will work / will preserve alignment).

I can talk in more detail about the reduction from (capability amplification --> agent foundations) if it's not clear whether it is possible and it would have an effect on your view.

On competitiveness:

I would prefer to be competitive with non-aligned AI, rather than count on forming a singleton, but this isn't really a requirement of my approach. When comparing the difficulty of two approaches you should presumably compare the difficulty of achieving a fixed goal with one approach or the other.

On reliability:

On the agent foundations side, it seems like plausible approaches involve figuring out how to peer inside the previously-opaque hypotheses, or understanding what characteristic of hypotheses can lead to catastrophic generalization failures and then excluding those from induction. Both of these seem likely applicable to ML models, though would depend on how exactly they play out.

On the ML side, I think the other promising approaches involve either adversarial training or ensembling / unanimous votes, which could be applied to the agent foundations problem.
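For concreteness, here is a toy sketch of the unanimous-vote flavor of ensembling (all names and stand-in models are hypothetical placeholders, not a description of any existing system): act only when every ensemble member agrees, and defer to a trusted fallback otherwise.

```python
# Toy sketch of "ensembling / unanimous votes" for reliability: only act when
# all ensemble members agree; otherwise defer to a trusted fallback (e.g. a
# human or a safe default). Purely illustrative, with made-up stand-in models.

from typing import Callable, Sequence

def unanimous_vote(models: Sequence[Callable[[str], str]],
                   fallback: Callable[[str], str]) -> Callable[[str], str]:
    def policy(x: str) -> str:
        answers = {m(x) for m in models}
        if len(answers) == 1:        # all members agree: act on the answer
            return answers.pop()
        return fallback(x)           # disagreement: escalate instead of acting
    return policy

# Usage with trivial stand-ins:
models = [lambda x: x.upper(), lambda x: x.upper(), lambda x: x.upper()]
escalate = lambda x: f"[deferred to human: {x}]"
policy = unanimous_vote(models, escalate)
print(policy("hello"))  # "HELLO" (unanimous)
```

The point of requiring unanimity rather than a majority is that a single dissenting member is enough to block a potentially catastrophic action.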

Comment by paul_christiano on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-10T17:37:42.458Z · score: 10 (10 votes) · EA · GW

I agree with this basic point, but I think on the other side there is a large gap in concreteness that makes it much easier to usefully criticize my approach (I'm at the stage of actually writing pseudocode and code, which we can critique).

So far I think that the problems in my approach will also appear for MIRI's approach. For example:

  • Solomonoff induction or logical inductors have reliability problems that are analogous to reliability problems for machine learning. So to carry out MIRI's agenda either you need to formulate induction differently, or you need to somehow solve these problems. (And as far as I can tell, the most promising approaches to this problem apply both to MIRI's version and the mainstream ML version.) I think Eliezer has long understood this problem and has alluded to it, but it hasn't been the topic of much discussion (I think largely because MIRI/Eliezer have so many other problems on their plates).
  • Capability amplification requires breaking cognitive work down into smaller steps. MIRI's approach also requires such a breakdown. Capability amplification is easier in a simple formal sense (if you solve the agent foundations problem you will definitely solve capability amplification, but not the other way around).
  • I've given some concrete definitions of deliberation/extrapolation, and there's been public argument about whether they really capture human values. I think CEV has avoided those criticisms not because it solves the problem, but because it is sufficiently vague that it's hard to criticize along these lines (and there are sufficiently many other problems that this one isn't even at the top of the list). If you want to actually give a satisfying definition of CEV, I feel you are probably going to have to go down the same path that started with this post. I suspect Eliezer has some ideas for how to avoid these problems, but at this point those ideas have been subject to even less public discussion than my approach.

I agree there are further problems in my agenda that will be turned up by further discussion. But I'm not sure there are fewer such problems than for the MIRI agenda, since I think that being closer to concreteness may more than outweigh the smaller amount of discussion.

If you agree that many of my problems also come up eventually for MIRI's agenda, that's good news about the general applicability of MIRI's research (e.g. the reliability problems for Solomonoff induction may provide a good bridge between MIRI's work and mainstream ML), but I think it would also be a good reason to focus on the difficulties that are common to both approaches rather than on problems like decision theory / self-reference / logical uncertainty / naturalistic agents / ontology identification / multi-level world models / etc.

Comment by paul_christiano on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-08T16:05:16.373Z · score: 8 (8 votes) · EA · GW

You might think that "learning to reason from humans" doesn't accomplish (2) because it makes the AI human-limited. If we want an advanced AI to help us create the kind of world that humans would want "if we knew more, thought faster, were more the people we wished we were" etc. then the approval of actual humans might, at some point, cease to be helpful.

A human can spend an hour on a task, and train an AI to do that task in milliseconds.

Similarly, an aligned AI can spend an hour on a task, and train its successor to do that task in milliseconds.

So you could hope to have a sequence of nice AIs, each significantly smarter than the last, eventually reaching the limits of technology while still reasoning in a way that humans would endorse if they knew more and thought faster.

(This is the kind of approach I've outlined and am working on, and I think that most work along the lines of "learn from human reasoning" will make a similar move.)
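For concreteness, here is a toy sketch of the loop described above; every name and detail (the trivial "training" included) is a made-up placeholder for illustration, not the actual scheme or code: a slow, trusted overseer (initially a human) produces careful answers, a fast agent is trained to imitate them, and the distilled agent then serves as the overseer for the next round.

```python
# Toy sketch of the "train a faster successor, then repeat" loop described
# above. Everything here (human, amplify, distill, the memorization-based
# "training") is a hypothetical placeholder, not an actual implementation.

def human(task):
    # Stand-in for an hour of careful human work on a task.
    return f"careful answer to {task!r}"

def amplify(overseer, task):
    # The slow, trusted process: the overseer spends a long time on the task,
    # e.g. by splitting it into pieces and combining the pieces' answers.
    pieces = [f"{task} (part {i})" for i in range(3)]
    return " + ".join(overseer(piece) for piece in pieces)

def distill(demonstrations):
    # The fast process: train an agent to reproduce the amplified overseer's
    # answers "in milliseconds". Real distillation would be ML that
    # generalizes; this toy memorizes and guesses on anything unseen.
    table = dict(demonstrations)
    return lambda task: table.get(task, f"best guess at {task!r}")

def iterated_training(tasks, generations=3):
    overseer = human
    for _ in range(generations):
        demos = [(task, amplify(overseer, task)) for task in tasks]
        overseer = distill(demos)  # the distilled agent oversees the next round
    return overseer

agent = iterated_training(["answer question A", "answer question B"])
print(agent("answer question A"))
```

The hope is that each distilled agent is both much faster than its overseer and, if the decomposition preserves alignment, still reasoning in a way the original human would endorse.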

Comment by Paul_Christiano on [deleted post] 2017-05-01T14:08:32.579Z

Scott links to this study, which is more convincing. They measure the difference between "physical mild (slap, spank)" and "physical harsh (use weapon, punch, kick)" punishment, with ~10% of children in the latter category. They consider children of twins to control for genetic confounders, and find something like a 0.2 SD effect on measures of behavioral problems at age 25. There is still confounding (e.g. households where parents beat their kids may be worse in other ways), and the effects are smaller and concern rarer forms of punishment, but it is getting somewhere.

Comment by Paul_Christiano on [deleted post] 2017-05-01T00:22:04.938Z

The reported correlations between physical punishment and life outcomes, which underlie the headline $3.6 trillion / year figure, seem unlikely to be causal. I only clicked on the first study, but it made very little effort to control for any of the obvious confounders. (The two relevant controls are mother's education and presence of the father.) The confounding is sufficiently obvious and large that the whole exercise seems kind of crazy. On top of that, as far as I can tell, a causal effect of this size would be inconsistent with adoption studies.

It would be natural to either start with the effect on kids' welfare, which seems pretty easy to think about, or else make a much more serious effort to actually figure out the long-term effects.

Comment by paul_christiano on Utopia In The Fog · 2017-03-28T22:53:56.236Z · score: 4 (4 votes) · EA · GW

If you drop the assumption that the agent will be all-powerful and far beyond human intelligence then a lot of AI safety work isn't very applicable anymore, while it increasingly needs to pay attention to multi-agent dynamics

I don't think this is true in very many interesting cases. Do you have examples of what you have in mind? (I might be pulling a no-true-scotsman here, and I could imagine responding to your examples with "well that research was silly anyway.")

Whether or not your system is rebuilding the universe, you want it to be doing what you want it to be doing. Which "multi-agent dynamics" do you think change the technical situation?

the claim isn't that evolution is intrinsically "against" any particular value, it's that it's extremely unlikely to optimize for any particular value, and the failure to do so nearly perfectly is catastrophic

If evolution isn't optimizing for anything, then you are left with the agents' optimization, which is precisely what we wanted. I thought you were telling a story about why a community of agents would fail to get what they collectively want. (For example, a failure to solve AI alignment is such a story, as is a situation where "anyone who wants to destroy the world has the option," as is the security dilemma, and so forth.)

Yes, or even implementable in current systems.

We are probably on the same page here. We should figure out how to build AI systems so that they do what we want, and we should start implementing those ideas ASAP (and they should be the kind of ideas for which that makes sense). When trying to figure out whether a system will "do what we want" we should imagine it operating in a world filled with massive numbers of interacting AI systems all built by people with different interests (much like the world is today, but more).

The point you are quoting is not about just any conflict, but the security dilemma and arms races. These do not significantly change with complete information about the consequences of conflict.

You're right.

Unsurprisingly, I have a similar view about the security dilemma (e.g. think about automated arms inspections and treaty enforcement; I don't think the effects of technological progress are at all symmetrical in general). But if someone has a proposed intervention to improve international relations, I'm all for evaluating it on its merits. So maybe we are in agreement here.

Comment by paul_christiano on Utopia In The Fog · 2017-03-28T16:34:18.160Z · score: 12 (12 votes) · EA · GW

It's great to see people thinking about these topics and I agree with many of the sentiments in this post. Now I'm going to write a long comment focusing on those aspects I disagree with. (I think I probably agree with more of this sentiment than most of the people working on alignment, and so I may be unusually happy to shrug off these criticisms.)

Contrasting "multi-agent outcomes" and "superintelligence" seems extremely strange. I think the default expectation is a world full of many superintelligent systems. I'm going to read your use of "superintelligence" as "the emergence of a singleton concurrently with the development of superintelligence."

I don't consider the "single superintelligence" scenario likely, but I don't think that has much effect on the importance of AI alignment research or on the validity of the standard arguments. I do think that the world will gradually move towards being increasingly well-coordinated (and so talking about the world as a single entity will become increasingly reasonable), but I think that we will probably build superintelligent systems long before that process runs its course.

The future looks broadly good in this scenario given approximately utilitarian values and the assumption that ems are conscious, with a large growing population of minds which are optimized for satisfaction and productivity, free of disease and sickness.

On total utilitarian values, the actual experiences of brain emulations (including whether they have any experiences) don't seem very important. What matters are the preferences according to which emulations shape future generations (which will be many orders of magnitude larger).

"freewheeling evolutionary developments, while continuing to produce complex and intelligent forms of organization, lead to the gradual elimination of all forms of being that we care about"

Evolution doesn't really select against what we value; it just selects for agents that want to acquire resources and are patient. This may cut away some of our selfish values, but mostly leaves unchanged our preferences about distant generations.

(Evolution might select for particular values, e.g. if it's impossible to reliably delegate or if it's very expensive to build systems with stable values. But (a) I'd bet against this, and (b) understanding this phenomenon is precisely the alignment problem!)

(I discuss several of these issues here, Carl discusses evolution here.)

Whatever the type of agent, arms races in future technologies would lead to opportunity costs in military expenditures and would interfere with the project of improving welfare. It seems likely that agents designed for security purposes would have preferences and characteristics which fail to optimize for the welfare of themselves and their neighbors. It’s also possible that an arms race would destabilize international systems and act as a catalyst for warfare.

It seems like you are paraphrasing a standard argument for working on AI alignment rather than arguing against it. If there weren't competitive pressure / selection pressure to adopt future AI systems, then alignment would be much less urgent since we could just take our time.

There may be other interventions that improve coordination/peace more broadly, or which improve coordination/peace in particular possible worlds etc., and those should be considered on their merits. It seems totally plausible that some of those projects will be more effective than work on alignment. I'm especially sympathetic to your first suggestion of addressing key questions about what will/could/should happen.

Not only is this a problem on its own, but I see no reason to think that the conditions described above wouldn’t apply for scenarios where AI agents turned out to be the primary actors and decisionmakers rather than transhumans or posthumans.

Over time it seems likely that society will improve our ability to make and enforce deals, to arrive at consensus about the likely consequences of conflict, to understand each other's situations, or to understand what we would believe if we viewed others' private information.

More generally, we would like to avoid destructive conflict and are continuously developing new tools for getting what we want / becoming smarter and better-informed / etc.

And on top of all that, the historical trend seems to basically point to lower and lower levels of violent conflict, though this is in a race with greater and greater technological capacity to destroy stuff.

I would be more than happy to bet that the intensity of conflict declines over the long run. I think the question is just how much we should prioritize pushing it down in the short run.

“the only way to avoid having all human values gradually ground down by optimization-competition is to install a Gardener over the entire universe who optimizes for human values.”

I disagree with this. See my earlier claim that evolution only favors patience.

I do agree that some kinds of coordination problems need to be solved; for example, we must avoid blowing up the world. These are similar in kind to the coordination problems we confront today, though they will continue to get harder and we will have to solve them better over time: we can't have a cold war each century with increasingly powerful technology.

There is still value in AI safety work... but there are other parts of the picture which need to be explored

This conclusion seems safe, but it would be safe even if you thought that early AI systems will precipitate a singleton (since one still cares a great deal about the dynamics of that transition).

Better systems of machine ethics which don’t require superintelligence to be implemented (as coherent extrapolated volition does)

By "don't require superintelligence to be implemented," do you mean systems of machine ethics that will work even while machines are broadly human level? That will work even if we need to solve alignment prior long before the emergence of a singleton? I'd endorse both of those desiderata.

I think the main difference in alignment work for unipolar vs. multipolar scenarios is how high we draw the bar for "aligned AI," and in particular how closely competitive it must be with unaligned AI. I probably agree with your implicit claim, that they either must be closely competitive or we need new institutional arrangements to avoid trouble.

Rather than having a singleminded focus on averting a particular failure mode

I think the mandate of AI alignment easily covers the failure modes you have in mind here. I think most of the disagreement is about what kinds of considerations will shape the values of future civilizations.

both working on arguments that agents will be linked via a teleological thread where they accurately represent the value functions of their ancestors

At this level of abstraction I don't see how this differs from alignment. I suspect the details differ a lot, in that the alignment community is very focused on the engineering problem of actually building systems that faithfully pursue particular values (and in general I've found that terms like "teleological thread" tend to be linked with persistently low levels of precision).