Posts

Altruistic equity allocation 2019-10-16T05:54:49.426Z · score: 76 (28 votes)
Ought: why it matters and ways to help 2019-07-26T01:56:34.037Z · score: 52 (24 votes)
Donor lottery details 2017-01-11T00:52:21.116Z · score: 21 (21 votes)
Integrity for consequentialists 2016-11-14T20:56:27.585Z · score: 32 (33 votes)
What is up with carbon dioxide and cognition? An offer 2016-04-06T01:18:03.612Z · score: 10 (12 votes)
Final Round of the Impact Purchase 2015-12-16T20:28:45.709Z · score: 4 (6 votes)
Impact purchase round 3 2015-06-16T17:16:12.858Z · score: 3 (3 votes)
Impact purchase: changes and round 2 2015-04-20T20:52:29.894Z · score: 3 (3 votes)
$10k of Experimental EA Funding 2015-02-25T19:54:29.881Z · score: 19 (19 votes)
Economic altruism 2014-12-05T00:51:44.715Z · score: 5 (7 votes)
Certificates of impact 2014-11-11T05:22:42.438Z · score: 21 (14 votes)
On Progress and Prosperity 2014-10-15T07:03:21.055Z · score: 30 (30 votes)
The best reason to give later 2013-06-14T04:00:31.000Z · score: 0 (0 votes)
Giving now vs. later 2013-03-12T04:00:04.000Z · score: 0 (0 votes)
Risk aversion and investment (for altruists) 2013-02-28T05:00:34.000Z · score: 3 (3 votes)
Why might the future be good? 2013-02-27T05:00:49.000Z · score: 1 (1 votes)
Replaceability 2013-01-22T05:00:52.000Z · score: 0 (0 votes)

Comments

Comment by paul_christiano on How worried should I be about a childless Disneyland? · 2019-10-31T20:44:45.070Z · score: 6 (3 votes) · EA · GW

My main point was that in any case what matters is the degree of alignment of the AI systems, not their consciousness. But I agree with what you are saying.

If our plan for building AI depends on having clarity about our values, then it's important to achieve such clarity before we build AI---whether that's clarity about consciousness, population ethics, what kinds of experience are actually good, how to handle infinities, weird simulation stuff, or whatever else.

I agree consciousness is a big ? in our axiology, though it's not clear if the value you'd lose from saying "only create creatures physiologically identical to humans" is large compared to all the other value we are losing from the other kinds of uncertainty.

I tend to think that in such worlds we are in very deep trouble anyway and won't realize a meaningful amount of value regardless of how well we understand consciousness. So while I may care about them a bit from the perspective of parochial values (like "is Paul happy?") I don't care about them much from the perspective of impartial moral concerns (which is the main perspective where I care about clarifying concepts like consciousness).

Comment by paul_christiano on How worried should I be about a childless Disneyland? · 2019-10-30T16:35:57.465Z · score: 14 (11 votes) · EA · GW

I don't think it matters that much (for the long-term) if the AI systems we build in the next century are conscious. What matters is how they think about what possible futures they can bring about.

If AI systems are aligned with us but turn out not to be conscious, or not very conscious, then they would continue this project of figuring out what is morally valuable and so bring about a world we'd regard as good (even though it likely contains very few minds that resemble either us or them).

If AI systems are conscious but not at all aligned with us, then why think that they would create conscious and flourishing successors?

So my view is that alignment is the main AI issue here (and reflecting well is the big non-AI issue), with questions about consciousness being in the giant bag of complex questions we should try to punt to tomorrow.

Comment by paul_christiano on Conditional interests, asymmetries and EA priorities · 2019-10-22T16:29:04.735Z · score: 8 (4 votes) · EA · GW
Only Actual Interests: Interests provide reasons for their further satisfaction, but neither an interest nor its satisfaction provides reasons for the existence of that interest over its nonexistence.
It follows from this that a mind with no interests at all is no worse than a mind with interests, regardless of how satisfied its interests might have been. In particular, a joyless mind with no interest in joy is no worse than one with joy. A mind with no interests isn't much of a mind at all, so I would say that this effectively means it's no worse for the mind to not exist.

If you make this argument that "it's no worse for the joyful mind to not exist," you can make an exactly symmetrical argument that "it's not better for the suffering mind to not exist." If there was a suffering mind they'd have an interest in not existing, and if there was a joyful mind they'd have an interest in existing.

In either case, if there is no mind then we have no reason to care about whether the mind exists, and if there is a mind then we have a reason to act---in one case we prefer the mind exist, and in the other case we prefer the mind not exist.

To carry your argument you need an extra principle along the lines of "the existence of unfulfilled interests is bad." Of course that's what's doing all the work of the asymmetry---if unfulfilled interests are bad and fulfilled interests are not good, then existence is bad. But this has nothing to do with actual interests, it's coming from very explicitly setting the zero point at the maximally fulfilled interest.

Comment by paul_christiano on Conditional interests, asymmetries and EA priorities · 2019-10-22T16:20:26.216Z · score: 4 (2 votes) · EA · GW
A question here is whether "interests to not suffer" are analogous to "interests in experiencing joy." I believe that Michael's point is that, while we cannot imagine suffering without some kind of interest to have it stop (at least in the moment itself), we can imagine a mind that does not care for further joy.

I don't think that's the relevant analogy though. We should be comparing "Can we imagine suffering without an interest in not having suffered?" to "Can we imagine joy without an interest in having experienced joy?"

Let's say I see a cute squirrel and it makes me happy. Is it bad that I'm not in virtual reality experiencing the greatest joys imaginable?

I can imagine saying "no" here, but if I do then I'd also say it's not good that you are not in a virtual reality experiencing great suffering. If you were in a virtual reality experiencing great joy it would be against your interests to prevent that joy, and if you were in a virtual reality experiencing great suffering it would be in your interests to prevent that suffering.

You could say: the actually existing person has an interest in preventing future suffering, while they may have no interest in experiencing future joy. But now the asymmetry is just coming from the actual person's current interests in joy and suffering, so we didn't need to bring in all of this other machinery, we can just directly appeal to the claimed asymmetry in interests.

Comment by paul_christiano on Conditional interests, asymmetries and EA priorities · 2019-10-22T03:59:15.498Z · score: 12 (7 votes) · EA · GW
suffering by its very definition implies an interest in its absence, so there is a reason to prevent it.

If a mind exists and suffers, we'd think it better had it not existed (by virtue of its interest in not suffering). And if a mind exists and experiences joy, we'd think it worse had it not existed (by virtue of its interest in experiencing joy). Prima facie this seems exactly symmetrical, at least as far as the principles laid out here are concerned.

Depending on exactly how you make your view precise, I'd think that we'd either end up not caring at all about whether new minds exist (since if they didn't exist there'd be no relevant interests), or balancing the strength of those interests in some way to end up with a "zero" point where we are indifferent (since minds come with interests in both directions concerning their own existence). I don't yet see how you end up with the asymmetric view here.

Comment by paul_christiano on Altruistic equity allocation · 2019-10-17T15:28:33.017Z · score: 3 (2 votes) · EA · GW
would there be a specific metric (e.g. estimated QALYs saved) or would donors construct individual conversion rates (at least implicitly) based on their evaluations of how effective charities are likely to be over their lifetimes?

It would come down to donor predictions, and different donors will generally have quite different predictions (similar to for-profit investing). I agree there is a further difference where donors will also value different outputs differently.

One other advantage of not quantizing the individual contributions of employees is that they can sum up to more than 100% - all twenty employees of an organisation may each believe that they are responsible for at least 10% of its success, which is mathematically inconsistent but may be a useful fiction (and in some sense it could be true - there may be threshold effects such that if any individual employee left the impact of the organisation would actually be 10% worse) - if impact equity is explicitly parceled out, everyone's fractions will sum to 1.

I mostly consider this an advantage of quantifying :)

(I also think that impacts should sum to 1, not >1---in the sense that a project is worthwhile iff there is a way of allocating its impact that makes everyone happy, modulo the issue where you may need to separate impact into tranches for unaligned employees who value different parts of that impact.)

However, it might also lead to discontent if employees don't consider the impact equity allocations to be fair (whether between different employees, between employees and founders, or between employees and investors).

This seems like a real downside.

Comment by paul_christiano on The Future of Earning to Give · 2019-10-14T15:42:37.837Z · score: 33 (9 votes) · EA · GW
Of course, you could enter a donor lottery and, if you win, just give it all to an EA fund without doing any research yourself. I don't know if this would be better or worse than just donating directly to the EA funds.

It seems to me like this is unlikely to be worse. Is there some mechanism you have in mind? Risk-aversion for the EA fund? (Quantitatively that seems like it should matter very little at the scale of $100,000.)

At a minimum, it seems like the EA funds are healthier if their accountability is to a smaller number of larger donors who are better able to think about what they are doing.

In terms of upside from getting to think longer, I don't think it's at all obvious that most donors would decide on EA funds (or on whichever particular EA fund they initially lean towards). And as a norm, I think it's easy for EAs to argue that donor lotteries are an improvement over what most non-EA donors do, while the argument for EA funds comes down a lot to personal trust.

I don't think the argument for economies of scale really applies here, since the grantmakers are already working full-time on research in the areas they're making grants for.

I don't think all of the funds have grantmakers working full-time on having better views about grantmaking. That said, you can't work full-time if you win a $100,000 lottery either. I agree you are likely to come down to deciding whose advice to trust and doing meta-level reasoning.

Comment by paul_christiano on Are we living at the most influential time in history? · 2019-09-15T22:46:33.132Z · score: 43 (19 votes) · EA · GW

I think the outside view argument for acceleration deserves more weight. Namely:

  • Many measures of "output" track each other reasonably closely: how much energy we can harness, how many people we can feed, GDP in modern times, etc.
  • Output has grown 7-8 orders of magnitude over human history.
  • The rate of growth has itself accelerated by 3-4 orders of magnitude. (And even early human populations would have seemed to grow very fast to an observer watching the prior billion years of life.)
  • It's pretty likely that growth will accelerate by another order of magnitude at some point, given that it's happened 3-4 times before and faster growth seems possible.
  • If growth accelerated by another order of magnitude, a hundred years would be enough time for 9 orders of magnitude of growth (more than has occurred in all of human history); see the rough check after this list.
  • Periods of time with more growth seem to have more economic or technological milestones, even if they span less calendar time.
  • Heuristics like "the next X years are very short relative to history, so probably not much will happen" seem to have a very bad historical track record when X is enough time for lots of growth to occur, and so it seems like a mistake to call them the "outside view."
  • If we go a century without growth rates doubling, it will be (by far) the most that output has ever grown without significant acceleration.
  • Data is noisy and data modeling is hard, but it is difficult to construct a model of historical growth that doesn't have a significant probability of massive growth within a century.
  • I think the models that are most conservative about future growth are those where stable growth is punctuated by rapid acceleration during "revolutions" (with the agricultural acceleration around 10,000 years ago and the industrial revolution causing continuous acceleration from 1600-1900).
  • On that model human history has had two revolutions, with about two orders of magnitude of growth between them, each of which led to >10x speedup of growth. It seems like we should have a significant probability (certainly >10%) of another revolution occurring within the next order of magnitude of growth, i.e. within the next century.
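
The rough check referenced in the list above (a minimal sketch; the 2-3%/yr baseline growth rate is my illustrative assumption, not a figure from the comment):

```python
# Rough check: what does "growth accelerates by another order of magnitude"
# imply over a century? Baseline rates below are assumptions for illustration.
import math

for baseline in (0.02, 0.03):           # assumed current annual growth rates
    accelerated = 10 * baseline         # one order of magnitude faster
    factor = (1 + accelerated) ** 100   # compounded over a hundred years
    print(f"{baseline:.0%}/yr -> {accelerated:.0%}/yr: "
          f"{math.log10(factor):.1f} orders of magnitude in 100 years")

# 2%/yr -> 20%/yr: ~7.9 orders of magnitude
# 3%/yr -> 30%/yr: ~11.4 orders of magnitude
```

Either baseline lands in the neighborhood of the 9 orders of magnitude cited above.
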
Comment by paul_christiano on Ought: why it matters and ways to help · 2019-07-29T16:35:01.505Z · score: 10 (6 votes) · EA · GW

In-house.

Comment by paul_christiano on Age-Weighted Voting · 2019-07-15T15:45:14.972Z · score: 4 (2 votes) · EA · GW
I suspect many people responding to surveys about events which happened 10-30 years ago would be doing so with the aim of influencing the betting markets which affect near future policy.

It would be good to focus on questions for which that's not so bad, because our goal is to measure some kind of general sentiment in the future---if in the future people feel like "we should now do more/less of X" then that's pretty correlated with feeling like we did too little in the past (obviously not perfectly---we may have done too little 30 years ago but overcorrected 10 years ago---but if you are betting about public opinion in the US I don't think you should ever be thinking about that kind of distinction).

E.g. I think this would be OK for:

  • Did we do too much or too little about climate change?
  • Did we have too much or too little immigration of various kinds?
  • Were we too favorable or too unfavorable to unions?
  • Were taxes too high or too low?
  • Is compensating organ donors at market rates a good idea?

And so forth.

Comment by paul_christiano on Age-Weighted Voting · 2019-07-12T16:37:38.710Z · score: 71 (28 votes) · EA · GW

I like the goal of politically empowering future people. Here's another policy with the same goal:

  • Run periodic surveys with retrospective evaluations of policy. For example, each year I can pick some policy decisions from {10, 20, 30} years ago and ask "Was this policy a mistake?", "Did we do too much, or too little?", and so on.
  • Subsidize liquid prediction markets about the results of these surveys in all future years. For example, we can bet about people in 2045's answers to "Did we do too much or too little about climate change in 2015-2025?"
  • We will get to see market odds on what people in 10, 20, or 30 years will say about our current policy decisions. For example, people arguing against a policy can cite facts like "The market expects that in 20 years we will consider this policy to have been a mistake."

This seems particularly politically feasible; a philanthropist can unilaterally set this up for a few million dollars of surveys and prediction market subsidies. You could start by running this kind of poll a few times; then opening a prediction market on next year's poll about policy decisions from a few decades ago; then lengthening the time horizon.

(I'd personally expect this to have a larger impact on future-orientation of policy, if we imagine it getting a fraction of the public buy-in that would be required for changing voting weights.)

Comment by paul_christiano on Age-Weighted Voting · 2019-07-12T16:16:14.019Z · score: 31 (13 votes) · EA · GW
It would mitigate intertemporal inconsistency

If different generations have different views, then it seems like we'll have the same inconsistency when we shift power from one generation to the next, regardless of when we do it. Under your proposal the change happens when the next generation turns 18-37, but the inconsistency doesn't seem to be lessened. For example, the Brexit inconsistency would have been between 20 years ago and today rather than between today and 20 years from now, but it would have been just as large.

In fact I'd expect age-weighting to have more temporal inconsistency overall: in the status quo you average out idiosyncratic variation over multiple generations and swap out 1/3 of people every 20 years, while in your proposal you concentrate most power in a single generation which you completely change every 20 years.

Age and wisdom: [...] As a counterargument, crystallised intelligence increases with age and, though fluid intelligence decreases with age, it seems to me that crystallised intelligence is more important than fluid intelligence for informed voting. 

Another counterargument: older people have also seen firsthand the long-run consequences of one generation's policies and have more time to update about what sources of evidence are reliable. It's not clear to me whether this is a larger or smaller impact than "expect to live through the consequences of policies." I think folk wisdom often involves deference to elders specifically on questions about long-term consequences.

(I personally think that I'm better at picking policies at 30 than 20, and expect to be better still at 40.)

Comment by paul_christiano on Confused about AI research as a means of addressing AI risk · 2019-03-17T00:26:18.096Z · score: 6 (3 votes) · EA · GW

Consumers care somewhat about safe cars, and if safety is mostly an externality then legislators may be willing to regulate it, and there are only so many developers and if the moral case is clear enough and the costs low enough then the leaders might all make that investment.

At the other extreme, if you have no idea how to build a safe car, then there is no way that anyone is going to use a safe car no matter how much people care. Success is a combination of making safety easy and getting people to care / regulating / etc.

Here is the post I wrote about this.

If you have "competitive" solutions, then the required social coordination may be fairly mild. As a stylized example, if the leaders in the field are willing to invest in safety, then you could imagine surviving a degree of non-competitiveness in line with the size of their lead (though the situation is a bit messier than that).

Comment by paul_christiano on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-31T02:12:50.310Z · score: 16 (5 votes) · EA · GW
The current price of these companies is already determined by cutthroat competition between hyper-informed investors. If Warren Buffett or Goldman Sachs thinks the market is undervaluing these AI companies, then they'll spend billions bidding up the stock price until they're no longer undervalued.

That sounds like a nice world, but unfortunately I don't think that the market is quite that efficient. (Like the parent, I'm not going to offer any evidence, just express my view.)

You could reply, "then why ain'cha rich?" but it doesn't really work quantitatively for mispricings that would take 10+ years to correct. You could instead ask "then why ain'cha several times richer than you otherwise would be?" but lots of people are in fact several times richer than they otherwise would be after a lifetime of investment. It's not anything mind-blowing or even obvious to an external observer.

"Don't try to beat the market" still seems like a good heuristic, I just think this level of confidence in the financial system is misplaced and "hyper-informed" in particular is really overstating it. (As is "incredibly high prior" elsewhere.)

(ETA: I also agree that if you think you have a special insight about AI, there are likely to be better things to do with it.)

Comment by paul_christiano on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-31T02:05:04.328Z · score: 7 (2 votes) · EA · GW

The same neglect that potentially makes AI investments a good deal can also make AI philanthropy a better deal. If there is a huge AI boom, a prescient investment in AI companies might leave you with a larger share of the world economy---but you'll probably still be a much smaller share of total dollars directed at influencing AI.

That said, I do think this is a reasonable default thing to do with dollars if you are interested in the long term but unimpressed with the current menu of long-termist philanthropy (or expect to be better-informed in the future).

Comment by paul_christiano on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-25T18:20:31.614Z · score: 4 (3 votes) · EA · GW

Trusting random.org doesn't seem so bad (probably a bit better than trusting IRIS, since IRIS isn't in the business of claiming to be non-manipulable). I don't know if they support arbitrary winning probabilities for draws, but probably there is some way to make it work.

(That does seem strictly worse than hashing powerball numbers though, which seem more trustworthy than random.org and easier to get.)

Comment by paul_christiano on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-25T18:01:53.688Z · score: 2 (1 votes) · EA · GW

I'm not sure what the myriad of more responsible ways are. If you trust CEA to not mess with the lottery more than you trust IRIS not to change their earthquake reports to mess with the lottery, then just having CEA pick numbers out of a hat could be better.

It definitely seems like free-riding on some other public lottery drawing that people already trust might be better.

Comment by paul_christiano on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-25T17:54:59.160Z · score: 3 (2 votes) · EA · GW

There is plenty of entropy in the API responses, that's not the worst concern.

I think the most serious question is whether a participant can influence the lottery draw (e.g. by getting IRIS to change low order digits of the reported latitude or longitude).

Comment by paul_christiano on How to improve EA Funds · 2018-04-14T01:39:28.025Z · score: 4 (4 votes) · EA · GW

In general I feel like donor lotteries should be preferred as a default over small donations to EA funds (winners can ultimately donate to EA funds if they decide that's the best option).

What are the best arguments in favor of EA funds as a recommendation over lotteries? Looking more normal?

(Currently there are no active lotteries, this is not a recommendation for short-term donations.)

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-06T20:23:52.817Z · score: 1 (1 votes) · EA · GW

This standard of betterness is all you need to conclude: "every inefficient outcome is worse than some efficient outcome."

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-06T20:21:44.898Z · score: 2 (2 votes) · EA · GW

If they endorsed the view you say they do with respect to scalping, wouldn't they say "provided there was perfectly equitable distribution of incomes, scalping ensures that goods go to those who value them most". Missing out the first bit gives an extremely misleading impression of their view, doesn't it?

When economists say "how much do you value X" they are usually using the dictionary definition of value as "estimate the monetary worth." Economists understand that valuing something involves an implicit denominator and "who values most" will depend on the choice of denominator. You get approximately the same ordering for any denominator which can be easily transferred between people, and when they say "A values X more than B" they mean in that common ordering. Economists understand that that sense of value isn't synonymous with moral value (which can't be easily transferred between people).

The reason that easily transferable goods serve as a good denominator is that at the optimal outcome they should exactly track whatever the planner cares about (otherwise we could transfer them).

Expressing economists' actual view would take several additional sentences. The quote seems like a reasonable concise simplification.

Your version isn't true: an equitable distribution of incomes doesn't imply that everyone has roughly the same utility per marginal dollar. A closer formulation would be "Supposing that the policy-maker is roughly indifferent between giving a dollar to each person [e.g. as would be the case if the policy-maker has adopted roughly optimal policies in other domains, since dollars can be easily transferred between people] then scalping will ensure that the ticket goes to the person who the policy-maker would most prefer have it."

Immediately before your quote from Mankiw's book, he says "Equity involves normative judgments that go beyond the realm of economics and enter into the realm of political philosophy. We concentrate on efficiency as the social planner's goal. Keep in mind, however, that real policy-makers often care about equity as well." I agree the discussion is offensively simplified because it's a 101 textbook, but don't think this is evidence of fundamental confusion. If we read "equity" as "has the same marginal utility from a dollar" then this seems pretty in line with the utilitarian position.

Comment by Paul_Christiano on [deleted post] 2018-01-05T09:58:00.100Z

It's on my blog. I don't think the scheme works, and in general it seems any scheme introduces incentives to not look like a beneficiary. If I were to do this now, I would just run a prediction market on the total # of donations, have the match success level go from 50% to 100% over the spread, and use a small fraction of proceeds to place N buy and sell orders against the final book.

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-03T18:11:59.212Z · score: 3 (3 votes) · EA · GW

Economists who accept your crucial premise would necessarily think that there should be no redistribution at all, since the net effect of redistribution is to move goods from people who were originally willing to pay more to people who were originally willing to pay less. But "redistribution is always morally bad" is an extreme outlier view amongst economists.

See for example the IGM poll on the minimum wage, where there is significant support for small increases to the minimum wage despite acknowledgment of the allocative inefficiency. The question most economists ask is "is this an efficient way to redistribute wealth? do the benefits justify the costs?" They don't consider the case settled because it decreases allocative efficiency (as it obviously does).

I don't think it would be that hard to find lots of examples of economists defending particular policies on the basis that those willing to pay more should get the good.

People can make that argument as part of a broader principle like "we should give goods to people who are willing to pay most, and redistribute money in the most efficient way we can."

For example, I also often argue that the people willing to pay more should get the good. But I don't accept your crucial premise even a tiny bit. The same is true of the handful of economists I've taken a class from or interacted with at length, and so I'd guess it's the most common view.

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-03T18:04:02.967Z · score: 5 (5 votes) · EA · GW

Obviously what is optimal does depend on what we can compel the producer to do; if we can collect taxes, that will obviously be better. If we can compel the producer to suffer small costs to make the world better, there are better things to compel them to do. If we can create an environment in which certain behaviors are more expensive for the producer because they are socially unacceptable, there are better things to deem unacceptable. And so on.

More broadly, as a society we want to pick the most efficient ways to redistribute wealth, and as altruists we'd like to use our policy influence in the most efficient ways to redistribute wealth. Forcing the tickets to sell below market value is an incredibly inefficient way to redistribute wealth. So it can be a good idea in worlds where there are almost no options, but seems very unlikely to be a good idea in practice.

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-03T09:24:04.046Z · score: 2 (2 votes) · EA · GW

In actual fact, they are appealing to preference utilitarianism. This is a moral theory.

Economists are quite often appealing to a much simpler account of betterness: if everyone prefers option A to option B, then option A is better than option B.

Comment by paul_christiano on Economics, prioritisation, and pro-rich bias   · 2018-01-03T09:13:52.706Z · score: 6 (6 votes) · EA · GW

Here is a stronger version of the pro-market-price argument:

  • The producer could sell a ticket for $1000 to Rich and then give $950 to Pete. This leaves both Rich and Pete better off, often very substantially.
  • In reality, Pete is not an optimal target for philanthropy, and so the producer could do even better by selling the ticket for $1000 to Rich and then giving to their preferred charity.
  • No matter what the producer wants, they can do better by selling the ticket at market price. And no matter what we want as advocates for a policy, we can do better by allowing them to. (In fact the world is complicated and it's not this clean, but that seems orthogonal to your objection.)

This is still not the strongest argument that can be made, but it's better than the argument from your crucial premise. I think there are few serious economists who accept your crucial premise in the way you mean it, though many might use it as a definition of welfare (but wouldn't consider total welfare synonymous with moral good).

Comment by paul_christiano on Announcing the 2017 donor lottery · 2017-12-22T04:58:47.521Z · score: 2 (2 votes) · EA · GW

What are the biggest upsides of transparency?

The actual value of the information produced seems modest.

Comment by paul_christiano on Announcing the 2017 donor lottery · 2017-12-18T06:41:14.972Z · score: 0 (0 votes) · EA · GW

You have diminishing returns to money, i.e. your utility vs. money curve is curved down. So a gamble with mean 0 has some cost to you, approximately (curvature) * (variance), that I was referring to as the cost-via-risk. This cost is approximately linear in the variance, and hence quadratic in the block size.
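
Here is a minimal numerical sketch of that approximation. The per-donor contribution, the curvature constant, and the choice to sum the cost over identical donors filling the pot are all illustrative assumptions of mine (one way to recover the quadratic scaling), not details from the thread:

```python
# Cost-via-risk ~= (curvature) * (variance), summed over identical donors.
# All concrete numbers are illustrative assumptions.

def cost_via_risk(block_size, donation, curvature=1e-9):
    """Approximate utility cost of the gamble for one donor.

    A donor giving `donation` controls the whole `block_size` pot with
    probability donation/block_size and nothing otherwise, so the variance
    of the amount they end up granting is d*B - d^2.
    """
    variance = donation * block_size - donation ** 2
    return curvature * variance

for block in (100_000, 200_000):
    donation = 10_000                  # assumed per-donor contribution
    n_donors = block // donation       # identical donors filling the pot
    total = n_donors * cost_via_risk(block, donation)
    print(f"${block:,} pot: total cost-via-risk ~ {total:.1f} (arbitrary units)")

# The total scales roughly with block_size**2 (and the ratio between pot sizes
# does not depend on the curvature constant), so the $200k pot comes out
# about 4x as costly as the $100k pot.
```
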

Comment by paul_christiano on Announcing the 2017 donor lottery · 2017-12-17T19:21:15.000Z · score: 6 (8 votes) · EA · GW

A $200k lottery has about 4x as much cost-via-risk as a $100k lottery. Realistically I think that smaller sizes (with the option to lottery up further) are significantly better than bigger pots. As the pot gets bigger you need to do more and more thinking to verify that the risk isn't an issue.

If you were OK with variable pot sizes, I think the thing to do would be:

  • The lottery will be divided up into blocks.
  • Each block will have the same size, which will be something between $75k and $150k.
  • We provide a backstop only if the total donation is < $75k. Otherwise, we just divide the total up into chunks between $75k and $150k, aiming for about $100k each (see the sketch below).
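
A minimal sketch of that splitting rule; the exact rounding strategy is my assumption about how "aiming for about $100k" might be implemented, not something specified above:

```python
# Split a variable-size pot into equal blocks of $75k-$150k, aiming for ~$100k
# each; below $75k a guarantor backstops the pot up to the minimum block size.

def split_pot(total, target=100_000, lo=75_000, hi=150_000):
    if total < lo:
        return [lo]                      # backstop case
    n_blocks = max(1, round(total / target))
    block = total / n_blocks
    while block > hi:                    # blocks too big: split further
        n_blocks += 1
        block = total / n_blocks
    while block < lo and n_blocks > 1:   # blocks too small: merge
        n_blocks -= 1
        block = total / n_blocks
    return [block] * n_blocks

for total in (50_000, 130_000, 260_000, 520_000):
    print(f"${total:,}:", [round(b) for b in split_pot(total)])
```
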
Comment by paul_christiano on Effective Altruism Grants project update · 2017-10-01T16:33:18.144Z · score: 2 (2 votes) · EA · GW

However, I suspect that this intuition was biased (upward), because I more often think in terms of "non-EA money". In non-EA money, CEA time would have a much higher nominal value. But if you think EA money can be used to buy good outcomes very cost-effectively (even at the margin) then $75 could make sense.

Normally people discuss the value of time by figuring out how many dollars they'd spend to save an hour. It's kind of unusual to ask how many dollars you'd have someone else spend so that you save an hour.

Comment by paul_christiano on Capitalism and Selfishness · 2017-09-16T03:17:54.292Z · score: 3 (3 votes) · EA · GW

Finally, capitalism requires a sufficiently self-interested culture such that it can sustain compounding capital accumulation through the sale of ever-greater commodities.

This is a common claim, but seems completely wrong. An economy of perfectly patient agents will accumulate capital much faster than a community that consumes 50% of its output. The patient agents will invest in infrastructure and technology and machines and so on to increase their future wealth.

The capitalists have to maximise productivity through technological innovation, wage repression, and so forth, or they are run into the ground and bankrupted by market competition

In an efficient market, the capitalists earn rents on their capital whatever they do.

Comment by paul_christiano on Nothing Wrong With AI Weapons · 2017-08-29T17:16:46.418Z · score: 9 (7 votes) · EA · GW

That sounds a lot more expensive than bullets. You can already kill someone for a quarter.

The main cost of killing someone with a bullet is labor. The point is that autonomous weapons reduce the labor required.

alter the balance of power between different types of groups in a specific way.

New technologies do often decrease the cost of killing people and increase the number of civilians who can be killed by a group of fixed size (see: guns, explosives, nuclear weapons).

Comment by paul_christiano on Nothing Wrong With AI Weapons · 2017-08-29T04:50:34.939Z · score: 10 (8 votes) · EA · GW

The two arguments I most often hear are:

  • Cheap autonomous weapons could greatly decrease the cost of ending life---within a decade they could easily be the cheapest form of terrorism by far, and may eventually be the cheapest form of mass destruction in general. Think insect-sized drones carrying toxins or explosive charges that are lethal if detonated inside the skull.

  • The greater the military significance of AI, the more difficult it becomes for states to share information and coordinate regarding its development. This might be bad news for safety.

Comment by paul_christiano on Blood Donation: (Generally) Not That Effective on the Margin · 2017-08-06T18:38:41.910Z · score: 7 (7 votes) · EA · GW

This seems to confuse costs and benefits, I don't understand the analysis. (ETA: the guesstimate makes more sense.)

I'm going to assume that a unit of blood is the amount that a single donor gives in a single session. (ETA: apparently a donation is 0.5 units of red blood cells. The analysis below is correct only if red blood cells are 50% of the value of a donation. I have no idea what the real ratio is. If red blood cells are most of the value, adjust all the values downwards by a factor of 2.)

The cost of donating a unit is perhaps 30 minutes (YMMV), and has nothing to do with 120 pounds. (The cost from having less blood for a while might easily dwarf the time cost, I'm not sure. When I've donated the time cost was significantly below 30 minutes.)

Under the efficient-NHS hypothesis, the value of marginal blood to the healthcare system is 120 pounds. We can convert this to QALYs using the marginal rate of (20,000 pounds / QALY), to get 0.6% of a QALY.

If you value all QALYs equally and think that marginal AMF donations buy them at 130 pounds / QALY, then your value for QALYs should be at most 130 pounds / QALY (otherwise you should just donate more). It should be exactly 130 pounds / QALY if you are an AMF donor (otherwise you should just donate less).

So 0.6% of a QALY should be worth about 0.8 pounds. If it takes 30 minutes to produce a unit of blood which is worth 0.6% of a QALY, then it should be producing value at 1.6 pounds / hour.

If the healthcare system was undervaluing blood by one order of magnitude, this would be 16 pounds / hour. So I think "would have to be undervaluing the effectiveness of blood donations by 2 orders of magnitude" is off by about an order of magnitude.
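
For readers who want to follow the arithmetic, here is a minimal sketch reproducing the numbers above (all figures are the ones used in this comment):

```python
# Back-of-the-envelope value of one blood donation, using the numbers above.
blood_unit_value = 120         # pounds the NHS pays per unit (efficient-NHS assumption)
nhs_rate = 20_000              # pounds per QALY at the NHS margin
amf_rate = 130                 # pounds per QALY via marginal AMF donations
donation_time_hours = 0.5      # ~30 minutes per donation

qalys_per_unit = blood_unit_value / nhs_rate           # 0.006 QALY, i.e. 0.6%
value_per_unit = qalys_per_unit * amf_rate             # ~0.78 pounds
value_per_hour = value_per_unit / donation_time_hours  # ~1.6 pounds / hour

print(f"{qalys_per_unit:.1%} of a QALY, ~{value_per_unit:.2f} pounds per unit, "
      f"~{value_per_hour:.1f} pounds / hour")
print(f"if blood is undervalued 10x: ~{10 * value_per_hour:.0f} pounds / hour")
```
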

The reason this seems so inefficient has little to do with EA's quantitative mindset, and everything to do with the utilitarian perspective that all QALYs are equal. The revealed preferences of most EAs imply that they value their QALYs much more highly than those of AMF beneficiaries. Conventional morality suggests that people extend some of their concern for themselves to their peers, which probably leads to much higher values for marginal UK QALYs than for AMF beneficiary QALYs.

I think that for most EAs donating blood is still not worthwhile even according to (suitably quantitatively refined) common-sense morality. But for those who value their time at less than 20 pounds / hour and take the numbers in the OP seriously, I think that "common-sense" morality does strongly endorse donating blood. (Obviously this cutoff is based on my other quantitative views, which I'm not going to get into here).

(Note: I would not be surprised if the numbers in the post are wrong in one way or another, so don't really endorse taking any quantitative conclusions literally rather than as a prompt to investigate the issue more closely. That said, if you are able to investigate this question usefully I suspect you should be earning more than 20 pounds / hour.)

I'm very hesitant about EAs giving up on common-sense morality based on naive utilitarian calculations. In the first place, I don't think that most EAs' moral reasoning is sufficiently sophisticated to outweigh simple heuristics like "when there are really big gains from trade, take them" (if society is willing to pay 240 pounds / hour for your time, and you value it at 16 pounds per hour, those are pretty big gains from trade). In the second place, even a naive utilitarian should be concerned that the rest of the world will be uncooperative with and unhappy with utilitarians if we are less altruistic than normal people in the ways that matter to our communities.

Comment by paul_christiano on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-11T16:04:41.197Z · score: 3 (3 votes) · EA · GW

On capability amplification:

MIRI's traditional goal would allow you to break cognition down into steps that we can describe explicitly and implement on transistors, things like "perform a step of logical deduction," "adjust the probability of this hypothesis," "do a step of backwards chaining," etc. This division does not need to be competitive, but it needs to be reasonably close (close enough to obtain a decisive advantage).

Capability amplification requires breaking cognition down into steps that humans can implement. This decomposition does not need to be competitive, but it needs to be efficient enough that it can be implemented during training. Humans can obviously implement more than transistors; the main difference is that in the agent foundations case you need to figure out every response in advance (but then you can have a correspondingly greater reason to think that the decomposition will work / will preserve alignment).

I can talk in more detail about the reduction from (capability amplification --> agent foundations) if it's not clear whether it is possible and it would have an effect on your view.

On competitiveness:

I would prefer to be competitive with non-aligned AI rather than count on forming a singleton, but this isn't really a requirement of my approach. When comparing the difficulty of two approaches you should presumably compare the difficulty of achieving a fixed goal with one approach or the other.

On reliability:

On the agent foundations side, it seems like plausible approaches involve figuring out how to peer inside the previously-opaque hypotheses, or understanding what characteristic of hypotheses can lead to catastrophic generalization failures and then excluding those from induction. Both of these seem likely applicable to ML models, though would depend on how exactly they play out.

On the ML side, I think the other promising approaches involve either adversarial training or ensembling / unanimous votes, which could be applied to the agent foundations problem.

Comment by paul_christiano on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-10T17:37:42.458Z · score: 10 (10 votes) · EA · GW

I agree with this basic point, but I think on the other side there is a large gap in concreteness that makes it much easier to usefully criticize my approach (I'm at the stage of actually writing pseudocode and code which we can critique).

So far I think that the problems in my approach will also appear for MIRI's approach. For example:

  • Solomonoff induction or logical inductors have reliability problems that are analogous to reliability problems for machine learning. So to carry out MIRI's agenda either you need to formulate induction differently, or you need to somehow solve these problems. (And as far as I can tell, the most promising approaches to this problem apply both to MIRI's version and the mainstream ML version.) I think Eliezer has long understood this problem and has alluded to it, but it hasn't been the topic of much discussion (I think largely because MIRI/Eliezer have so many other problems on their plates).
  • Capability amplification requires breaking cognitive work down into smaller steps. MIRI's approach also requires such a breakdown. Capability amplification is easier in a simple formal sense (that if you solve the agent foundations you will definitely solve capability amplification, but not the other way around).
  • I've given some concrete definitions of deliberation/extrapolation, and there's been public argument about whether they really capture human values. I think CEV has avoided those criticisms not because it solves the problem, but because it is sufficiently vague that it's hard to criticize along these lines (and there are sufficiently many other problems that this one isn't even at the top of the list). If you want to actually give a satisfying definition of CEV, I feel you are probably going to have to go down the same path that started with this post. I suspect Eliezer has some ideas for how to avoid these problems, but at this point those ideas have been subject to even less public discussion than my approach.

I agree there are further problems in my agenda that will be turned up by my discussion. But I'm not sure there are fewer such problems than for the MIRI agenda, since I think that being closer to concreteness may more than outweigh the smaller amount of discussion.

If you agree that many of my problems also come up eventually for MIRI's agenda, that's good news about the general applicability of MIRI's research (e.g. the reliability problems for Solomonoff induction may provide a good bridge between MIRI's work and mainstream ML), but I think it would also be a good reason to focus on the difficulties that are common to both approaches rather than to problems like decision theory / self-reference / logical uncertainty / naturalistic agents / ontology identification / multi-level world models / etc.

Comment by paul_christiano on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-08T16:05:16.373Z · score: 8 (8 votes) · EA · GW

You might think that "learning to reason from humans" doesn't accomplish (2) because it makes the AI human-limited. If we want an advanced AI to help us create the kind of world that humans would want "if we knew more, thought faster, were more the people we wished we were" etc. then the approval of actual humans might, at some point, cease to be helpful.

A human can spend an hour on a task, and train an AI to do that task in milliseconds.

Similarly, an aligned AI can spend an hour on a task, and train its successor to do that task in milliseconds.

So you could hope to have a sequence of nice AIs, each significantly smarter than the last, eventually reaching the limits of technology while still reasoning in a way that humans would endorse if they knew more and thought faster.

(This is the kind of approach I've outlined and am working on, and I think that most work along the lines of "learn from human reasoning" will make a similar move.)

Comment by Paul_Christiano on [deleted post] 2017-05-01T14:08:32.579Z

Scott links to this study, which is more convincing. They measure the difference between "physical mild (slap, spank)" and "physical harsh (use weapon, punch, kick)" punishment, with ~10% of children in the latter category. They consider children of twins to control for genetic confounders, and find something like a 0.2 SD effect on measures of behavioral problems at age 25. There is still confounding (e.g. households where parents beat their kids may be worse in other ways), and the effects are smaller and for rarer forms of punishment, but it is getting somewhere.

Comment by Paul_Christiano on [deleted post] 2017-05-01T00:22:04.938Z

The reported correlations between physical punishment and life outcomes, which underlie the headline $3.6 trillion / year figure, seem unlikely to be causal. I only clicked on the first study, but it made very little effort to control for any of the obvious confounders. (The two relevant controls are mother's education and presence of the father.) The confounding is sufficiently obvious and large that the whole exercise seems kind of crazy. On top of that, as far as I can tell, a causal effect of this size would be inconsistent with adoption studies.

It would be natural to either start with the effect on kids' welfare, which seems pretty easy to think about, or else make a much more serious effort to actually figure out the long-term effects.

Comment by paul_christiano on Utopia In The Fog · 2017-03-28T22:53:56.236Z · score: 4 (4 votes) · EA · GW

If you drop the assumption that the agent will be all-powerful and far beyond human intelligence then a lot of AI safety work isn't very applicable anymore, while it increasingly needs to pay attention to multi-agent dynamics

I don't think this is true in very many interesting cases. Do you have examples of what you have in mind? (I might be pulling a no-true-scotsman here, and I could imagine responding to your examples with "well that research was silly anyway.")

Whether or not your system is rebuilding the universe, you want it to be doing what you want it to be doing. Which "multi-agent dynamics" do you think change the technical situation?

the claim isn't that evolution is intrinsically "against" any particular value, it's that it's extremely unlikely to optimize for any particular value, and the failure to do so nearly perfectly is catastrophic

If evolution isn't optimizing for anything, then you are left with the agents' optimization, which is precisely what we wanted. I thought you were telling a story about why a community of agents would fail to get what they collectively want. (For example, a failure to solve AI alignment is such a story, as is a situation where "anyone who wants to destroy the world has the option," as is the security dilemma, and so forth.)

Yes, or even implementable in current systems.

We are probably on the same page here. We should figure out how to build AI systems so that they do what we want, and we should start implementing those ideas ASAP (and they should be the kind of ideas for which that makes sense). When trying to figure out whether a system will "do what we want" we should imagine it operating in a world filled with massive numbers of interacting AI systems all built by people with different interests (much like the world is today, but more).

The point you are quoting is not about just any conflict, but the security dilemma and arms races. These do not significantly change with complete information about the consequences of conflict.

You're right.

Unsurprisingly, I have a similar view about the security dilemma (e.g. think about automated arms inspections and treaty enforcement, I don't think the effects of technological progress are at all symmetrical in general). But if someone has a proposed intervention to improve international relations, I'm all for evaluating it on its merits. So maybe we are in agreement here.

Comment by paul_christiano on Utopia In The Fog · 2017-03-28T16:34:18.160Z · score: 12 (12 votes) · EA · GW

It's great to see people thinking about these topics and I agree with many of the sentiments in this post. Now I'm going to write a long comment focusing on those aspects I disagree with. (I think I probably agree with more of this sentiment than most of the people working on alignment, and so I may be unusually happy to shrug off these criticisms.)

Contrasting "multi-agent outcomes" and "superintelligence" seems extremely strange. I think the default expectation is a world full of many superintelligent systems. I'm going to read your use of "superintelligence" as "the emergence of a singleton concurrently with the development of superintelligence."

I don't consider the "single superintelligence" scenario likely, but I don't think that has much effect on the importance of AI alignment research or on the validity of the standard arguments. I do think that the world will gradually move towards being increasingly well-coordinated (and so talking about the world as a single entity will become increasingly reasonable), but I think that we will probably build superintelligent systems long before that process runs its course.

The future looks broadly good in this scenario given approximately utilitarian values and the assumption that ems are conscious, with a large growing population of minds which are optimized for satisfaction and productivity, free of disease and sickness.

On total utilitarian values, the actual experiences of brain emulations (including whether they have any experiences) don't seem very important. What matters are the preferences according to which emulations shape future generations (which will be many orders of magnitude larger).

"freewheeling evolutionary developments, while continuing to produce complex and intelligent forms of organization, lead to the gradual elimination of all forms of being that we care about"

Evolution doesn't really select against what we value, it just selects for agents that want to acquire resources and are patient. This may cut away some of our selfish values, but mostly leaves unchanged our preferences about distant generations.

(Evolution might select for particular values, e.g. if it's impossible to reliably delegate or if it's very expensive to build systems with stable values. But (a) I'd bet against this, and (b) understanding this phenomenon is precisely the alignment problem!)

(I discuss several of these issues here, Carl discusses evolution here.)

Whatever the type of agent, arms races in future technologies would lead to opportunity costs in military expenditures and would interfere with the project of improving welfare. It seems likely that agents designed for security purposes would have preferences and characteristics which fail to optimize for the welfare of themselves and their neighbors. It’s also possible that an arms race would destabilize international systems and act as a catalyst for warfare.

It seems like you are paraphrasing a standard argument for working on AI alignment rather than arguing against it. If there weren't competitive pressure / selection pressure to adopt future AI systems, then alignment would be much less urgent since we could just take our time.

There may be other interventions that improve coordination/peace more broadly, or which improve coordination/peace in particular possible worlds etc., and those should be considered on their merits. It seems totally plausible that some of those projects will be more effective than work on alignment. I'm especially sympathetic to your first suggestion of addressing key questions about what will/could/should happen.

Not only is this a problem on its own, but I see no reason to think that the conditions described above wouldn’t apply for scenarios where AI agents turned out to be the primary actors and decisionmakers rather than transhumans or posthumans.

Over time it seems likely that society will improve our ability to make and enforce deals, to arrive at consensus about the likely consequences of conflict, to understand each others' situations, or to understand what we would believe if we viewed others' private information.

More generally, we would like to avoid destructive conflict and are continuously developing new tools for getting what we want / becoming smarter and better-informed / etc.

And on top of all that, the historical trend seems to basically point to lower and lower levels of violent conflict, though this is in a race with greater and greater technological capacity to destroy stuff.

I would be more than happy to bet that the intensity of conflict declines over the long run. I think the question is just how much we should prioritize pushing it down in the short run.

“the only way to avoid having all human values gradually ground down by optimization-competition is to install a Gardener over the entire universe who optimizes for human values.”

I disagree with this. See my earlier claim that evolution only favors patience.

I do agree that some kinds of coordination problems need to be solved, for example we must avoid blowing up the world. These are similar in kind to the coordination problems we confront today though they will continue to get harder and we will have to be able to solve them better over time---we can't have a cold war each century with increasingly powerful technology.

There is still value in AI safety work... but there are other parts of the picture which need to be explored

This conclusion seems safe, but it would be safe even if you thought that early AI systems will precipitate a singleton (since one still cares a great deal about the dynamics of that transition).

Better systems of machine ethics which don’t require superintelligence to be implemented (as coherent extrapolated volition does)

By "don't require superintelligence to be implemented," do you mean systems of machine ethics that will work even while machines are broadly human level? That will work even if we need to solve alignment prior long before the emergence of a singleton? I'd endorse both of those desiderata.

I think the main difference in alignment work for unipolar vs. multipolar scenarios is how high we draw the bar for "aligned AI," and in particular how closely competitive it must be with unaligned AI. I probably agree with your implicit claim, that they either must be closely competitive or we need new institutional arrangements to avoid trouble.

Rather than having a singleminded focus on averting a particular failure mode

I think the mandate of AI alignment easily covers the failure modes you have in mind here. I think most of the disagreement is about what kinds of considerations will shape the values of future civilizations.

both working on arguments that agents will be linked via a teleological thread where they accurately represent the value functions of their ancestors

At this level of abstraction I don't see how this differs from alignment. I suspect the details differ a lot, in that the alignment community is very focused on the engineering problem of actually building systems that faithfully pursue particular values (and in general I've found that terms like "teleological thread" tend to be linked with persistently low levels of precision).

Comment by paul_christiano on Donor lottery details · 2017-03-25T23:03:36.795Z · score: 4 (4 votes) · EA · GW

I owe Michael Nielsen $60k to donate as he pleases if [beacon.nist.gov](beacon.nist.gov/home) is between 0000000000... and 028F5C28F5... at noon PST on 2017/4/2.
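
Reading the beacon output as a hexadecimal fraction of its full range (my interpretation of the quoted interval, not something spelled out in the comment), the condition triggers with probability very close to 1%:

```python
# The interval [0000000000..., 028F5C28F5...) as a fraction of the full range.
threshold = int("028F5C28F5", 16)   # 10,995,116,277
full_range = 16 ** 10               # 1,099,511,627,776
print(threshold / full_range)       # ~0.01, i.e. about a 1% chance
```
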

Comment by paul_christiano on What Should the Average EA Do About AI Alignment? · 2017-03-01T02:27:27.967Z · score: 7 (7 votes) · EA · GW

You don't try to prevent nuclear disaster by making friendly nuclear missiles, you try to keep them out of the hands of nefarious or careless agents or provide disincentives for building them in the first place.

The difficulty of the policy problem depends on the quality of our technical solutions: how large an advantage can you get by behaving unsafely? If the answer is "you get big advantages for sacrificing safety, and a small group behaving unsafely could cause a big problem" then we have put ourselves in a sticky situation and will need to conjure up some unusually effective international coordination.

A perfect technical solution would make the policy problem relatively easy---if we had a scalable+competitive+secure solution to AI control, then there would be minimal risk from reckless actors. On the flip side, a perfect policy solution would make the technical problem relatively easy since we could just collectively decide not to build any kind of AI that could cause trouble. In reality we are probably going to need both.

(I wrote about this here.)

You could hold the position that the advantages from building uncontrolled AI will predictably be very low even without any further work. I disagree strongly with that and think that it contradicts the balance of public argument, though I don't know if I'd call it "easily corrected."

Comment by paul_christiano on Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk · 2017-01-20T18:58:42.593Z · score: 1 (1 votes) · EA · GW

Ah, that makes a lot more sense, sorry for misinterpreting you. (I think Toby has a view closer to the one I was responding to, though I suspect I am also oversimplifying his view.)

I agree that there are important philosophical questions that bear on the goodness of building various kinds of (unaligned) AI, and I think that those questions do have impact on what we ought to do. The biggest prize is if it turns out that some kinds of unaligned AI are much better than others, which I think is plausible. I guess we probably have similar views on these issues, modulo me being more optimistic about the prospects for aligned AI.

I don't think that an understanding of qualia is an important input into this issue though.

For example, from a long-run ethical perspective, whether or not humans have qualia is not especially important, and what mostly matters is human preferences (since those are what shape the future). If you created a race of p-zombies that nevertheless shared our preferences about qualia, I think it would be fine. And "the character of human preferences" is a very different kind of object than qualia. These questions are related in various ways (e.g. our beliefs about qualia are related to our qualia and to philosophical arguments about consciousness), but after thinking about that a little bit I think it is unlikely that the interaction is very important.

To summarize, I do agree that there are time-sensitive ethical questions about the moral value of creating unaligned AI. This was item 1.2 in this list from 4 years ago. I could imagine concluding that the nature of qualia is an important input into this question, but don't currently believe that.

Comment by paul_christiano on Donor lotteries: demonstration and FAQ · 2016-12-31T03:32:26.957Z · score: 3 (3 votes) · EA · GW

It looks like the total will be around $50k, so I'm going to reduce the cut to 0.5%.

Comment by paul_christiano on Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk · 2016-12-20T03:20:18.884Z · score: 2 (2 votes) · EA · GW

(effectively) prematurely settling on a utility function whose goodness depends heavily on the nature of qualia

This feels extremely unlikely; I don't think we have plausible paths to obtaining a non-negligibly good outcome without retaining the ability to effectively deliberate about e.g. the nature of qualia. I also suspect that we will be able to solve the control problem, and if we can't then it will be because of failure modes that can't be avoided by settling on a utility function. Of course "can't see any way it can happen" is not the same as "am justifiably confident it won't happen," but I think in this case it's enough to get us to pretty extreme odds.

More precisely, I'd give 100:1 against: (a) we will fail to solve the control problem in a satisfying way, (b) we will fall back to a solution which depends on our current understanding of qualia, (c) the resulting outcome will be non-negligibly good according to our view about qualia at the time that we build AI, and (d) it will be good because we hold that view about qualia.

(My real beliefs might be higher than 1% just based on "I haven't thought about it very long" and peer disagreement. But I think it's more likely than not that I would accept a bet at 100:1 odds after deliberation, even given that reasonable people disagree.)

(By "non-negligibly good" I mean that we would be willing to make some material sacrifice to improve its probability compared to a barren universe, perhaps $1000 per 1% increase. By "because" I mean that the outcome would have been non-negligibly worse according to that view if we had not held it.)

I'm not sure if there is any way to turn the disagreement into a bet. Perhaps picking an arbiter and looking at their views in a decade? (e.g. Toby, Carl Shulman, Wei Dai?) This would obviously involve less extreme odds.

Probably more interesting than betting is resolving the disagreement. This seems to be a slightly persistent disagreement between me and Toby; I have never managed to really understand his position, but we haven't talked about it much. I'm curious about what kinds of solutions you see as plausible---it sounds like your view is based on a more detailed picture rather than an "anything might happen" view.

Comment by paul_christiano on Contra the Giving What We Can pledge · 2016-12-20T03:11:03.753Z · score: 5 (5 votes) · EA · GW

I think that donor lotteries are a considerably stronger argument than GiveWell for the claim "donating 10% doesn't have to be time-consuming."

Your argument (with GiveWell in place of a lottery) requires that either (a) you think that GiveWell charities are clearly the best use of funds, or (b) by "doesn't have to be time-consuming" you mean "if you don't necessarily want to do the most good." I don't think you should be confused about why someone would disagree with (a), nor about why someone would think that (b) is a silly usage.

If there were low-friction donor lotteries, I suspect that most small GiveWell donors would be better-served by gambling up to perhaps $1M and then thinking about it at considerably greater length. I expect a significant fraction of them would end up funding something other than GiveWell top charities.

(I was originally supportive but kind of lukewarm about donor lotteries, but I think I've now come around to Carl's level of enthusiasm.)
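To put rough numbers on the intuition above, here is a minimal simulation sketch. The contribution and pot sizes are hypothetical, chosen only to mirror the "gamble up to $1M" example, and the function is illustrative rather than a description of any actual lottery's mechanics: a risk-neutral donor directs the same amount in expectation whether or not they enter, but only pays the cost of serious charity research in the small fraction of worlds where they win.

```python
import random

def simulate(contribution: float, pot: float, trials: int = 200_000, seed: int = 0):
    """Monte Carlo sketch of a donor lottery: win probability is
    contribution / pot, and the winner directs the whole pot."""
    rng = random.Random(seed)
    p_win = contribution / pot
    directed = 0.0      # dollars this donor directs, summed over simulated worlds
    wins = 0            # worlds in which they win (and so do the research)
    for _ in range(trials):
        if rng.random() < p_win:
            directed += pot
            wins += 1
    return directed / trials, wins / trials

avg_directed, research_rate = simulate(contribution=5_000, pot=1_000_000)
print(f"average dollars directed per world: {avg_directed:,.0f}")    # ~5,000
print(f"fraction of worlds doing the research: {research_rate:.2%}")  # ~0.50%
```

The sketch just shows that expected dollars directed equal the contribution, so entering costs nothing in expectation while concentrating the research effort where it can be done carefully.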

Comment by paul_christiano on Contra the Giving What We Can pledge · 2016-12-20T03:00:02.087Z · score: 2 (2 votes) · EA · GW

I assume this discussion is mostly aimed at people outside of CEA who are considering whether to take and help promote the pledge. I think there are many basic points which those people should probably understand but which CEA (understandably) isn't keen to talk about, and it is reasonable for people outside of CEA to talk about them instead.

I expect this discussion wasn't worth the time at any rate, but it seems like sharing it with CEA isn't really going to save time on net.

Comment by paul_christiano on Contra the Giving What We Can pledge · 2016-12-20T02:47:51.018Z · score: 6 (6 votes) · EA · GW

Secondly: An "evil future you" who didn't care about the good you can do through donations probably wouldn't care much about keeping promises made by a different kind of person in the past either, I wouldn't think.

[...] there's no point having a commitment device to prompt you to follow through on something you don't think you should do

Usually we promise to do something that we would not have done otherwise, i.e. which may not be in line with our future self's interests. The promise "I will do X if my future self wants to" is gratuitous.

When I promise to do something I will try to do it, even if my preferences change. Perhaps you are reading "evil" as meaning "lacks integrity" rather than "is not altruistic," but in context that doesn't make much sense.

It seems reasonable for GWWC to say that the GWWC pledge is intended more as a statement of intent than as a commitment; it would be interesting to understand whether this is how most people who come into contact with GWWC perceive the pledge. If there is systematic misperception, it seems like the appropriate response is "oops, sorry" and to fix the misperception.

Thirdly: The coordination thing doesn't really matter here because you are only 'cooperating' with your future self, who can't really reject you because they don't exist yet (unlike another person who is deciding whether to help you).

It does not seem to me that the main purpose of taking the GWWC pledge, nor its main effect, is to influence the pledger's behavior.

Comment by paul_christiano on Donor lotteries: demonstration and FAQ · 2016-12-11T00:08:18.392Z · score: 3 (3 votes) · EA · GW

Note that this is now being implemented by donation swapping, so small donors don't have to put in any extra work.