Posts

Expected value theory is fanatical, but that's a good thing 2020-09-21T08:48:23.496Z · score: 49 (24 votes)
Why not give 90%? 2020-03-23T02:53:54.938Z · score: 50 (26 votes)
Job opportunity at the Future of Humanity Institute and Global Priorities Institute 2018-04-01T13:13:35.423Z · score: 4 (21 votes)
New climate change report from Giving What We Can 2016-04-19T15:29:26.329Z · score: 4 (6 votes)

Comments

Comment by haydenw on Expected value theory is fanatical, but that's a good thing · 2020-10-01T07:27:43.047Z · score: 5 (2 votes) · EA · GW

Yep, we've got pretty good evidence that our spacetime will have infinite 4D volume and, if you arranged happy lives uniformly across that volume, we'd have to say that the outcome is better than any outcome with merely finite total value. Nothing logically impossible there (even if it were practically impossible).

That said, assigning value "∞" to such an outcome is pretty crude and unhelpful. And what it means will depend entirely on how we've defined ∞ in our number system. So what I think we should do in such a case is not say that V equals such-and-such, but rather ditch the value function once you've left the domain where it works. Instead, just deal with your set of possible outcomes, your lotteries (probability measures over that set), and a betterness relation, which might sometimes follow a value function but might also extend to outcomes beyond the function's domain. That's what people tend to do in the infinite aggregation literature (including the social choice papers that consider infinite time horizons), and for good reason.

Comment by haydenw on Expected value theory is fanatical, but that's a good thing · 2020-10-01T07:19:52.040Z · score: 3 (1 votes) · EA · GW

That'd be fine for the paper, but I do think we face at least some decisions in which EV theory gets fanatical. The example in the paper - Dyson's Wager - is intended as a mostly realistic example of that. Another would be a Pascal's Mugging case in which the threat is a moral one. I know I put P>0 on that sort of thing being possible, so I'd face cases like that if anyone really wanted to exploit me. (That said, I think we can probably overcome Pascal's Muggings using other principles.)

Comment by haydenw on Expected value theory is fanatical, but that's a good thing · 2020-10-01T07:15:17.439Z · score: 3 (1 votes) · EA · GW

Thanks!

Good point about Minimal Tradeoffs. But there's a worry that, if you don't make it a fixed r, you could have an infinite sequence of decreasing r's that never goes arbitrarily low (e.g., 1, 3/4, 5/8, 9/16, 17/32, 33/64, ...).
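
To spell out that example: the sequence is

\[
r_n = \tfrac{1}{2} + 2^{-n} \qquad (n = 1, 2, 3, \dots),
\]

which is strictly decreasing but bounded below by 1/2, so the r's never become arbitrarily small.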

I agree that Scale-Consistency isn't as compelling as some of the other key principles in there. And, with totalism, it could be replaced with the principle you suggest, on which multiplying a world by k is just duplicating it k times. Assuming totalism, that'd be a weaker claim, which is good. I guess one minor worry is that, if we reject totalism, duplicating a world k times wouldn't scale its value by k. So Scale-Consistency is maybe the better principle for arguing in greater generality. But yeah, it's not needed for totalism.


> > Nor can they say that L_safe plus an additional payoff b is better than L_risky plus the same b.
> They can't say this for all b, but they can for some b, right? Aren't they saying exactly this when they deny Fanaticism ("If you deny Fanaticism, you know that no matter how your background uncertainty is resolved, you will deny that L_risky plus b is better than L_safe plus b.")? Is this meant to follow from L_risky + B ≻ L_safe + B? I think that's what you're trying to argue after, though.

Nope, wasn't meaning for the statement involving little b to follow from the one about big B. b is a certain payoff, while B is a lottery. When we add b to either lottery, we're just adding a constant to all of the payoffs. Then, if lotteries can be evaluated by their cardinal payoffs, we've got to say that L_1 + b > L_2 + b iff L_1 > L_2.

> Aren't we comparing lotteries, not definite outcomes? Your vNM utility function could be arctan(∑_i u_i), where the function inside the arctan is just the total utilitarian sum. Let L_safe = π/2, and L_risky = ∞ with probability 0.5 (which is not small, but this is just to illustrate) and 0 otherwise. Then these have the same expected value without a background payoff (or b = 0), but with b > 0, the safe option has higher EV, while with b < 0, the risky option has higher EV.

Yep, that utility function is bounded, so using it and EU theory will avoid Fanaticism and bring on this problem. So much the worse for that utility function, I reckon.

And, in a sense, we're not just comparing lotteries here. L_risky + B is two independent lotteries summed together, and we know in advance that you're not going to affect B at all. In fact, it seems like B is the sort of thing you shouldn't have to worry about at all in your decision-making. (After all, it's a bunch of events off in ancient India or in far distant space, outside your lightcone.) In the moral setting we're dealing with, it seems entirely appropriate to cancel B from both sides of the comparison and just look at L_risky and L_safe, or to conditionalise the comparison on whatever B will actually turn out as: some b. That's roughly what's going on there.
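
Spelled out under expected total value (assuming value just adds across independent lotteries and that B has a finite expectation):

\[
\mathrm{EV}(L_{\text{risky}} + B) - \mathrm{EV}(L_{\text{safe}} + B) = \mathrm{EV}(L_{\text{risky}}) - \mathrm{EV}(L_{\text{safe}}),
\]

since the independent background lottery B contributes the same EV(B) to both sides and drops out. A bounded utility function like the arctan one blocks exactly this cancellation, because the totals sit inside a non-linear function.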

Comment by haydenw on Expected value theory is fanatical, but that's a good thing · 2020-10-01T06:54:19.041Z · score: 3 (2 votes) · EA · GW

Just a note on the Pascal's Mugging case: I do think the case can probably be overcome by appealing to some aspect of the strategic interaction between different agents. But I don't think it comes out of the worry that they'll continue mugging you over and over. Suppose you (morally) value losing $5 to the mugger at -5 and losing nothing at 0 (on some cardinal scale). And you value losing every dollar you ever earn in your life at -5,000,000. And suppose you have credence (or, alternatively, evidential probability) of p that the mugger can and will generate any amount of moral value or disvalue they claim they will. Then, as long as they claim they'll bring about an outcome worse than -5,000,000/p if you don't give them $5, or better than +5,000,000/p if you do, EV theory says you should hand it over. And likewise for any other fanatical theory, if the payoff is just scaled far enough up or down.
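
Roughly, the arithmetic behind that threshold (taking the worst case of complying to be losing every dollar you ever earn, valued at -5,000,000): if X is the value of the outcome the mugger threatens should you refuse, then

\[
\mathrm{EV}(\text{refuse}) = p\,X < p \cdot \left(-\frac{5{,}000{,}000}{p}\right) = -5{,}000{,}000 \le \mathrm{EV}(\text{hand it over}),
\]

and the positive version with +5,000,000/p works symmetrically. So the repeated-mugging worry doesn't change the conclusion; the claimed payoff just has to be scaled up far enough.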

Comment by haydenw on Expected value theory is fanatical, but that's a good thing · 2020-10-01T06:48:03.446Z · score: 3 (1 votes) · EA · GW

Yes, in practice that'll be problematic. But I think we're obligated to take both possible payoffs into account. If we do suspect the large negative payoffs, it seems pretty awful to ignore them in our decision-making. And then there's a weird asymmetry if we pay attention to the negative payoffs but not the positive.

More generally, Fanaticism isn't a claim about epistemology. A good epistemic and moral agent should first do their research, consider all of the possible scenarios in which their actions backfire, and put appropriate probabilities on them. If they do the epistemic side right, it seems fine for them to act according to Fanaticism when it comes to decision-making. But in practice, yeah, that's going to be an enormous 'if'.

Comment by haydenw on Expected value theory is fanatical, but that's a good thing · 2020-10-01T06:40:28.685Z · score: 3 (1 votes) · EA · GW

Both cases are traditionally described in terms of payoffs and costs just for yourself, and I'm not sure we have quite as strong a justification for being risk-neutral or fanatical in that case. In particular, I find it at least a little plausible that individuals should effectively have bounded utility functions, whereas it's not at all plausible that we're allowed to do that in the moral case - it'd lead to something a lot like the old Egyptology objection.

That said, I'd accept Pascal's wager in the moral case. It comes out of Fanaticism fairly straightforwardly, with some minor provisos. But Pascal's Mugging seems avoidable - for it to arise, we need another agent interacting with you strategically to get what they want. I think it's probably possible for an EV maximiser to avoid the mugging as long as we make their decision-making rule a bit richer in strategic interactions. But that's just speculation - I don't have a concrete proposal for that!

Comment by haydenw on Why not give 90%? · 2020-03-27T22:56:51.808Z · score: 3 (3 votes) · EA · GW

This is pretty off-topic, sorry.

Comment by haydenw on Why not give 90%? · 2020-03-25T11:22:47.758Z · score: 2 (2 votes) · EA · GW
> I think this is actually quite a complex question.

Definitely! I simplified it a lot in the post.

> If the chance is high enough, it may in fact be prudent to front-load your donations, so that you can get as much out of yourself with your current values as possible.

Good point! I hadn't thought of this. I think it ends up being best to front-load if your annual risk of giving up isn't very sensitive to the amount you donate, is high, and your income isn't going to increase a whole lot over your lifetime. I think the first two things might be true of a lot of people. And the third will effectively be true too, if your income doesn't increase by more than 2-3x.

> If we take the data from here with 0 grains of salt, you're actually less likely to have value drift at 50% of income (~43.75% chance of value drift) than at 10% (~63.64% chance of value drift). There are many reasons this might be, such as consistency and justification effects, but the point is that the object-level question is complicated :).

My guess is that the main reason for that is that more devoted people tend to pledge higher amounts. I think if you took some of those 10%ers and somehow made them choose to switch to 50%, they'd be far more likely than before to give up.

But yeah, it's not entirely clear that P(giving up) increases with amount donated, or that either causally affects the other. I'm just going by intuition on that.

Comment by haydenw on Why not give 90%? · 2020-03-23T23:30:08.066Z · score: 4 (2 votes) · EA · GW
> In reality, if we can figure out how to give a lot for one or two years without becoming selfish, we are more likely to sustain that for a longer period of time. This boosts the case for making larger donations.

Yep, I agree. In general, the real-life case is going to be more complicated in a bunch of ways, which tug in both directions.

Still, I suspect that, even if someone managed to donate a lot for a few years, there'd still be some small independent risk of giving up each year. And even a small such risk cuts down your expected lifetime donations by quite a bit: e.g., a 1% p.a. risk of giving up over 37 years reduces the expected value by about 16% (and far more if your income increases over time).
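
For the arithmetic (assuming equal annual donations and that the first year's donation is certain):

\[
\frac{1}{37}\sum_{t=0}^{36} 0.99^{\,t} = \frac{1 - 0.99^{37}}{37 \times 0.01} \approx 0.84,
\]

so you expect to make only about 84% of the donations you'd have made with no risk of giving up - roughly a 16% reduction, and more if your donations grow with your income, since the later (larger) donations are the ones most likely to be lost.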

> Moreover, I rather doubt that the probability of turning selfish and giving up on Effective Altruism can be nearly as high as 50% in a given year. If it were that high, I think we'd have more evidence of it, in spite of the typical worries regarding how we can hear back from people who aren't interested anymore.

Yep, that seems right. Certainly at the 10% donation level, it should be a lot lower than 50% (I hope!). I was thinking of 50% p.a. as the probability of giving up after ramping up to 90% per year, at least in my own circumstances (living on a pretty modest grad student stipend).

Also, there's a little bit of relevant data on this in this post. Among the 38 people that person surveyed, the dropout rate was >50% over 5 years. So it's pretty high at least. But not clear how much of that was due to feeling it was too demanding and then getting demotivated, rather than value drift.

> Also, this doesn't break your point, but I think percentages are the wrong way to think about this. In reality, donations should be much more dependent upon local cost of living than upon your personal salary. If COL is $40k and you make $50k then donate up to $10k. If COL is $40k and you make $200k then donate up to $160k.

Yes, good point! I'd agree that that's a better way to look at it, especially for making broad generalisations over different people.

Comment by haydenw on Why not give 90%? · 2020-03-23T23:05:10.349Z · score: 3 (2 votes) · EA · GW
> The assumption that if she gives up, she is most likely to give up on donating completely seems not obvious to me. I would think that it's more likely she scales back to a lower level, which would change the conclusion.

Yep, I agree that that's probably more likely. I focused on giving up completely to keep things simple. But if giving up completely is even somewhat likely (say, 1% p.a.), it may make a far bigger dent in your expected lifetime donations than the risk of partially scaling back does.

> Perhaps we should be encouraging a strategy where people increase their percentage donated by a few percentage points per year until they find the highest sustainable level for them. Combined with a community norm of acceptance for reductions in amounts donated, people could determine their highest sustainable donation level while lowering risk of stopping donations entirely.

That certainly sounds sensible to me!

Comment by haydenw on Problems with EA representativeness and how to solve it · 2018-08-05T13:18:53.113Z · score: 5 (8 votes) · EA · GW

I'd add one more: having to put your resources towards more speculative, chancy causes is more demanding.

When donating our money and time to something like bednets, the cost is mitigated by the personal satisfaction of knowing that we've (almost certainly) had an impact. When donating to some activity which has only a tiny chance of success (e.g., x-risk mitigation), most of us won't get quite the same level of satisfaction. And it's pretty demanding to have to give up not only a large chunk of your resources but also the satisfaction of having actually achieved something.

Rob Long has written a bit about this - https://experiencemachines.wordpress.com/2018/06/10/demanding-gambles/

Comment by haydenw on Job opportunity at the Future of Humanity Institute and Global Priorities Institute · 2018-04-06T11:23:23.652Z · score: 1 (1 votes) · EA · GW

Sorry about that, I hadn't seen that thread. Consider me well and truly chastened!

Comment by haydenw on New climate change report from Giving What We Can · 2016-04-26T09:00:27.038Z · score: 0 (0 votes) · EA · GW

Hi Sam,

Thanks! Glad you liked it. It's currently just a preview and not actually published yet, which is why some links and functionality may not work (and the post on the model I used is yet to go up).

Regarding Q1 - I would like to, yeah. When it comes to the probabilities of different levels of warming, though, it's super uncertain. The ~1% chance of 10 degrees of warming is only under one of several possible probability distributions, and we really just don't have any clue which of those distributions is accurate. And in addition to the uncertainty there, we know very little about just how bad those high levels of warming would be for us, as there's minimal research on it. So giving expected values would be a major challenge - one which I'm not sure I'm up to, but which definitely warrants more research in future. There are also the values and risk profiles of different donors to think of - many want direct, measurable benefits rather than minor reductions in existential risk somewhere in the future - so even if we got a decent estimate of the expected impact of emission reduction including tail risks, it'd have to be given separately.

Q2 - Largely the same issue as above - seriously difficult to estimate. But as for mentioning them, that's certainly something that can be added in before we publish. Cheers for that!

Q3 - Yeah, reducing present emissions just temporarily won't really help much (unless it gives us longer to adapt). But when I'm talking about emissions reduction, I mean permanently preventing that quantity of emissions from ever being emitted (e.g. by preventing deforestation). Reducing (or preventing) emissions in this way should not only delay the point at which we reach a given temperature but also reduce the eventual peak temperature. And Michael's spot on with his reply too (we haven't looked at methane emissions specifically here, but it does seem like it might be possible to reduce methane emissions at roughly $5/tCO2eq through ACE's recommended animal charities - highly uncertain though).

Q4 - I've also just finished an evaluation of the most promising lobbying organisation we've found. It should be up sometime soon. We think it might be a slightly better option for donors with a greater appetite for risk, but for others it still seems like Cool Earth is the better option.

Q5 - The US. That seems pretty certain: not only is the US a massive emitter (2nd worldwide), but it's also pretty widely accepted that a lot of action elsewhere won't happen if the US doesn't get the ball rolling. This'll get mentioned in that other evaluation though.

Q6 - I'm not sure. It probably wouldn't be a bad use of most people's time, and the advocacy charity we've been looking at does use a lot of volunteers. Then again, they're currently getting hundreds of thousands of volunteer-hours each year already so it might be more effective to volunteer elsewhere or in a different cause area. I really don't know though.

Comment by haydenw on New climate change report from Giving What We Can · 2016-04-20T10:52:38.802Z · score: 1 (1 votes) · EA · GW

Also, if anyone wants to comment on particular parts of the report, you can comment directly in the original Google Docs.

Main report: https://docs.google.com/document/d/1dZ_82IImZ5iGJ56hOydExurOfYY2zRITlXBRcaSs5v8/edit?usp=sharing
Cool Earth Evaluation: https://docs.google.com/document/d/1_QDKPhPB9l1pQbESDiyOmEpPYc1oEVEP3wf7Q5aqjnw/edit?usp=sharing