Posts

AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement 2020-03-17T02:39:11.791Z · score: 66 (21 votes)
The value of money going to different groups 2017-05-16T13:11:45.984Z · score: 20 (15 votes)

Comments

Comment by toby_ord on The emerging school of patient longtermism · 2020-08-11T09:39:45.752Z · score: 21 (9 votes) · EA · GW

This is a very nice explanation, Ben.

For the record, while I'm perhaps the most prominent voice in EA for our time being one of the most influential there will ever be, I'm also very sympathetic to this approach. For instance, my claim is that this key time period has already been going for 75 years and can't last more than a small number of centuries. This is quite compatible with more important times being 100 years away, and with the arguments that investing over long periods like that could provide a large increase in the expected impact of the resources (even if the time at which they were spent was not more influential). And of course, I might be wrong about the importance of this time. So I am excited to see more work exploring patient longtermism.

Comment by toby_ord on Shapley values: Better than counterfactuals · 2019-12-02T11:39:02.085Z · score: 22 (14 votes) · EA · GW

While I think the Shapley value can be useful, there are clearly cases where the counterfactual value is superior for an agent deciding what to do. Derek Parfit explains this clearly in 'Five Mistakes in Moral Mathematics'. He is arguing against the 'share of the total' view, but at least some of his arguments also apply to the Shapley value (which is basically an improved version of 'share of the total'). In particular, the strongest points you have listed in favour of applying the Shapley value to a moral decision hold when you and the others are all making the decision 'together'. If the others have already committed to their part in a decision, the counterfactual value approach looks better.

e.g. in your first example, if the other party has already paid their $1000 to P, you face a choice between creating 15 units of value by funding P or 10 units by funding the alternative. A simple application of the Shapley value says you should take the action that creates 10 units, predictably making the world worse.
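
To make the comparison concrete, here is a minimal sketch in Python. I'm assuming the example's payoff structure is that P produces its 15 units only if both $1000 donations arrive; the names are illustrative, not taken from the post:

```python
from itertools import permutations

def shapley_values(players, v):
    """Shapley value of each player: the average marginal contribution
    over all orders in which the coalition could assemble."""
    values = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            values[p] += (v(frozenset(coalition)) - before) / len(orders)
    return values

# Assumed payoff structure: charity P produces 15 units only if both
# $1000 donations arrive; nothing otherwise.
v = lambda c: 15 if c == frozenset({"you", "other"}) else 0

print(shapley_values(["you", "other"], v))  # {'you': 7.5, 'other': 7.5}
# A naive Shapley rule compares your 7.5-unit share of P against the
# 10 units from the alternative and picks the alternative. But once
# the other donor has already paid, funding P counterfactually adds
# the full 15 units, which beats 10.
```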

You might be able to get the best of both methods here by treating cases like this, where another agent has already committed to a known choice, as part of the environment when calculating Shapley values. But you need to be clear that this is what you are doing. I consider this kind of approach to be a hybrid of the Shapley and counterfactual value approaches, with Shapley only being applied when the other agents' decisions are still 'live'. As another example, take your first example and add the assumption that the other party hasn't yet decided, but that you know they love charity P and will donate to it for family reasons. In that case, the other party's decision, while not yet made, is not 'live' in the relevant sense, and you should support P as well.

If you are going to pursue what the community could gain from considering Shapley values, then look further into cases like this and the subtleties of applying the Shapley value — and do read that Parfit piece.

Comment by toby_ord on Are we living at the most influential time in history? · 2019-09-19T10:01:40.556Z · score: 11 (6 votes) · EA · GW

I don't have time to get into all the details, but I think that while your intuition is reasonable (I used to share it), the maths does actually turn out my way, at least on one interpretation of what you mean. I looked into this when wondering if the doomsday argument suggested that the EV of the future must be small. Try writing out the algebra for a Gott-style prior that there is an x% chance we are in the first x%, for all x. You get a Pareto distribution: a power law with infinite mean. While on this prior there is very little chance that a big future lies ahead, the size of each possible future compensates for that, so that each order of magnitude of increasing future size contributes an equal expected population, and the sum is infinite.
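
A minimal sketch of that algebra, assuming ~100 billion people so far; P(we are in the first fraction x) = x gives the Pareto tail P(T ≥ t) = N/t:

```python
import math

N = 100e9  # people who have lived so far (~100 billion)

# Gott-style prior: P(we are in the first fraction x) = x implies
# P(total population T >= t) = N / t, a Pareto tail with density N / t**2.
def expected_pop_in_decade(k):
    """Expected population from futures whose total size falls in
    [10^k * N, 10^(k+1) * N): the integral of t * (N / t**2) dt."""
    lo, hi = N * 10 ** k, N * 10 ** (k + 1)
    return N * math.log(hi / lo)  # = N * ln(10), the same for every k

for k in range(4):
    print(k, expected_pop_in_decade(k) / N)  # each prints ~2.3026
# Each order of magnitude contributes the same expected population,
# and there are infinitely many of them, so the expectation diverges.
```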

I'm not quite sure what to make of this, and it may be quite brittle (e.g. if we were somehow certain that there weren't more than 10^100 people in the future, the expected population wouldn't be all that high). But as a raw prior, I really think it is both an extreme outside view, saying we are equally likely to live at any relative position in the sequence, *and* one on which there is extremely high (infinite) EV in the future -- not because it thinks there is any single future whose EV is high, but because the series diverges.

This isn't quite the same as your claim (about influence), but does seem to 'save existential risk work' from this challenge based on priors (I don't actually think it needed saving, but that is another story).

Comment by toby_ord on Are we living at the most influential time in history? · 2019-09-16T10:50:31.544Z · score: 15 (10 votes) · EA · GW

Thanks for this very thorough reply. There are so many strands here that I can't really hope to do justice to them all, but I'll make a few observations.

1) There are two versions of my argument. The weak/vague one is that a uniform prior is wrong and the real prior should decay over time, such that you can't make your extreme claim from priors. The strong/precise one is that it should decay as 1/n^2, in line with a version of LLS. The latter is meant more as an illustration: it is my go-to default for things like this, but my main point here is the weaker one. It seems that you agree that the prior should decay, and that the main question now is whether it does so fast enough to make your prior-based points moot. I'm not quite sure how to resolve that. But I note that from this position, we can't reach your argument that from priors this is far too unlikely for our evidence to overturn (and we also can't reach my statement of the opposite of that).

2) I wouldn't use the LLS prior for arbitrary superlative properties where you fix the total population. I'd use it only if the population over time was radically unknown (so that the first person is much more likely to be the strongest than the thousandth, because there probably won't be a thousand) or where there is a strong time dependency such that the event happening at one time rules out later times.

3) You are right that I am appealing to some structural properties beyond mere superlatives, such as extinction or other permanent lock-in. This is because these things happening in a century would be sufficient for that century to have a decent chance of being the most influential (technically this still depends on the influenceability of the event, but I think most people would grant that, conditional on next century being the end of humanity, it is no longer surprising at all if this or next century were the most influential). So I think that your prior-setting approach proves too much, telling us that there is almost no chance of extinction or permanent lock-in next century (even after updating on evidence). This feels fishy -- a bit like Bostrom's 'presumptuous philosopher' example. It looks even more fishy in your worked example, where the prior is low precisely because of an assumption about how long we will last without extinction: especially as that assumption is compatible with, say, a 50% chance of extinction in the next century. (I don't think this is a knockdown blow, but I'm trying to indicate the part of your argument I think would be most likely to fall, and roughly why.)

4) I agree there is an issue to do with too many hypotheses, and a related issue of what the first timescale should be on which to apply a 1/2 chance of the event occurring. I think these can be dealt with together. You modify the raw LLS prior by some other kind of prior you have for each particular type of event (which you need to have, since some events are sub-events of others and rationality requires you to assign lower probability to them). You could operationalise this by asking over what time frame you'd expect a 1/2 chance of that event occurring. Then LLS isn't acting as an indifference principle, but rather just as a way of keeping track of how to update your ur-prior in light of how many time periods have elapsed without the event occurring. I think this should work out somewhat similarly, just with a stretched PDF that still decays as 1/n^2, but am not sure. There may be a literature on this.

Comment by toby_ord on Are we living at the most influential time in history? · 2019-09-16T10:18:42.947Z · score: 4 (4 votes) · EA · GW

I'm sympathetic to the mixture-of-simple-priors approach and value simplicity a great deal. However, I don't think that the uniform prior up to an arbitrary end point is the simplest, as your comment appears to suggest. e.g. I don't see how it is simpler than an exponential distribution with an arbitrary mean (which is the maximum entropy prior over R+ conditional on a finite mean). I'm not sure if there is a maximum entropy prior over R+ without the finite mean assumption, but 1/x^2 looks right to me for that.
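
For reference, a quick sketch of the standard maximum entropy derivation behind that claim (Lagrange multipliers; nothing here is specific to the priors debate):

```latex
\max_{p}\; -\int_0^\infty p(x)\,\ln p(x)\,dx
\quad \text{subject to} \quad
\int_0^\infty p(x)\,dx = 1 , \qquad
\int_0^\infty x\,p(x)\,dx = \mu .
```

Stationarity of the Lagrangian gives $-\ln p(x) - 1 - \lambda_0 - \lambda_1 x = 0$, so $p(x) \propto e^{-\lambda_1 x}$, and the two constraints then force $p(x) = \tfrac{1}{\mu}\,e^{-x/\mu}$: the exponential, for whatever value the otherwise arbitrary mean $\mu$ takes.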

Also, re having a distribution that increases over a fixed time interval giving a peak at the end, I agree that this kind of thing is simple, but note that since we are actually very uncertain over when that interval ends, that peak gets very smeared out. Enough so that I don't think there is a peak at the end at all when the distribution is denominated in years (rather than centiles through human history or something). That said, it could turn into a peak in the middle, depending on the nature of one's distribution over durations.

Comment by toby_ord on Are we living at the most influential time in history? · 2019-09-16T10:06:00.848Z · score: 6 (5 votes) · EA · GW

I don't think I'm building in any assumptions about living extremely early -- in fact, I think it makes as few assumptions about that as possible. The prior you get from LLS or from Gott's doomsday argument says the median number of people to follow us is as many as have lived so far (~100 billion), that we have an equal chance of being in any quantile, and so, for example, we only have a 1 in a million chance of living in the first millionth. (Though note that since each order of magnitude contributes an equal expected value and there are infinitely many orders of magnitude, the expected number of people is infinite / has no mean.)

Comment by toby_ord on Are we living at the most influential time in history? · 2019-09-16T09:57:50.337Z · score: 3 (3 votes) · EA · GW

You are right that having a fuzzy starting point for when we started drawing from the urn causes problems for Laplace's Law of Succession, making it less appropriate without modification. However, note that in terms of people who have ever lived, there isn't that much variation as populations were so low for so long, compared to now.

I see your point re 'arbitrary superlatives', but am not sure it goes through technically. If I could choose a prior over the relative timescale from the beginning to the final year of humanity, I would intuitively have peaks at both ends. But denominated in years, we don't know where the final year is, and our distribution over this smears that second peak out over a long time. This often leaves us with just the initial peak and a monotonic decline (though not necessarily of the functional form of LLS). That said, this interacts with your first point, as the beginning of humanity is also vague, smearing that peak out somewhat too.

Comment by toby_ord on Are we living at the most influential time in history? · 2019-09-16T09:49:49.792Z · score: 2 (2 votes) · EA · GW

That's interesting. Earlier I suggested that a mixture of different priors that included some like mine would give a result very different from yours. But you are right to say that we can interpret this in two ways: as a mixture of ur-priors, or as a mixture of the priors we get after updating on the length of time so far. I was implicitly assuming the latter, but maybe the former is better, and it would indeed lessen or eliminate the effect I mentioned.

Your suggestion is also interesting as a general approach: choosing a distribution over these Beta distributions instead of debating between certainty in (0,0), (0.5,0.5), and (1,1). For some distributions over the Beta parameters, the maths is probably quite tractable. That might be an answer about the right meta-rational approach rather than the right rational approach, or something, but it does seem nicely robust.

Comment by toby_ord on Are we living at the most influential time in history? · 2019-09-10T09:16:17.709Z · score: 8 (4 votes) · EA · GW

Quite high. If you think it hasn't happened yet, then this is a problem for my prior that Will's doesn't have.

More precisely, the argument I sketched gives a prior whose PDF decays roughly as 1/n^2 (which corresponds to the chance of it first happening in the next period, after n periods without it, decaying as ~1/n). You might be able to get some tweaks to this such that the event is less likely than not to have happened by now, but I think the cleanest versions predict it would have happened by now. The clean version of Laplace's Law of Succession, measured in centuries, says there would only be a 1/2,001 chance it hadn't happened before now, which reflects poorly on the prior, but I don't think it quite serves to rule it out. If you don't know whether it has happened yet (e.g. you are unsure of things like Will's Axial Age argument), this would give some extra weight to that possibility.
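
A quick check of that 1/2,001 figure under the same assumptions (a uniform prior on a constant per-century hazard, and 2,000 centuries so far):

```python
# Under a uniform prior on the constant per-century hazard p, the
# chance of no event in n centuries is the integral over [0, 1] of
# (1 - p)^n dp, which equals 1 / (n + 1).
n = 2000  # centuries Homo sapiens has been around

exact = 1 / (n + 1)  # = 1/2001

# Crude numeric check of the same integral (midpoint rule).
steps = 1_000_000
numeric = sum((1 - (i + 0.5) / steps) ** n for i in range(steps)) / steps

print(exact, numeric)  # both ~0.00049975
```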

Comment by toby_ord on Are we living at the most influential time in history? · 2019-09-06T20:57:46.608Z · score: 163 (65 votes) · EA · GW

Hi Will,

It is great to see all your thinking on this down in one place: there are lots of great points here (and in the comments too). Explaining your thinking so clearly makes it much easier to see where one departs from it.

My biggest departure is on the prior, which actually does most of the work in your argument: it creates the extremely high bar for evidence, which I agree probably couldn’t be met. I’ve mentioned before that I’m quite sure the uniform prior is the wrong choice here and that this makes a big difference. I’ll explain a bit about why I think that.

As a general rule, if you have a domain like this that extends indefinitely in one direction, the correct prior is one that diminishes as you move further away in that direction, rather than picking a somewhat arbitrary end point and using a uniform prior up to it. People do take this latter approach in scientific papers, but I think it is usually wrong to do so. Moreover, in your case in particular, there are also good reasons to suspect that the chance of a century being the most influential should diminish over time. Especially because there are important kinds of significant event (such as value lock-in or an existential catastrophe) where early occurrence blocks out later occurrence.

This directly leads to diminishing credence over time. e.g. if there is a known constant chance of such a key event happening in any century, conditional on it not having happened before then, the chance it first happens in any given century diminishes exponentially as time goes on. Or if this chance is unknown and could be anything between zero and one, then instead of an exponential decline, it diminishes more slowly (analogous to Weitzman discounting). The most famous model of this is Laplace's Law of Succession, where if your prior for the unknown constant hazard rate per time period is uniform on the interval between 0 and 1, then the chance it happens in the nth period, if it hasn't before, is 1/(n+2) — a hyperbola. I think hazard rates closer to zero and one are more likely than those in between, so I prefer the bucket-shaped Jeffreys prior (= Beta(0.5, 0.5) for the maths nerds out there), which gives a different hyperbola of 1/(2n+2) (and makes my case a little bit harder than if I'd settled for the uniform prior).

A raw application of this would say that since Homo sapiens has been around for 2,000 centuries (without, let us suppose, having had such a one-off critical time yet), the chance it happens this century is 1 in 2,002 (or 1 in 4,002). [Actually I'll just say 1 in 2,000 (or 1 in 4,000), as the +2 is just an artefact of how we cut up the time periods and can be seen to go to zero when we use continuous time.] This is a lot more likely than your 1 in a million or 1 in 100,000. And it gets even more so when you run it in terms of persons or person-years (as I believe you should), i.e. measure time with a clock that ticks as each lifetime ends, rather than one that ticks each second. e.g. about 1/20th of all people who have ever lived are alive now, so the next century is not really 1/2,000th of human history but more like 1/20th of it. On this clock and with this prior, one would expect a 1/20 (or 1/40) chance of a pivotal event (first) occurring.
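
Both sets of numbers are just the Beta-Bernoulli posterior mean; a minimal sketch (the 20-period person-clock is the rough 1/20th figure from the comment):

```python
def next_period_chance(n, a=1.0, b=1.0):
    """Chance the event first happens in the next period, given n
    periods without it, under a Beta(a, b) prior on the constant
    per-period hazard: the posterior mean a / (a + b + n)."""
    return a / (a + b + n)

centuries = 2000  # ~200,000 years of Homo sapiens
print(next_period_chance(centuries))            # uniform prior: 1/2002
print(next_period_chance(centuries, 0.5, 0.5))  # Jeffreys prior: 1/4002

# On the person-clock, this century is ~1/20th of human history so far.
print(next_period_chance(20))            # ~1/22, i.e. roughly 1/20
print(next_period_chance(20, 0.5, 0.5))  # ~1/42, i.e. roughly 1/40
```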

Note that while your model applied a kind of principle of indifference uniformly across time, saying each century was equally likely (a kind of outside view), my model makes similar-sounding assumptions. It assumes that each century is equally likely to have such a high-stakes pivotal event (conditional on it not already having happened), and if you do the maths, this also corresponds to each order of magnitude of time having an equal (unconditional) chance of the pivotal event happening in it (i.e. instead of equal chance in century 1, century 2, century 3… it is equal chance in centuries 1 to 10, centuries 10 to 100, centuries 100 to 1,000), which actually seems more intuitive to me. Then there is the wrinkle that I don't assign it across clock time, but across persons or person-years (e.g. where I say 'century' you could read it as '1 trillion person-years'). All these choices are inspired by very similar motivations to how you chose your prior.

[As an interesting side-note, this kind of prior is also what you get if you apply Richard Gott’s version of the Doomsday Argument to estimate how long we will last (say, instead of the toy model you apply), and this is another famous way of doing outside-view forecasting.]

I doubt I can easily convince you that the prior I've chosen is objectively best, or even that it is better than the one you used. Prior choice is a bit of an art, rather like the choice of axioms. But I hope you can see that it does show that the whole thing comes down to whether you choose a prior like the one you used, or another reasonable alternative. My prior gives a prior chance of HoH of about 5% or 2.5%, which is thousands of times more likely than yours, and can easily be bumped up by the available evidence to probabilities >10%. So your argument doesn't do well on sensitivity analysis over prior choice. Additionally, if you didn't know which of these priors to use and used a mixture with mine weighted in to a non-trivial degree, this would also lead to a substantial prior probability of HoH. And this is only worse if, instead of using a 1/n hyperbola like I did, you had arguments that it declined more quickly, like 1/n^2 or an exponential. So your argument only goes through if you are very solidly committed to a prior like the one you used.

Comment by toby_ord on Are we living at the most influential time in history? · 2019-09-06T14:55:54.474Z · score: 10 (8 votes) · EA · GW

Thanks Pablo, I also didn't know he had claimed this at the very time he was introducing population ethics and extinction risk.

Comment by toby_ord on The value of money going to different groups · 2017-05-22T14:46:21.513Z · score: 4 (4 votes) · EA · GW

I think it is mainly from individuals' explicit preferences over hypothetical gambles for income streams. e.g. if you are indifferent between a sure salary of $50,000 p.a. and a 50-50 gamble between a salary of $25,000 and one of $100,000, then that fits logarithmic utility (eta = 1). Note that while people's intuitions about such cases are far from perfect (e.g. they will have status quo bias), this methodology is actually very similar to that of QALYs/DALYs. But I imagine all the methods you mention are used, and other methods, such as happiness surveys, give results in the same ballpark. If asking about ideal societal distribution, that is actually a somewhat different question, as there could be additional moral reasons in favour of equality, or priority to the worst off, on top of diminishing marginal utility effects. Eta is typically intended to set aside such issues, though there are other tests to measure those.
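
A minimal sketch of that calibration, using the isoelastic (CRRA) utility family that eta parameterises (the function names are mine, for illustration):

```python
from math import exp, log

def crra_utility(c, eta):
    """Isoelastic utility u(c) = c^(1-eta) / (1-eta); eta = 1 is the
    logarithmic limit."""
    return log(c) if eta == 1 else c ** (1 - eta) / (1 - eta)

def certainty_equivalent(lottery, eta):
    """The sure income with the same expected utility as the lottery,
    given as (probability, income) pairs."""
    eu = sum(p * crra_utility(c, eta) for p, c in lottery)
    return exp(eu) if eta == 1 else ((1 - eta) * eu) ** (1 / (1 - eta))

gamble = [(0.5, 25_000), (0.5, 100_000)]
for eta in (0.5, 1.0, 1.5):
    print(eta, round(certainty_equivalent(gamble, eta)))
# eta = 1 gives exactly 50,000, so indifference to the sure salary in
# the comment pins down logarithmic utility; higher eta (more curvature)
# gives a lower certainty equivalent.
```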

Comment by toby_ord on Celebrating All Who Are in Effective Altruism · 2016-01-21T11:45:26.012Z · score: 16 (16 votes) · EA · GW

The terms 'softcore EAs' and 'hardcore EAs' are simply terrible. I strongly urge people to use other words to talk about these groups.

Comment by toby_ord on Why the triviality objection to EA is beside the point · 2015-07-29T07:44:54.673Z · score: 3 (2 votes) · EA · GW

Thanks Stefan, this is a very good point.

Comment by toby_ord on The 2014 Survey of Effective Altruists: Results and Analysis · 2015-03-17T20:30:56.999Z · score: 3 (7 votes) · EA · GW

Thanks for sharing such detailed thoughts on this Greg. It is so useful to have people with significant domain expertise in the community who take the time to carefully explain their concerns.

Comment by toby_ord on Certificates of impact · 2014-11-20T11:12:36.658Z · score: 6 (5 votes) · EA · GW

You might be interested in:

http://en.wikipedia.org/wiki/Health_Impact_Fund

http://en.wikipedia.org/wiki/Social_impact_bond

These are practical prize-type solutions along similar lines.

Comment by toby_ord on Kidney donation is a reasonable choice for effective altruists and more should consider it · 2014-11-19T15:18:08.958Z · score: 4 (6 votes) · EA · GW

I'm inclined to agree with Ryan's argument here. One way I look at it is that I wouldn't donate a kidney in order to get $2,000 (whether that was to be spent on myself or donated to effective charities), or equivalently, I am prepared to pay $2,000 to keep my second kidney. This means that, for me at least, donating a kidney is dominated by making extra donations.

I am surprised that it comes out as close as it does, though. If we didn't have such effective charities, kidney donation would be a great option.

Comment by toby_ord on An interactive webpage showing how to donate to effective charities tax-deductibly in any country · 2014-11-19T14:48:22.907Z · score: 3 (5 votes) · EA · GW

Thanks Tom, this looks great. I'd broaden it out to include Giving What We Can's recommended charities.

Comment by toby_ord on How to leave money to charity in your will: a simple guide · 2014-10-31T15:53:06.797Z · score: 8 (8 votes) · EA · GW

Thanks for writing this up! I should note that Giving What We Can also has a good way of doing this: you can leave the money to the Giving What We Can Trust. By default this will be allocated between the charities Giving What We Can recommends, but you can also specify other charities (in any country) that are global poverty related. Indeed, one can also make one's annual donations via the Giving What We Can Trust, allowing UK taxpayers to get Gift Aid on the entire donation, even if part or all of it goes to a charity abroad. I do this and I also find that it simplifies the giving process (I don't have to look up the donation pages for the six or so charities that I split my donations between). Since I've pretty much explained everything about the trust, I'll also mention that money donated into it legally cannot be used to support Giving What We Can itself.

Comment by toby_ord on Should Giving What We Can change its Pledge? · 2014-10-24T10:41:16.812Z · score: 8 (8 votes) · EA · GW

Thanks Gregory, that's a very helpful set of arguments.

Comment by toby_ord on Should Giving What We Can change its Pledge? · 2014-10-23T11:54:28.046Z · score: 7 (7 votes) · EA · GW

I don't think it is accurate to say that it includes 'many' MIRI donors, at least not compared to its total of 644 members. Note that MIRI was listed 42nd out of the 43 listed charities in order of how much members have donated to them, which seems about as marginal as it could be. In addition, the list of charities that our members have donated to is not supposed to be any kind of endorsement by Giving What We Can. We allow members to donate their pledged amounts anywhere, so long as it is a sincere interpretation of the pledge.

Comment by toby_ord on Should Giving What We Can change its Pledge? · 2014-10-23T10:41:15.233Z · score: 9 (9 votes) · EA · GW

I wouldn't see this as 'determination to push this through'. It is very much still in the information gathering stage.

Comment by toby_ord on Should Giving What We Can change its Pledge? · 2014-10-23T10:39:13.800Z · score: 4 (4 votes) · EA · GW

Note that you could certainly include contributions to R&D for infectious diseases as part of the existing GWWC pledge. GWWC doesn't have any recommendations in that area, but we certainly see it as a plausibly very effective way of helping. The same is presumably true of your other examples. Anything J-PAL or IPA promote as effective is probably well worth looking into. I personally donate to both J-PAL and IPA themselves.

Comment by toby_ord on Should Giving What We Can change its Pledge? · 2014-10-23T10:34:10.460Z · score: 6 (6 votes) · EA · GW

I don't think this need stop you from taking the pledge. We think of it like making a promise to do something. It is perfectly reasonable to promise to do something (say to pick up a friend's children from school) even if there is a chance you will have to pull out (e.g. if you got sick). We don't usually think of small foreseeable chances of having to pull out as a reason not to make promises, so I wouldn't worry about that here. I think this is mentioned on our FAQ page -- if not, it should be.

Another approach is to make sure you have enough health insurance (possibly supplementing your country's public insurance, though I don't think that is needed in the UK), and maybe income insurance too. It should be possible to have enough of both kinds and still donate 10%.

Comment by toby_ord on Effective Altruism is a Question (not an ideology) · 2014-10-20T07:37:14.655Z · score: 4 (4 votes) · EA · GW

The term 'effective altruism' was created before the FB group, but I think Evan is referring to the fact that the FB group uses the 'ist' form rather than the 'ism' form and is the most prominent thing to do so. I think it would have been an improvement if it had used the 'ism' form (and it is no coincidence that this forum does).

Comment by toby_ord on Open Thread 3 · 2014-10-18T21:35:00.380Z · score: 5 (5 votes) · EA · GW

If you go to http://saos.fec.gov/saos/searchao? and search for "repledge", you will find the legal opinion the FEC gave to the people behind Repledge. It was evenly split 3-3 over whether Repledge would count as a conduit or intermediary for campaign donations (which appears not to be allowed). This seems to be what made the Repledge people halt what looked like a very successful launch (look for them on YouTube, for example). This opinion could be useful if you are trying to do something like this. If you are serious about it, you may want to contact the person behind Repledge (Eric Zolt) for more details.

You may also want to read my paper: http://www.amirrorclear.net/academic/papers/moral-trade.pdf

Comment by toby_ord on Open Thread 3 · 2014-10-18T21:21:00.903Z · score: 0 (0 votes) · EA · GW

We certainly talk about this a lot at FHI and do a fair amount of research and policy work on it. CSER is also interested in synthetic biology risk. I agree that it is talked about a lot less in wider EA circles though.

Comment by toby_ord on Why is effective altruism new and obvious? · 2014-10-03T13:29:49.986Z · score: 5 (5 votes) · EA · GW

I agree with (1) and (3), but I don't think (2) played a large role. Regarding (1), I think that the conceptual development of QALYs (which DALYs largely copied) was as important as the randomisation, since it began to allow like-for-like comparisons across much wider areas.

Comment by toby_ord on On Media and Effective Altruism · 2014-10-03T13:23:39.472Z · score: 1 (1 votes) · EA · GW

I agree with this. I should clarify that the type of thing I am generally concerned about is coming off as too abrasive, too negative, too amateurish, or too associated with legal but disliked ideas that aren't part of our core considerations.

Comment by toby_ord on Lawyering to Give · 2014-09-30T14:16:23.529Z · score: 2 (2 votes) · EA · GW

Great work!

Comment by toby_ord on On Media and Effective Altruism · 2014-09-30T14:13:26.450Z · score: 3 (3 votes) · EA · GW

While I agree with a lot of what you wrote, I disagree about the 'all publicity is good if large enough' idea.

You are entirely correct that you can get some good help from people at one end of the curve, and at the start this often feels like all that matters. For example, a company might think that if no-one currently knows about them, then all publicity is good, since people can't reduce their purchasing from zero, but some might increase it. However, if something is going to become reasonably big regardless of the coverage, then bad coverage can have bad effects. This is true even if one's own organisation is small, as the coverage can reflect badly on related organisations with similar goals (such as the rest of the EA movement).

Bad publicity and bad first impressions can last a long time, and people looking for sensation can quite easily trawl through past coverage for the one bad sensational thing. If something inadvertently damaged the reputation of effective altruism, that would be a bad effect; if the damage was very high, that would be a really terrible effect. Taking risks with this public good of the movement's reputation is something we should really discourage.

All of this means that as an organisation or movement starts to get bigger, it should become much more conservative about reputational issues like this, though exactly where to draw the line is unclear. For what it's worth, your example of the Rhys Southan article seemed to me to be on the right side of the line, and the transhumanist one seemed to me to be roughly neutral.

Comment by toby_ord on Brainstorming thread: ideas for large EA funders · 2014-09-30T13:08:24.976Z · score: 2 (2 votes) · EA · GW

I'd be very interested in seeing a post from you on this. I don't think it is obvious that:

double especially effective altruists should be investing in prizes more on the margin

as it might be that the ideal investment in prizes at this stage is still zero, even if that would change later. The counterfactuals with prizes are actually quite hard to evaluate -- you could easily have no effect if the work was going to be done anyway (I think this bites harder here than in more typical granting). I'd love to see a well-considered article on prizes, taking concerns like this into account.

Comment by toby_ord on Should You Move to a High Cost of Living City? · 2014-09-26T14:38:41.040Z · score: 3 (3 votes) · EA · GW

I think it's probably easier to directly find which cities have the highest salaries for your line of work, than to research which have the highest cost-of-living and hope that this correlates with a high salary for your line of work.

This seems right to me. While I like the original post, I think it makes the point seem more counterintuitive than it needs to be. Compare with:

Should you move to a city where you can earn more? Sadly cost of living will typically increase, but for people who are donating or saving a lot, this will be outweighed by the additional earnings.

Comment by toby_ord on Lawyering to Give · 2014-09-26T08:31:44.027Z · score: 7 (9 votes) · EA · GW

If someone could write a piece making these points in the Law Record or Above the Law, that might be very useful. Especially if they defused the issues from the original piece that Stefan mentions in his comment (for example, mentioning that Mystal's argument is an understandable reaction to a provocative piece before explaining that the original argument is actually very strong and that Mystal's response doesn't address the underlying logic at all). I don't know whether one would have to be a legal professional/student in order to write this though. Perhaps Harvard EA knows some Harvard students well placed to co-author something like this?

Comment by toby_ord on How to Make Your Article Have Consistent Formatting · 2014-09-26T08:27:13.338Z · score: 2 (2 votes) · EA · GW

For advice on formatting comments correctly, click the 'Show help' button at the bottom right of the comment box. It tells you how to do italics, bold, links, bullet points, and (most importantly?) how to do block quotes from the comment you are replying to.

Comment by toby_ord on Introduce Yourself · 2014-09-24T10:38:38.601Z · score: 2 (2 votes) · EA · GW

Thanks for the detailed introduction Evan. Any of those bullet points would make interesting openings for discussion on an open thread (or could be elaborated into a post).

Comment by toby_ord on Introduce Yourself · 2014-09-23T15:14:49.993Z · score: 9 (9 votes) · EA · GW

Hello, I'm Toby. I've been involved with effective altruism since 2005 when I started putting together the ideas behind Giving What We Can. I co-founded Giving What We Can in 2009 and the Centre for Effective Altruism in 2011. I'm currently the president of the former and a trustee of the latter. I am a Research Fellow at the Future of Humanity Institute at Oxford University. I work on global poverty, global health, global priority setting, and existential risk. This involves academic research as well as policy work with governments, NGOs, and foundations. I am just about to begin a grant on population ethics, so I'll be spending more of my academic time on understanding the theoretical and practical issues surrounding how we should value future generations.

Comment by toby_ord on Open Thread · 2014-09-22T09:52:31.903Z · score: 1 (1 votes) · EA · GW

You might be interested in this chapter on global poverty, utilitarianism, Christian ethics and Peter Singer that I wrote for a Cambridge University Press volume.

http://www.amirrorclear.net/academic/papers/global-poverty.pdf

Comment by toby_ord on Minor Updates · 2014-09-19T10:20:30.404Z · score: 1 (1 votes) · EA · GW

Though ideally not in pink.

Comment by toby_ord on Disability Weights · 2014-09-19T10:09:58.424Z · score: 4 (4 votes) · EA · GW

This is for the reason I outlined in another comment: the revised ranking uses people's judgments of which state is more 'healthy', rather than how good the states are. People don't think that being infertile makes someone much less 'healthy', even though they think it is bad. The same goes for reductions in intelligence, such as via lead poisoning.

Comment by toby_ord on Disability Weights · 2014-09-13T08:51:29.113Z · score: 0 (0 votes) · EA · GW

"it just means it's an easy comparison for people to make"

Yes, I completely agree and would have spelled it out just like you did in my comment further down if I'd had the time.

Comment by toby_ord on Disability Weights · 2014-09-12T12:25:24.830Z · score: 2 (2 votes) · EA · GW

You are correct. You can't really turn the ordinal data into a cardinal ordering, just into a kind of proxy ordering that has some cardinal structure, which might not correspond to the cardinal structure we care about. For example, if 'perfect health' were added and 100% of people ranked it above the other choices, it would end up very far (possibly infinitely far) from the nearest option on the cardinal scale. What the method is really measuring is the amount of disagreement about options at each part of the ordering, which is a proxy for the closeness of the health levels, but there are cases, like 'perfect health' versus something slightly worse, where the levels are close yet there is no disagreement.

Comment by toby_ord on Disability Weights · 2014-09-12T11:46:48.406Z · score: 10 (10 votes) · EA · GW

Thanks for this great summary Jeff!

Here are a couple of comments:

1) I happen to know quite a bit about the rationale behind the GBD 2010 method, as I was involved near the end of the process. It is designed to avoid talking about evaluative questions of the quality or value of life and to talk only about the descriptive question of the level of health -- something on which doctors can more plausibly claim expertise. This change avoids certain critiques of the method, but I and many other philosophers and economists think that it is quite a bit worse overall, and possibly incoherent. For effective altruism at least, we only care about health states in order to answer normative questions about which option to choose, and there it is the evaluative measures of quality that matter. Notably, these come apart from the descriptive ones in cases like intellectual disability and infertility: people don't rate those with these conditions as much less healthy, but they do agree that their lives are made quite a bit worse. When things come apart like this, it is the badness in people's lives that should matter. I was more sanguine about this before I read your article, as I'd heard there was at least a strong correlation between these new numbers and the old ones, but your quantitative correlation chart shows that it is not that strong. I'd thus use one of the earlier approaches, or one of the many QALY-type approaches that have been done in parallel with these DALY ones.

2) Regarding scepticism about the weightings, there is no sensible option but to use them (well, one version or another of them). Using one's own intuition about how bad two health states are is obviously worse than at least one of the current aggregate measures, as is treating all ill-health as equally bad. Rejecting these aggregate health quality weights means using some other form of health quality weight, which will be worse. It is OK to think that these quality weights introduce another level of noise into cost-effectiveness estimates -- they do! -- but we have no better option than to use them. Also, the noise introduced is not all that much compared to the signal (I'd say it introduces less than a factor of 2, when the data show many things separated by factors of 100 or more), so the results can still be used for many purposes.

Comment by toby_ord on Disability Weights · 2014-09-12T11:30:05.735Z · score: 3 (3 votes) · EA · GW

Yes, I thought this too. It is something like using Elo scores to generate cardinal rankings for chess players from the ordinal data of their previous match results.
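
For readers unfamiliar with the idea, a minimal sketch of Elo-style updating (standard chess constants; treating pairwise health-state comparisons as 'match results' is my illustration, not part of the GBD method):

```python
def expected_score(r_a, r_b):
    """Elo's logistic win probability for A against B."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, score_a, k=32):
    """Move both ratings toward the observed ordinal outcome
    (score_a is 1 for a win, 0.5 for a draw, 0 for a loss by A)."""
    delta = k * (score_a - expected_score(r_a, r_b))
    return r_a + delta, r_b - delta

# Repeated pairwise 'which is worse?' judgments play the role of
# match results: updating on purely ordinal outcomes yields a
# cardinal scale (up to the arbitrary origin and 400-point unit).
a, b = 1500.0, 1500.0
for outcome in (1, 1, 0.5, 1):  # A 'beats' B in most comparisons
    a, b = update(a, b, outcome)
print(round(a), round(b))  # A drifts above B on the cardinal scale
```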

Comment by toby_ord on Cosmopolitanism · 2014-09-10T15:08:03.207Z · score: 8 (8 votes) · EA · GW

"When they do, they call it “impartiality,” which is too vague. Judges, juries, journalists, and so on (to limit myself to the “j”s) are expected to be impartial in a sense, but only in specific domains."

Philosophers would say 'impartial benevolence', which is clearer about the domain of impartiality. I think we have found the 'benevolence' part unnecessary to add, given that the topic is altruism, which is roughly equivalent to benevolence (i.e. helping others). I think 'impartiality' works OK, so long as it is understood as one of the properties that define what is different about our type of altruism.

Comment by toby_ord on A relatively atheoretical perspective on astronomical waste · 2014-08-14T15:59:00.000Z · score: 1 (1 votes) · EA · GW

Assuming I'm understanding the principle of Scale correctly, I would have thought that the Average View is an example of something where Scale holds but Separability fails, as it seems that whenever Scale is applied, the population is the same size in both cases (via a suppressed other-things-equal clause).

Comment by toby_ord on Parenthood and effective altruism · 2014-04-22T17:35:00.000Z · score: 0 (0 votes) · EA · GW

Brian, there are several serious sampling biases in most estimates of long run real returns which tend to overestimate the returns. These include:

(1) Time selection bias. The 20th century was unprecedentedly good for stocks. If we instead averaged over wider periods, or over the 21st century so far, we would get much lower numbers. It is unclear what the best period to use is, but many estimates use the most optimistic one, which is suspect.

(2) Country selection bias. The US has done unprecedentedly well with stocks. International comparisons give lower returns and are probably more representative of the future (we don't know which country will do best this time round).

(3) Within-index selection bias. The major indices track the top stocks rather than a fixed set, so, for example, if all the stocks in the S&P 500 went to zero tomorrow, this would really change the real rate of return, but wouldn't change the index much, as the next 500 stocks would replace them -- we need to adjust for that.

(4) Between-exchange selection bias. Even attempts to adjust for country selection bias by using a range of stock markets or indices in different countries often overestimate returns, because failed stock markets typically don't appear in the later data, having ceased to exist. One needs to adjust carefully for this.

I don't recall the exact real returns when these things are adjusted for and can't quickly find a good estimate, but I seem to recall it comes down to less than 3%. If someone has a pointer to a good estimate, I'd love to see it.

Regarding risk adjustment, I didn't mean risk aversion, just that you have to adjust for the chance of losses as well as gains to get an expected rate. Any sensible aggregate will do this.

Comment by toby_ord on Parenthood and effective altruism · 2014-04-16T17:47:00.000Z · score: 1 (1 votes) · EA · GW

Hi Pablo,

You are right that the £2,000 per year per parent lifetime cost would be better if it included an adjustment for the fact that the costs aren't evenly distributed over that timespan. However, they are distributed over twenty years of it, and I think the amortization calculator you used assumes it is all paid as a lump sum in year one. I set up a spreadsheet to allocate the costs evenly over the first twenty years and then looked for the level annual amount over the whole 50 years with the same present value. This was £3,000 per parent per year, which is higher than the £2,000 but quite a bit less than the £4,700. It is still not perfect, as the costs are skewed a bit towards the early and late years, but it should be pretty close to the right model.
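
A minimal sketch of that present-value comparison, assuming a £100,000 total per parent (the £2,000/year over 50 years); the spreadsheet in the comment used somewhat different cost timing, so its £3,000 and £2,500 differ a little from these outputs:

```python
def present_value(payment, years, rate):
    """Present value of a level payment at the end of each year."""
    return payment * (1 - (1 + rate) ** -years) / rate

def level_equivalent(total_cost, cost_years, spread_years, rate):
    """Level annual payment over spread_years with the same present
    value as total_cost spread evenly over the first cost_years."""
    pv = present_value(total_cost / cost_years, cost_years, rate)
    return pv * rate / (1 - (1 + rate) ** -spread_years)

# Assumed: £100,000 per parent, paid over the first 20 of 50 years.
for rate in (0.05, 0.03):
    print(rate, round(level_equivalent(100_000, 20, 50, rate)))
# ~£3,413/year at 5% and ~£2,891/year at 3%: in the same ballpark as
# the £3,000 and £2,500 figures, between the £2,000 and £4,700 ones.
```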

I also think that 5% above inflation is substantially higher than the best estimates of the risk-adjusted rate of return. Using 3%, the cost per annum drops to £2,500, which is pretty close to the original unadjusted estimate.

(Note that you might want something even higher than 5% if you would really like to spend/donate money a lot sooner, but if so, you should also be taking out loans in order to donate more sooner, and I've never met anyone doing that.)