Posts

Which World Gets Saved 2018-11-09T18:08:24.632Z · score: 87 (48 votes)
RPTP Is a Strong Reason to Consider Giving Later 2018-10-01T16:27:22.590Z · score: 13 (12 votes)

Comments

Comment by trammell on Should I claim COVID-benefits I don't need to give to charity? · 2020-05-15T15:47:04.054Z · score: 14 (4 votes) · EA · GW

I don't know what counts as a core principle of EA exactly, but most people involved with EA are quite consequentialist.

Whatever you should in fact do here, you're unlikely to find a public recommendation that you be dishonest. On purely consequentialist grounds, after accounting for the value of the EA community's reputation and so on, which community guidelines (and which EA Forum advice) do you think would be better to write: those that go out of their way to emphasize honesty, or those that sound more consequentialist?

Comment by trammell on Existential Risk and Economic Growth · 2020-05-12T11:08:46.856Z · score: 1 (1 votes) · EA · GW

I'm just putting numbers to the previous sentence: "Say the current (instantaneous) hazard rate is 1% per century; my guess is that most of this consists of (instantaneous) risk imposed by existing stockpiles of nuclear weapons, existing climate instability, and so on, rather than (instantaneous) risk imposed by research currently ongoing."

If "most" means "80%" there, then halting growth would lower the hazard rate from 1% to 0.8%.

Comment by trammell on Existential Risk and Economic Growth · 2020-05-10T17:02:19.113Z · score: 2 (2 votes) · EA · GW

Hey, thanks for engaging with this, and sorry for not noticing your original comment for so many months. I agree that in reality the hazard rate at t depends not just on the level of output and safety measures maintained at t but also on "experiments that might go wrong" at t. The model is indeed a simplification in this way.

Just to make sure something's clear, though (and sorry if this was already clear): Toby's 20% hazard rate isn't the current hazard rate; it's the hazard rate this century, but most of that is due to developments he projects occurring later this century. Say the current (instantaneous) hazard rate is 1% per century; my guess is that most of this consists of (instantaneous) risk imposed by existing stockpiles of nuclear weapons, existing climate instability, and so on, rather than (instantaneous) risk imposed by research currently ongoing. So if stopping growth would lower the hazard rate, it would be a matter of moving from 1% to 0.8% or something, not from 20% to 1%.

Comment by trammell on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T12:54:53.754Z · score: 9 (7 votes) · EA · GW

This paper is also relevant to the EA implications of a variety of person-affecting views. https://globalprioritiesinstitute.org/wp-content/uploads/2020/Teruji_Thomas_asymmetry_uncertainty.pdf

Comment by trammell on Phil Trammell: The case for ignoring the world’s current problems — or how becoming a ‘patient philanthropist’ could allow you to do far more good · 2020-04-16T22:14:55.273Z · score: 3 (2 votes) · EA · GW

Glad you liked it, and thanks for the good questions!

#1: I should definitely have spent more time on this / been more careful explaining it. Yes, x-risks should “feed straight into interest rates”, in the sense that a +1% chance of an x-risk per year should mean a 1% higher interest rate. So if you’re going to be

  • spending on something other than x-risk reduction; or
  • spending on x-risk reduction but only able to marginally lower the risk in the period you’re spending (i.e. not permanently lower the rate), and think that there will still be similar risk to mitigate in the next period conditional on survival,

then you should be roughly compensated for the risk. That is, under those circumstances, if investing seemed preferable to spending in the absence of the heightened risk, it should still seem that way given the heightened risk. This does all hold despite the fact that the heightened risk would give humanity such a short life expectancy.

But I totally grant that these assumptions may not hold, and that if they don’t, the heightened risk can be a reason to spend more! I just wanted to point out that there is this force pushing the other way that turns out to render the question at least ambiguous.
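A minimal sketch of the compensation mechanism, with all symbols hypothetical: let r_0 be the interest rate absent the extra risk, δ the added annual probability of catastrophe (which pushes the market rate up to roughly r_0 + δ), and u the per-dollar value of spending, assumed similar across periods conditional on survival. Then the expected value of investing a dollar for a year and spending it then is

$$(1-\delta)(1 + r_0 + \delta)\,u = \bigl(1 + r_0 - \delta(r_0 + \delta)\bigr)\,u \approx (1 + r_0)\,u,$$

which, to first order, is what it would have been with δ = 0. So if investing beat spending before the extra risk was added, it still does; the heightened risk and the heightened interest rate roughly cancel.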

#2: No, there’s no reductio here. Once you get big enough, i.e. are no longer a marginal contributor to the public goods you’re looking to fund, the diminishing returns to spending make it less worthwhile to grow even bigger. (E.g., in the human consumption case, you’ll eventually be rich enough that spending the first half of your fund would make people richer to the point that spending the second half would do substantially less for them.) Once the gains from further investing have fallen to the point that they just balance the (extinction / expropriation / etc.) risks, you should start spending, and continue to split between spending and investment so as to stay permanently on the path where you’re indifferent between the two.

If you're looking to fund some narrow thing only one other person's interested in funding, and you're perfectly patient but the other person is about as impatient as people tend to be, and if you start out with funds the same size, I think you'll be big enough that it's worth starting to spend after about fifty years. If you're looking to spend on increasing human consumption in general, you'll have to hold out till you're a big fraction of global wealth--maybe on the order of a thousand years. (Note that this means that you'd probably never make it, even though this is still the expected-welfare-maximizing policy.)

#3: Yes. If ethics turns out to contain pure time preference after all, or we have sufficiently weak duties to future generations for some other reason, then patient philanthropy is a bad idea. :(

Comment by trammell on On Waiting to Invest · 2020-04-11T15:20:49.293Z · score: 1 (1 votes) · EA · GW

Glad you liked it!

In the model I'm working on, to try to weigh the main considerations, the goal is to maximize expected philanthropic impact, not to maximize expected returns. I do recommend spending more quickly than I would in a world where the goal were just to maximize expected returns. My tentative conclusion that long-term investing is a good idea already incorporates the conclusion that it will most likely just involve losing a lot of money.

That is, I argue that we're in a world where the highest-expected-impact strategy (not just the highest-expected-return strategy) is one with a low probability of having a lot of impact and a high probability of having very little impact.

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-10T18:29:57.123Z · score: 1 (1 votes) · EA · GW

At the risk of repetition, I’d say that by the same reasoning, we could likewise add in our best estimates of the effect of saving a life on (just, say) total human welfare up to 2100.

Your response here was that “[p]opulation growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust”. But as with the first beneficiary, we can separate the direct welfare impact of population growth from all its other effects and observe that the former is a part of “sum u_i”, no?

Of course, estimates of shorter-term effects are usually more reliable than those of longer-term effects, for all sorts of reasons; but since we’re not arguing over whether saving lives in certain regions can be expected to increase population size up to 2100, that doesn’t seem to me like the point of dispute in this case.

I’m not sure where we’re failing to communicate exactly, but I’m a little worried that this is clogging the comments section! Let me know if you want to really try to get to the bottom of this sometime, in some other context.

Comment by trammell on On Waiting to Invest · 2020-04-10T16:46:08.808Z · score: 3 (2 votes) · EA · GW

Yup, no disagreement here. You're looking at what happens when we introduce uncertainty holding the absolute expected return constant, and I was discussing what happens when we introduce uncertainty holding the expected annual rate of return constant.

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-10T09:04:17.681Z · score: 3 (2 votes) · EA · GW
> If you give me a causal model, and claim A has a certain effect on B, without justifying rough effect sizes, I am by default skeptical of that claim and treat that like simple cluelessness: B conditional on changing A is identically distributed to B. You have not yet justified a systematic effect of A on B.

What I'm saying is, "Michael: you've given me a causal model, and claimed A (saving lives) has a positive effect on B (total moral value in the universe, given all the indirect effects), without justifying a rough effect size. You just justified a rough effect size on C (value to direct beneficiaries), but that's not ultimately what matters. By default I think A has no systematic effect on B, and you have not yet justified one."

> Is this an example of CC?

Yes, you have CC in that circumstance if you don't have evidential symmetry with respect to X.

Comment by trammell on On Waiting to Invest · 2020-04-10T00:13:39.034Z · score: 9 (7 votes) · EA · GW

Hey, I know that episode : )

Thanks for these numbers. Yes: holding expected returns equal, our propensity to invest should be decreasing in volatility.

But symmetric uncertainty about the long-run average rate of return—or to a lesser extent, as in your example, time-independent symmetric uncertainty about short-run returns at every period—increases expected returns. (I think this is the point I made that you’re referring to.) This is just the converse of your observation that, to keep expected returns equal upon introducing volatility, we have to lower the long-run rate from r to q = r – s^2/2.
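For reference, the lognormal arithmetic behind that correction (a standard fact, not anything specific to the episode): if log returns over a horizon T are normally distributed with mean qT and variance s^2 T, then expected gross returns are

$$\mathbb{E}\!\left[e^{qT + s\sqrt{T}\,Z}\right] = e^{(q + s^2/2)\,T}, \qquad Z \sim \mathcal{N}(0,1),$$

so holding expected returns fixed at e^{rT} while adding volatility s requires q = r − s^2/2; conversely, holding the long-run average (log) rate fixed at r and adding volatility raises expected returns by the factor e^{s^2 T/2}.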

Whether these increased expected returns mean that patient philanthropists should invest more or less than they would under certainty is in principle sensitive to (a) the shape of the function from resources to philanthropic impact and (b) the behavior of other funders of the things we care about; but on balance, on the current margin, I’d argue it implies that patient philanthropists should invest more. I’ll try writing more on this at some point, and apologies if you would have liked a deeper discussion about this on the podcast.

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-09T20:10:09.906Z · score: 1 (1 votes) · EA · GW
> Population growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust.

Suppose for simplicity that we can split the effects of saving a life into

1) benefits accruing to the beneficiary;

2) benefits accruing to future generations up to 2100, through increased size (following from (1)); and

3) further effects (following from (2)).

It seems like you're saying that there's some proposition X such that (3) is overall good if X and bad if not-X, where we can only guess at the probability of X; and that in this circumstance we can say that the overall effect of (2 & 3) is ~zero in expectation.

If that's right, what I'm struggling to see is why we can't likewise say that there's some proposition Y such that (2 & 3) is overall good if Y and bad if not-Y, where we can only guess at the probability of Y, and that the overall effect of (1 & 2 & 3) is therefore ~zero in expectation.

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-09T14:44:34.485Z · score: 1 (1 votes) · EA · GW
> Is the point that I'm confident they're larger in magnitude, but still not confident enough to estimate their expected magnitudes more precisely?

Yes, exactly—that’s the point of the African population growth example.

> Maybe I have a good idea of the impacts over each possible future, but I'm very uncertain about the distribution of possible futures. I could be confident about the sign of the effect of population growth when comparing pairs of counterfactuals, one with the child saved, and the other not, but I'm not confident enough to form distributions over the two sets of counterfactuals to be able to determine the sign of the expected value.

I don’t understand this paragraph. Could you clarify?

I don’t think I understand this either:

> I'm doubting the signs of the effects that don't come with estimates. If I have a plausible argument that doing X affects Y and Y affects Z, which I value directly and the effect should be good, but I don't have an estimate for the effect through this causal path, I'm not actually convinced that the effect through this path isn't bad.

Say you have a plausible argument that pushing a switch (doing X) pulls some number n > 0 of strings (so Y := #strings_pulled goes from 0 to n), each of which releases some food to m > 0 hungry lab mice (so Z := #fed_mice goes from 0 to nm), and you know that X and Y have no other consequences. You know that n, m > 0 but don't have estimates for them. At face value you seem to be saying you’re not convinced that the effect of pushing the switch isn’t bad, but that can’t be right!

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-08T23:29:48.084Z · score: 4 (3 votes) · EA · GW

No worries, sorry if I didn't write it as clearly as I could have!

BTW, I've had this conversation enough times now that last summer I wrote down my thoughts on cluelessness in a document that I've been told is pretty accessible—this is the doc I link to from the words "don't have an expected value". I know it can be annoying just to be pointed off the page, but just letting you know in case you find it helpful or interesting.

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-08T23:15:10.876Z · score: 1 (1 votes) · EA · GW

Hold on—now it seems like you might be talking past the OP on the issue of complex cluelessness. I 1000% agree that changing population size has many effects beyond those I listed, and that we can't weigh them; but that's the whole problem!

The claim is that CC arises when (a) there are both predictably positive and predictably negative indirect effects of (say) saving lives which are larger in magnitude than the direct effects, and (b) you can't weigh them all against each other so as to arrive at an all-things-considered judgment of the sign of the value of the intervention.

A common response to the phenomenon of CC is to say, "I know that the direct effects are good, and I struggle to weigh all of the indirect effects, so the latter are zero for me in expectation, and the intervention is appealing". But (unless there's a strong counterargument to Hilary's observation about this in "Cluelessness" which I'm unaware of), this response is invalid. We know this because if this response were valid, we could by identical reasoning pick out any category of effect whose effects we can estimate—the effect on farmed chicken welfare next year from saving a chicken-eater's life, say—and say "I know that the next-year-chicken effects are bad, and I struggle to weigh all of the non-next-year-chicken effects, so the latter are zero for me in expectation, and the intervention is unappealing".

The above reasoning doesn't invalidate that kind of response to simple cluelessness, because there the indirect effects have a feature—symmetry—which breaks when you cut up the space of consequences differently. But this means that, unless one can demonstrate that the distribution of non-direct effects has a sort of evidential symmetry that the distribution of non-next-year-chicken effects does not, one is not yet in a position to put a sign to the value of saving a life.

So, the response to

> What's the expected value (on net) of the indirect effects to you? Is its absolute value much greater than the direct effects' expected value?

is that, given an inability to weigh all the effects, and an absence of evidential symmetry, I simply don't have an expected value (or even a sign) of the indirect effects, or the total effects, of saving a life.

Does that clarify things at all, or am I the one doing the talking-past?

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-08T21:40:41.540Z · score: 6 (4 votes) · EA · GW

Agreed that, at least from a utilitarian perspective, identity effects aren't what matter and feel pretty symmetrical, and that they're therefore not the right way to illustrate complex cluelessness. But when you say

> you need an example where you can justify that the outcome distributions are significantly different. I actually haven't been convinced that this is the case for any longtermist intervention

—maybe I'm misunderstanding you, but I believe the proposition being defended here is that the distribution of long-term welfare outcomes from a short-termist intervention differs substantially from the status quo distribution of long-term welfare outcomes (and that this distribution-difference is much larger than the intervention's direct benefits). Do you mean that you're not convinced that this is the case for any short-termist intervention?

Even though we don't know the magnitudes of today's interventions' long-term effects, I do think we can sometimes confidently say that the distribution-difference is larger than the direct effect. For instance, the UN's 95% confidence interval is that the population of Africa will multiply by about 3x to 5x by 2100 (here, p.7). One might think their confidence interval should be wider, but I don't see why the range would be upwards-biased in particular. Assuming that fertility in saved children isn't dramatically lower than population fertility, this strikes me as a strong reason to think that the indirect welfare effects of saving a young person's life in Africa today—indeed, even a majority of the effects on total human welfare before 2100—will be larger than the direct welfare effect.

Saving lives might lower fertility somewhat, thus offsetting this effect. But the (tentative) conclusion of what I believe is the only in-depth investigation of this is that there are some regions in which this offsetting is negligible. And note that if those UN projections are any guide, the fertility-lowering effect would have to be not just non-negligible but very close to complete for the direct welfare effects to outweigh the indirect.
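To see why the offset would have to be nearly complete, a toy comparison with invented symbols: let k be the expected number of additional people alive before 2100 per life saved (after any fertility offset), u_saved the direct welfare gain to the beneficiary, and ū the average welfare of one of those additional lives over that period. Then, very roughly,

$$\frac{\text{indirect welfare effect up to 2100}}{\text{direct welfare effect}} \approx \frac{k\,\bar{u}}{u_{\text{saved}}},$$

and treating ū and u_saved as comparable in size, this ratio exceeds 1 whenever k > 1. If fertility among saved children is anywhere near population fertility, the 3x–5x projection suggests k comfortably above 1, so the direct effect dominates only if offsetting pushes k down toward (or below) one.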

Does that seem wrong to you?

Comment by trammell on Is Existential Risk a Useless Category? Could the Concept Be Dangerous? · 2020-04-02T07:36:59.050Z · score: 4 (2 votes) · EA · GW

No.

Comment by trammell on Is Existential Risk a Useless Category? Could the Concept Be Dangerous? · 2020-04-01T10:23:14.578Z · score: 17 (9 votes) · EA · GW

Yeah, agreed that using the white supremacist label needlessly poisons the discussion in both cases.

For whatever it’s worth, my own tentative guess would actually be that saving a life in the developing world contributes more to growth in the long run than saving a life in the developed world. Fertility in the former is much higher, and in the long run I expect growth and technological development to be increasing in global population size (at least over the ranges we can expect to see).

Maybe this is a bit off-topic, but I think it’s worth illustrating that there’s no sense in which the longtermist discussion about saving lives necessarily pushes in a so-called “white supremacist” direction.

Comment by trammell on Is Existential Risk a Useless Category? Could the Concept Be Dangerous? · 2020-03-31T21:46:58.482Z · score: 28 (14 votes) · EA · GW

Thanks for pointing that out!

For those who might worry that you're being hyperbolic, I'd say that the linked paper doesn't say that they are white supremacists. But it does claim that a major claim from Nick Beckstead's thesis is white supremacist. Here is the relevant quote, from pages 27-28:

"As he [Beckstead] makes the point,

>> saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards, at least by ordinary enlightened humanitarian standards, saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.

This is overtly white-supremacist."

The document elsewhere clarifies that it is using the term white supremacism to refer to systems that reinforce white power, not only to explicit, conscious racism. But I agree that this is far enough from how most people use the terminology that it doesn't seem like a very helpful contribution to the discussion.

Comment by trammell on Why not give 90%? · 2020-03-23T14:58:32.045Z · score: 7 (7 votes) · EA · GW

I downvoted the comment because it's off-topic.

Comment by trammell on Phil Trammell: The case for ignoring the world’s current problems — or how becoming a ‘patient philanthropist’ could allow you to do far more good · 2020-03-18T21:11:25.263Z · score: 5 (4 votes) · EA · GW

Thanks!

I'm far from qualified to give career advice to people who are already full-time academics, but I suppose I'd say,

  • If you've just graduated and are looking for a post-doc opportunity, or are coming from outside academia and willing to move to Oxford, then apply to GPI.
  • If you're already an academic elsewhere, then get in touch, come to one of the workshops GPI holds at the end of each Oxford term, and try shifting your research in a GPR direction. (We put together such a long research agenda partly in the hope that lots of interested researchers elsewhere will find something in it that they can get excited about.)
  • If you're a senior enough academic that you could set up a respectable global priorities research center elsewhere, then definitely get in touch! That could turn out to be a great idea, especially if you're an economist at a higher-ranked department than Oxford's. Forethought--GPI's sister org, which funds GPR activity outside of Oxford--would be a place to apply for funding for a project along those lines.
Comment by trammell on Doing good is as good as it ever was · 2020-01-22T22:56:17.916Z · score: 8 (5 votes) · EA · GW

I don't know if there is lower community morale of the sort you describe--you're better positioned to have a sense of that than I am--but to the extent that there is, yes, it seems we disagree about whether to suspect that cluelessness would be a significant factor.

It would be interesting to include a pair of questions on the next EA survey about whether people feel more or less charitably motivated than last year, and, if less, why.

Comment by trammell on Doing good is as good as it ever was · 2020-01-19T14:54:37.988Z · score: 10 (6 votes) · EA · GW

If I'm not misunderstanding you, being less enthusiastic than before just requires (i) (if by "the long-termist thesis" we mean the moral claim that we should care about the long term) and (iii). I don't think that's a lot of requirements. Plus, this is all in a framework of precise expectations; you could also just think that the long-term effects are ambiguous enough to render the expected value undefined, and endorse a decision theory which penalizes this sort of ambiguity.

My guess is that when people start thinking about longtermism and get less excited about ordinary do-gooding, this is often at least in part due either to a belief in (iii) or, more commonly, to the realization of the ambiguity, even when this isn't articulated in detail. That seems likely to me (a) because, anecdotally, it seems relatively common for people to raise concerns along these lines independently after thinking about this stuff for a while and (b) because there has been some push to believe in this ambiguity, namely all the writing on cluelessness. But of course that's just a guess.

Comment by trammell on Doing good is as good as it ever was · 2020-01-18T18:47:36.111Z · score: 41 (21 votes) · EA · GW

I disagree with the common framing that saving lives and so on constitute one straightforward, unambiguous way to do good, and that longtermism just constitutes or motivates some interventions with the potential to do even more good.

It seems to me (and I'm not alone, of course) that concern for the long term renders the sign of the value of most of the classic EA interventions ambiguous. In any event, it renders the magnitude of their value more ambiguous than it is if one disregards flow-through effects of all kinds. If

  • accounting for long term consequences lowers the expected value (or whatever analog of expected value we use in the absence of precise expectations) of classic EA interventions, in someone's mind, and
  • she's not persuaded that any other interventions--or, any she can perform--offer as high (quasi-)expected value, all things considered, as the classic EA interventions offer after disregarding flow-through effects,

then I think it's reasonable for her to feel less happy about how much good she can do as she becomes more concerned about the long term.

For the record, I don't know how common this feeling is, or how often people feel more excited about their ability to save lives and so on than they did a few years ago. One could certainly think that saving lives, say, has even more long-term net positive effects than short-term positive effects. I just want to say that when someone says that they feel less excited about how much good they can do, and that longtermism has something to do with that, that could be justified. They might just be realizing that doing good isn't and never was as good as they thought it was.

Comment by trammell on Ramiro's Shortform · 2020-01-17T14:10:13.811Z · score: 5 (4 votes) · EA · GW

Yes, governments do lower the SDR as interest rates fall. See for example the US Council of Economic Advisers' recommendation on this three years ago: https://obamawhitehouse.archives.gov/sites/default/files/page/files/201701_cea_discounting_issue_brief.pdf

While the "risk-free" interest rate is roughly zero these days, the interest rate to use when discounting payoffs from a public project is the rate of return on investments whose risk profile is similar to that of the public project in question. This is still positive for basically any normal public project.

Comment by trammell on A collection of researchy projects for Aspiring EAs · 2019-12-02T15:43:46.371Z · score: 2 (2 votes) · EA · GW

Thanks, looks like a useful resource!

For some EA-motivated research project ideas in economics and philosophy, hopefully the GPI Research Agenda also serves as a useful resource.

(Edit: I see that the document links to Effective Thesis's list of research agendas, of which GPI's is one. Sorry for the redundancy.)

Comment by trammell on Existential Risk and Economic Growth · 2019-10-26T15:23:00.005Z · score: 11 (3 votes) · EA · GW

Still no summary of the paper as a whole, but if you're interested, I just wrote a really quick blog post which summarizes one takeaway. https://philiptrammell.com/blog/45

Comment by trammell on Are we living at the most influential time in history? · 2019-09-12T22:02:05.377Z · score: 8 (6 votes) · EA · GW

Interesting finds, thanks!

Similarly, people sometimes claim that we should discount our own intuitions of extreme historic importance because people often feel that way, but have so far (at least almost) always been wrong. And I’m a bit skeptical of the premise of this particular induction. On my cursory understanding of history, it’s likely that for most of history people saw themselves as part of a stagnant or cyclical process which no one could really change, and were right. But I don’t have any quotes on this, let alone stats. I’d love to know what proportion of people before ~1500 thought of themselves as living at a special time.

Comment by trammell on Does any thorough discussion of moral parliaments exist? · 2019-09-08T12:19:59.689Z · score: 1 (1 votes) · EA · GW

Ah cool, thanks

Comment by trammell on Does any thorough discussion of moral parliaments exist? · 2019-09-07T12:03:36.742Z · score: 10 (6 votes) · EA · GW

Yeah, you're not the only one noticing the gap. Hilary and Owen have a paper under review somewhere formalizing it a bit more (I see you've linked to some slides Hilary put together on it), so keep an eye out for that.

Comment by trammell on Are we living at the most influential time in history? · 2019-09-04T21:46:22.928Z · score: 1 (1 votes) · EA · GW

And that P(simulation) > 0.

Comment by trammell on Are we living at the most influential time in history? · 2019-09-04T08:51:49.354Z · score: 2 (2 votes) · EA · GW

Also, even if one could say P(simulation | seems like HoH) >> P(not-simulation | seems like HoH), that wouldn’t be decision-relevant, since it could just be that P(simulation) >> P(not-simulation) in either case. What matters is which observation (seems like HoH or not) renders it more likely that the observer is being simulated.

Comment by trammell on Are we living at the most influential time in history? · 2019-09-04T08:45:25.832Z · score: 2 (2 votes) · EA · GW

We have no idea if simulations are even possible! We can’t just casually assert “P(seems like HoH | simulation) > P(seems like HoH | not simulation)”! All that we can reasonably speculate is that, if simulations are made, they’re more likely to be of special times than of boring times.

Comment by trammell on Existential Risk and Economic Growth · 2019-09-03T14:16:19.470Z · score: 27 (12 votes) · EA · GW

As the one who supervised him, I too think it's a super exciting and useful piece of research! :)

I also like that its setup suggests a number of relatively straightforward extensions for other people to work on. Three examples:

  • Comparing (1) the value of an increase to B (e.g. a philanthropist investing / subsidizing investment in safety research) and (2) the value of improved international coordination (moving to the "global impatient optimum" from a "decentralized allocation" of x-risk mitigation spending at, say, the country level) to (3) a shock to growth and (4) a shock to the "rate of pure time preference" on which society chooses to invest in safety technology. (The paper currently just compares (3) and (4).)
  • Seeing what happens when you replace the N^(epsilon - beta) term in the hazard function with population raised to a new exponent, say N^(mu), to allow for some risky activities and/or safety measures whose contribution to existential risk depends not on the total spent on them but on the amount per capita spent on them, or something in between.
  • Seeing what happens when you use a different growth model--in particular, one that doesn't depend on population growth.
Comment by trammell on Are we living at the most influential time in history? · 2019-09-03T12:31:57.308Z · score: 2 (2 votes) · EA · GW

Cool, thanks for getting all these ideas out there!

Possible correction: You write "P(simulation | seems like HoH ) >> P(not-simulation | seems like HoH)". Shouldn't the term on the right just be "P(simulation | doesn't seem like HoH)"?

Comment by trammell on Ask Me Anything! · 2019-08-25T10:01:30.704Z · score: 22 (15 votes) · EA · GW

Thank you, I'm flattered! But remember, all: Will MacAskill saying we have good arguments doesn't necessarily mean we have good arguments :)

Comment by trammell on Effective Thesis project review · 2019-03-06T19:18:15.586Z · score: 3 (2 votes) · EA · GW

Forethought just launched one a few hours ago!

Comment by trammell on An integrated model to evaluate the impact of animal products · 2019-01-10T18:52:53.999Z · score: 3 (7 votes) · EA · GW

Neither here nor there, but while we're counting possible biases, it may also be worth considering the possibilities that

  • people who conclude that farm animals' lives are good may select into farming, and people who conclude that they're bad may select out, making farmers "more optimistic than others" even before the self-serving bias; and, pointing the other way,
  • people who enter animal advocacy on grounds other than total utilitarianism could then have some bias against concluding that farm animals have lives above hedonic zero, since it could render their past moral efforts counterproductive (and maybe even kind of embarrassing).
Comment by trammell on An integrated model to evaluate the impact of animal products · 2019-01-09T13:01:22.988Z · score: 4 (4 votes) · EA · GW

Thanks so much for putting this together! I hadn't thought of the cross-price elasticity effects across types of animal products, but of course it's an important thing to incorporate.

Two extensions of this sort of analysis that I would be interested to see:

  • Are there any important cross-price elasticity effects between animal and non-animal (including non-food) products? For instance, if the worst type of meat is beef, as you estimate, then it could be good to buy products that use the same inputs as beef--a type of grain that grows best on the types of land suitable for cattle, say--because that will push up the price of beef and push people into less harmful meat products. (It makes sense that cross-price elasticity effects would tend to be largest within kinds of meat, but other products may still be worth considering, if this hasn't already been done.)
  • Just as the substitution effects across kinds of meat are presumably stronger than between meat and other things, the effects are presumably strongest within brands of a particular animal product. That is, maybe buying (less in-)humanely raised chicken or environmentally (less un-)friendly beef pushes up the price of that product in general, which causes people to consume less of it, leading to an improvement overall, even though the purchased product itself still does net damage. How much would these within-product considerations change things?

Obviously there's no end to the possible extensions, until we have a complete model of the entire economy that lets us estimate the general equilibrium impact of switching from one product to another. But maybe there are a few more elasticities that would be relatively important and tractable to consider.
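As a purely illustrative sketch of the kind of partial-equilibrium bookkeeping such extensions would involve (the function, numbers, and elasticities below are all made up for illustration, not taken from the model in the post):

```python
# Toy bookkeeping for the "price channel" discussed above.
# Every number and parameter here is a hypothetical placeholder.

def harm_averted_via_price_channel(
    pct_price_increase: float,       # e.g. 1e-6 = a 0.0001% rise in the product's market price
    own_price_elasticity: float,     # e.g. -0.5: a 1% price rise cuts consumption by 0.5%
    baseline_consumption_kg: float,  # market-wide consumption of the product (kg/year)
    harm_per_kg: float,              # harm attributed to one kg of the product
) -> float:
    """Harm averted when an action nudges the product's market price up slightly."""
    pct_consumption_change = own_price_elasticity * pct_price_increase   # negative
    kg_change = pct_consumption_change * baseline_consumption_kg         # negative
    return -kg_change * harm_per_kg                                      # positive = harm averted

# Example with invented numbers: a purchase pattern that raises beef prices by
# 0.0001% in a market consuming 1e9 kg/year, with elasticity -0.5 and harm
# 2 "units" per kg, averts 1000 units of harm.
print(harm_averted_via_price_channel(1e-6, -0.5, 1e9, 2.0))
```

The interesting extensions are then about where the price increase comes from (input-market competition, within-product substitution across brands, and so on) and how it feeds back across products.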

Comment by trammell on Which World Gets Saved · 2018-12-11T15:06:37.707Z · score: 1 (1 votes) · EA · GW

Thanks!

This all strikes me as a good argument against putting much stock in the particular application I sketch out; maybe preventing a near-term nuclear war doesn't actually bode so badly for the subsequent future, because "human nature" is so malleable.

Just to be clear, though: I only brought up that example in order to illustrate the more general point about the conditional value of the future potentially depending on whether we have marginally averted some x-risk. The dependency could be mediated by one's beliefs about human psychology, but it could also be mediated by one's beliefs about technological development or many other things.

Comment by trammell on Which World Gets Saved · 2018-12-11T14:53:58.837Z · score: 2 (2 votes) · EA · GW

Thanks!

Just to be clear: my rough simplification of the "Pinker hypothesis" isn't that people have an all-around-peaceful psychology. It is, as you say, a hypothesis about how far we expect recent trends toward peace to continue. And in particular, it's the hypothesis that there's no hard lower bound to the "violence level" we can reach, so that, as we make technological and social progress, we will ultimately approach a state of being perfectly peaceful. The alternative hypothesis I'm contrasting this with is a future in which we can only ever get things down to, say, one world war per century. If the former hypothesis isn't actually Pinker's, then my sincere apologies! But I really just mean to outline two hypotheses one might be uncertain between, in order to illustrate the qualitative point about the conditional value of the future.

That said, I certainly agree that moral circle expansion seems like a good thing to do, for making the world better conditional on survival, without running the risk of "saving a bad world". And I'm excited by Sentience's work on it. Also, I think it might have the benefit of lowering x-risk in the long run (if it really succeeds, we'll have fewer wars and such). And, come to think of it, it has the nice feature that, since it will only lower x-risk if it succeeds in other ways, it disproportionately saves "good worlds" in the end.

Comment by trammell on Which World Gets Saved · 2018-12-10T16:03:59.318Z · score: 1 (1 votes) · EA · GW

About the two objections: What I'm saying is that, as far as I can tell, the first common longtermist objection to working on x-risk reduction is that it's actually bad, because future human civilization is of negative expected value. The second is that, even if it is good to reduce x-risk, the resources spent doing that could better be used to effect a trajectory change. Perhaps the resources needed to reduce x-risk by (say) 0.001% could instead improve the future by (say) 0.002% conditional on survival.

About the decision theory thing: You might think (a) that the act of saving the world will in expectation cause more harm than good, in some context, but also (b) that, upon observing yourself engaged in the x-risk-reduction act, you would learn something about the world which correlates positively with your subjective expectation of the value of the future conditional on survival. In such cases, EDT would recommend the act, but CDT would not. If you're familiar with this decision theory stuff, this is just a generic application of it; there's nothing too profound going on here.

About the main thing: It sounds like you're pointing out that stocking bunkers full of canned beans, say, would "save the world" only after most of it has already been bombed to pieces, and in that event the subsequent future couldn't be expected to go so well anyway. This is definitely an example of the point I'm trying to make--it's an extreme case of "the expected value of the future not equaling the expected value of the future conditional on the fact that we marginally averted a given x-risk"--but I don't think it's the most general illustration. What I'm saying is that an attempt to save the world even by preventing it from being bombed to pieces doesn't do as much good as you might think, because your prevention effort only saves the world if it turns out that there would have been a nuclear disaster but for your efforts. If it turns out (even assuming that we will never find out) that your effort is what saved us all from nuclear annihilation, that means we probably live in a world that is more prone to nuclear annihilation than we otherwise would have thought. And that, in turn, doesn't bode well for the future.
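A toy Bayesian version of that last point, with all numbers invented for illustration: suppose worlds come in two types, "fragile" and "robust", each with prior probability 0.5, and that an effort like yours turns out to be pivotal (i.e. the disaster would have happened but for it) with probability 0.1 in a fragile world and 0.01 in a robust one. Then

$$P(\text{fragile} \mid \text{pivotal}) = \frac{0.1 \times 0.5}{0.1 \times 0.5 + 0.01 \times 0.5} \approx 0.91,$$

so conditional on your effort having been what saved us, you should think you probably live in the fragile world, whose future (conditional on surviving this time) is worth less in expectation.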

Does any of that make things clearer?

Comment by trammell on Which World Gets Saved · 2018-11-13T13:42:32.168Z · score: 3 (3 votes) · EA · GW

As long as any of NTI's effort is directed against intentional catastrophes, they're still saving violent-psychology worlds disproportionately, so in principle this could swing the balance. That said, good point: much of their work should reduce the risk of accidental catastrophes as well, so maybe there's not actually much difference between NTI and asteroid deflection.

(I won't take a stand here about what counts as evidence for what, for fear that this will turn into a big decision theory debate :) )

Comment by trammell on Pursuing infinite positive utility at any cost · 2018-11-13T08:33:31.497Z · score: 1 (1 votes) · EA · GW

I was just saying that, thankfully, I don’t think our decision problem is wrecked by the negative infinity cases, or the cases in which there are infinite amounts of positive and negative value. If it were, though, then okay—I’m not sure what the right response would be, but your approach of excluding everything from analysis but the “positive infinity only” cases (and not letting multiple infinities count for more) seems as reasonable as any, I suppose.

Within that framework, sure, having a few thousand believers in each religion would be better than having none. (It’s also better than having everyone believe in whichever religion seems most likely, of course.) I was just taking issue with “it might be best to encourage as many people as possible to adopt some form of religious belief to maximise our chances”.

Comment by trammell on Pursuing infinite positive utility at any cost · 2018-11-12T11:22:08.297Z · score: 4 (6 votes) · EA · GW
> Still it might be best to encourage as many people as possible to adopt some form of religious belief to maximise our chances.

I'm very sympathetic to the idea that all we ought to be doing is to maximize the probability we achieve an infinite amount of value. And I'm also sympathetic to religion as a possible action plan there; the argument does not warrant the "incredulous stares" it typically gets in EA. But I don't think it's as simple as the above quote, for at least two reasons.

First, religious belief broadly specified could more often create infinite amounts of disvalue than infinite amounts of value, from a religious perspective. Consider for example the scenario in which non-believers get nothing, believers in the true god get plus infinity, and believers in false gods get minus infinity. Introducing negative infinities does wreck the analysis if we insist on maximizing expected utility, as Hajek points out, but not if we switch from EU to a decision theory based on stochastic dominance.

Second, and I think more importantly, religiosity might lower the probability of achieving infinite amounts of value in other ways. Belief in an imminent Second Coming, for instance, might lower the probability that we manage to create a civilization that lasts forever (and manages to permanently abolish suffering after a finite period).

Comment by trammell on Which World Gets Saved · 2018-11-11T21:16:42.969Z · score: 2 (2 votes) · EA · GW

Agreed.

Comment by trammell on Which World Gets Saved · 2018-11-11T15:16:06.935Z · score: 5 (4 votes) · EA · GW

Thanks! And cool, I hadn’t thought of that connection, but it makes sense—we want our x-risk reduction “investments” to pay off more in the worlds where they’ll be more valuable.

Comment by trammell on Which World Gets Saved · 2018-11-10T08:59:40.190Z · score: 10 (6 votes) · EA · GW

I agree that it’s totally plausible that, once all the considerations are properly analyzed, we’ll wind up vindicating the existential risk view as a simplification of “maximize utility”. But in the meantime, unless one is very confident or thinks doom is very near, “properly analyze the considerations” strikes me as a better simplification of “maximize utility”.

Comment by trammell on RPTP Is a Strong Reason to Consider Giving Later · 2018-11-09T21:04:51.313Z · score: 6 (2 votes) · EA · GW

Thanks!

And if you have any particular ways you think this post still overstates its case, please don't hesitate to point them out.

Comment by trammell on RPTP Is a Strong Reason to Consider Giving Later · 2018-11-08T10:53:23.356Z · score: 3 (3 votes) · EA · GW

My current best guess happens to be that there aren't great funding opportunities in the "priorities research" space--for a point of reference, GPI is still sitting on cash while it decides which economist(s) to recruit--but that there will be better funding opportunities over the next few years, as the infrastructure gets better set up and as the pipeline of young EA economists starts flowing. For example I'd actually be kind of surprised if there weren't a "Parfit Institute" (or whatever it might be called), writing policy papers in DC next door to Cato and Heritage and all the rest, in a decade or two. So at the moment I'm just holding out for opportunities like that. But if you have ideas for funding-constrained research right now, let me know!

And sure, I'd love to discuss/comment on that write-up!

Comment by trammell on RPTP Is a Strong Reason to Consider Giving Later · 2018-11-07T14:53:44.777Z · score: 3 (3 votes) · EA · GW

Yes, I agree with this wholeheartedly--there are ways for money to be put to use now accelerating the research process, and those might well beat waiting. In fact (as I should have been far clearer about throughout this post!) this whole argument is really just directed at people who are planning to "spend money at some time t to increase welfare as efficiently as possible at time t".

I'm hoping to write down a few thoughts soon about one might think about discounting if you'll be spending the money on something else, like research or x-risk reduction. For now I'll edit the post to make caveats like yours explicit. Thanks.