Posts

Which World Gets Saved 2018-11-09T18:08:24.632Z · score: 93 (49 votes)
RPTP Is a Strong Reason to Consider Giving Later 2018-10-01T16:27:22.590Z · score: 14 (13 votes)

Comments

Comment by trammell on 'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions · 2020-08-25T10:43:07.776Z · score: 8 (5 votes) · EA · GW

Sure, I see how making people more patient has more-or-less symmetric effects on risks from arms race scenarios. But this is essentially separate from the global public goods issue, which you also seem to consider important (if I'm understanding your original point about "even the largest nation-states being only a small fraction of the world"), which is in turn separate from the intergenerational public goods issue (which was at the top of my own list).

I was putting arms race dynamics lower than the other two on my list of likely reasons for existential catastrophe. E.g. runaway climate change worries me a bit more than nuclear war; and mundane, profit-motivated tolerance for mistakes in AI or biotech (both within firms and at the regulatory level) worries me a bit more than the prospect of technological arms races.

That's not a very firm belief on my part--I could easily be convinced that arms races should rank higher than the mundane, profit-motivated carelessness. But I'd be surprised if the latter were approximately none of the problem.

Comment by trammell on 'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions · 2020-08-24T09:14:33.056Z · score: 12 (7 votes) · EA · GW

I agree that the world underinvests in x-risk reduction (/overspends on activities that increase x-risk as a side effect) for all kinds of reasons. My impression would be that the two most important reasons for the underinvestment are that existential safety is a public good on two fronts:

  • long-term (but people just care about the short term, and coordination with future generations is impossible), and
  • global (but governments just care about their own countries, and we don't do global coordination well).

So I definitely agree that it's important that there are many actors in the world who aren't coordinating well, and that accounting for this would be an important next step.

But my intuition is that the first point is substantially more important than the second, and so the model assumes away much but not close to all of the problem. If the US cared about the rest of the world equally, that would multiply its willingness to pay for an increment of x-risk mitigation by maybe an order of magnitude. But if it had zero pure time preference but still just cared about what happened within its borders (or something), that would seem to multiply the WTP by many orders of magnitude.

Comment by trammell on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-21T23:56:38.253Z · score: 1 (1 votes) · EA · GW

Thanks! No need to inflict another recording of my voice on the world for now, I think, but glad to hear you like how the project is coming.

Comment by trammell on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-18T23:00:53.585Z · score: 10 (4 votes) · EA · GW

The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that’s quite different from the point that, absent cosmically exceptional short-term impact, the patient longtermist consequentialist would save. Utilitarian policymakers might implement more redistribution too. Given policymakers as they are, we’re still left with the question of how utilitarian philanthropists with their fixed budgets should prioritize between filling the redistribution gap and filling the investment gap.

In any event, if you/Owen have any more unpublished pre-2015 insights from private correspondence, please consider posting them, so those of us who weren’t there don’t have to go through the bother of rediscovering them. : )

Comment by trammell on The case of the missing cause prioritisation research · 2020-08-17T23:14:35.016Z · score: 2 (2 votes) · EA · GW

Thanks! I agree that people in EA—including Christian, Leopold, and myself—have done a fair bit of theory/modeling work at this point which would benefit from relevant empirical work. I don’t think this is what either of the two new economists will engage in anytime soon, unfortunately. But I don’t think it would be outside a GPI economist’s remit, especially once we’ve grown.

Comment by trammell on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-17T23:03:33.321Z · score: 1 (1 votes) · EA · GW

Sorry--maybe I’m being blind, but I’m not seeing what citation you’d be referring to in that blog post. Where should I be looking?

Comment by trammell on The case of the missing cause prioritisation research · 2020-08-17T00:02:30.056Z · score: 74 (38 votes) · EA · GW

Thanks, I definitely agree that there should be more prioritization research. (I work at GPI, so maybe that’s predictable.) And I agree that for all the EA talk about how important it is, there's surprisingly little really being done.

One point I'd like to raise, though: I don’t know what you’re looking for exactly, but my impression is that good prioritization research will in general not resemble what EA people usually have in mind when they talk about “cause prioritization”. So when putting together an overview like this, one might overlook some of even what little prioritization research is being done.

In my experience, people usually imagine a process of explicitly listing causes, thinking through and evaluating the consequences of working in each of them, and then ranking the results (kind of like GiveWell does with global poverty charities). I expect that the main reason more of this doesn’t exist is that, when people try to start doing this, they typically conclude it isn’t actually the most helpful way to shed light on which cause EA actors should focus on.

I think that, more often than not, a more helpful way to go about prioritizing is to build a model of the world, just rich enough to represent all the levers you’re considering and the ways you expect them to interact, and then to see how much better the world gets when you divide your resources among the levers this way or that. By analogy, a “naïve” government’s approach to prioritizing between, say, increasing this year’s GDP and decreasing this year’s carbon emissions would be to try to account explicitly for the consequences of each and to compare them. Taking the lowering-emissions side, this will produce a tangled web of positive and negative consequences, which interact heavily both with each other and with the consequences of increasing GDP: it will mean

  • less consumption this year,
  • less climate damage next year,
  • less accumulated capital next year with which to mitigate climate damage,
  • more of an incentive for people next year to allow more emissions,
  • more predictable weather and therefore easier production next year,
  • …but this might mean more (or less) emissions next year,
  • …and so on.

It quickly becomes clear that finishing the list and estimating all its items is hopeless. So what people do instead is write down an “integrated assessment model”. What the IAM is ultimately modeling, albeit in very low resolution, is the whole world, with governments, individuals, and various economic and environmental moving parts behaving in a way that straightforwardly gives rise to the web of interactions that would appear on that infinitely long list. Then, if you’re, say, a government in 2020, you just solve for the policy—the level of the carbon cap, the level of green energy subsidization, and whatever else the model allows you to consider—that maximizes your objective function, whatever that may be. What comes out of the model will be sensitive to the construction of the model, of course, and so may not be very informative. But I'd say it will be at least as informative as an attempt to do something that looks more like what people sometimes seem to mean by cause prioritization.
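
To make that concrete, here's a deliberately minimal sketch of the approach: a toy "IAM" with a single lever (this year's abatement share) and a two-period world. Every functional form and number below is invented purely for illustration; real IAMs are just much richer versions of the same move.

```python
# Toy integrated assessment model: one lever (this year's abatement share),
# a two-period world, and a planner who solves for the welfare-maximizing
# policy. Every functional form and number here is invented for illustration.
from scipy.optimize import minimize_scalar

def welfare(abatement, gdp=100.0):
    # Abatement costs output this year (convex cost)...
    consumption_now = gdp * (1 - 0.3 * abatement**2)
    # ...but reduces climate damage to next year's output.
    consumption_next = gdp * (1 - 0.2 * (1 - abatement))
    return consumption_now + consumption_next  # linear utility, no discounting

opt = minimize_scalar(lambda a: -welfare(a), bounds=(0, 1), method="bounded")
print(f"optimal abatement share: {opt.x:.2f}")  # ~0.33 with these numbers
```

The point is that all the interactions on the infinitely long list are implicit in the model's structure; the planner never has to enumerate them one by one.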

If the project of “writing down stylized models of the world and solving for the optimal thing for EAs to do in them” counts as cause prioritization, I’d say two projects I’ve had at least some hand in over the past year count: (at least sections 4 and 5.1 of) my own paper on patient philanthropy and (at least section 6.3 of) Leopold Aschenbrenner’s paper on existential risk and growth. Anyway, I don't mean to plug these projects in particular, I just want to make the case that they’re examples of a class of work that is being done to some extent and that should count as prioritization research.

…And examples of what GPI will hopefully soon be fostering more of, for whatever that’s worth! It’s all philosophy so far, I know, but my paper and Leo’s are going on the GPI website once they’re just a bit more polished. And we’ve just hired two econ postdocs I’m really excited about, so we’ll see what they come up with.

Comment by trammell on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-15T09:24:29.575Z · score: 7 (5 votes) · EA · GW

Hanson has advocated for investing for future giving, and I don't doubt he had this intuition in mind. But I'm actually not aware of any source in which he says that the condition under which zero-time-preference philanthropists should invest for future giving is that the interest rate incorporates beneficiaries' pure time preference. I only know that he's said that the relevant condition is when the interest rate is (a) positive or (b) higher than the growth rate. Do you have a particular source in mind?

Also, who made the "pure time preference in the interest rate means patient philanthropists should invest" point pre-Hanson? (Not trying to get credit for being the first to come up with this really basic idea, I just want to know whom to read/cite!)

Comment by trammell on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-15T09:15:40.788Z · score: 13 (6 votes) · EA · GW

That post just makes the claim that "all we really need are positive interest rates". My own point which you were referring to in the original comment is that, at least in the context of poverty alleviation (/increasing human consumption more generally), what we need is pure time preference incorporated into interest rates. This condition is neither necessary nor sufficient for positive interest rates.
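
To spell out that claim, here's a stylized Ramsey-rule sketch (my toy rendering with invented numbers, not anything from Hanson's post):

```python
# Ramsey rule: the market interest rate is r = rho + eta*g, where rho is
# beneficiaries' rate of pure time preference, eta their elasticity of
# marginal utility, and g consumption growth. A philanthropist with zero
# pure time preference gains from investing rather than spending on
# beneficiaries' consumption iff r > eta*g, i.e. iff rho > 0 -- whether
# or not r itself is positive. Numbers below are purely illustrative.
def invest_beats_spending(rho, eta, g):
    r = rho + eta * g      # market rate implied by the Ramsey rule
    return r > eta * g     # the patient funder's break-even hurdle

print(invest_beats_spending(rho=0.00, eta=1.5, g=0.02))   # r = 3% > 0, yet False
print(invest_beats_spending(rho=0.01, eta=1.5, g=-0.02))  # r = -2% < 0, yet True
```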

Hanson's post then says something which sounds kind of like my point, namely that we can infer that it's better for us as philanthropists to invest than to spend if we see our beneficiaries doing some of both. But I could never figure out what he was saying exactly, or how it was compatible with the point he was trying to make that all we really need are positive interest rates.

Could you elaborate?

Comment by trammell on A List of EA Donation Pledges (GWWC, etc) · 2020-08-08T22:11:48.684Z · score: 4 (3 votes) · EA · GW

The GWWC Further Pledge

Comment by trammell on Utility Cascades · 2020-07-29T17:39:25.549Z · score: 5 (3 votes) · EA · GW

One Richard Chappell has a response here: https://www.philosophyetc.net/2020/03/no-utility-cascades.html

Comment by trammell on How Much Does New Research Inform Us About Existential Climate Risk? · 2020-07-23T06:51:38.548Z · score: 29 (15 votes) · EA · GW

In case the notation out of context isn’t clear to some forum readers: sensitivity S is the equilibrium warming the earth would undergo given a doubling of CO2 in the atmosphere. K denotes kelvins; an increment of one kelvin is the same size as an increment of one degree Celsius.

Comment by trammell on Should I claim COVID-benefits I don't need to give to charity? · 2020-05-15T15:47:04.054Z · score: 17 (6 votes) · EA · GW

I don't know what counts as a core principle of EA exactly, but most people involved with EA are quite consequentialist.

Whatever you should in fact do here, you probably wouldn't find a public recommendation to be dishonest. On purely consequentialist grounds, after accounting for the value of the reputation of the EA community and so on, what community guidelines (and what EA Forum advice) do you think would be better to write: those that go out of their way to emphasize honesty or those that sound more consequentialist?

Comment by trammell on Existential Risk and Economic Growth · 2020-05-12T11:08:46.856Z · score: 1 (1 votes) · EA · GW

I'm just putting numbers to the previous sentence: "Say the current (instantaneous) hazard rate is 1% per century; my guess is that most of this consists of (instantaneous) risk imposed by existing stockpiles of nuclear weapons, existing climate instability, and so on, rather than (instantaneous) risk imposed by research currently ongoing."

If "most" means "80%" there, then halting growth would lower the hazard rate from 1% to 0.8%.

Comment by trammell on Existential Risk and Economic Growth · 2020-05-10T17:02:19.113Z · score: 2 (2 votes) · EA · GW

Hey, thanks for engaging with this, and sorry for not noticing your original comment for so many months. I agree that in reality the hazard rate at t depends not just on the level of output and safety measures maintained at t but also on "experiments that might go wrong" at t. The model is indeed a simplification in this way.

Just to make sure something's clear, though (and sorry if this was already clear): Toby's 20% hazard rate isn't the current hazard rate; it's the hazard rate this century, but most of that is due to developments he projects occurring later this century. Say the current (instantaneous) hazard rate is 1% per century; my guess is that most of this consists of (instantaneous) risk imposed by existing stockpiles of nuclear weapons, existing climate instability, and so on, rather than (instantaneous) risk imposed by research currently ongoing. So if stopping growth would lower the hazard rate, it would be a matter of moving from 1% to 0.8% or something, not from 20% to 1%.

Comment by trammell on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T12:54:53.754Z · score: 9 (7 votes) · EA · GW

This paper is also relevant to the EA implications of a variety of person-affecting views. https://globalprioritiesinstitute.org/wp-content/uploads/2020/Teruji_Thomas_asymmetry_uncertainty.pdf

Comment by trammell on Phil Trammell: The case for ignoring the world’s current problems — or how becoming a ‘patient philanthropist’ could allow you to do far more good · 2020-04-16T22:14:55.273Z · score: 3 (2 votes) · EA · GW

Glad you liked it, and thanks for the good questions!

#1: I should definitely have spent more time on this / been more careful explaining it. Yes, x-risks should “feed straight into interest rates”, in the sense that a +1% chance of an x-risk per year should mean a 1% higher interest rate. So if you’re going to be

  • spending on something other than x-risk reduction; or
  • spending on x-risk reduction but only able to marginally lower the risk in the period you’re spending (i.e. not permanently lower the rate), and think that there will still be similar risk to mitigate in the next period conditional on survival,

then you should be roughly compensated for the risk. That is, under those circumstances, if investing seemed preferable to spending in the absence of the heightened risk, it should still seem that way given the heightened risk. This does all hold despite the fact that the heightened risk would give humanity such a short life expectancy.
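
A back-of-envelope check of the compensation claim, with invented numbers:

```python
# Back-of-envelope check, with invented numbers: an extra 1%/yr extinction
# risk that raises the interest rate by one point leaves the
# survival-weighted expected return roughly unchanged.
r, delta = 0.03, 0.01                   # baseline rate; added annual x-risk
baseline = 1 + r                        # expected gross return, no added risk
risky = (1 - delta) * (1 + r + delta)   # survive w.p. 1-delta, earn r+delta
print(baseline, risky)                  # 1.03 vs ~1.0296: roughly compensated
```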

But I totally grant that these assumptions may not hold, and that if they don’t, the heightened risk can be a reason to spend more! I just wanted to point out that there is this force pushing the other way that turns out to render the question at least ambiguous.

#2: No, there’s no reductio here. Once you get big enough, i.e. are no longer a marginal contributor to the public goods you’re looking to fund, the diminishing returns to spending make it less worthwhile to grow even bigger. (E.g., in the human consumption case, you’ll eventually be rich enough that spending the first half of your fund would make people richer to the point that spending the second half would do substantially less for them.) Once the gains from further investing have fallen to the point that they just balance the (extinction / expropriation / etc) risks, you should start spending, and continue to split between spending and investment so as to stay permanently on the path where you’re indifferent between the two.

If you're looking to fund some narrow thing only one other person's interested in funding, and you're perfectly patient but the other person is about as impatient as people tend to be, and if you start out with funds the same size, I think you'll be big enough that it's worth starting to spend after about fifty years. If you're looking to spend on increasing human consumption in general, you'll have to hold out till you're a big fraction of global wealth--maybe on the order of a thousand years. (Note that this means that you'd probably never make it, even though this is still the expected-welfare-maximizing policy.)

#3: Yes. If ethics turns out to contain pure time preference after all, or we have sufficiently weak duties to future generations for some other reason, then patient philanthropy is a bad idea. :(

Comment by trammell on On Waiting to Invest · 2020-04-11T15:20:49.293Z · score: 1 (1 votes) · EA · GW

Glad you liked it!

In the model I'm working on, to try to weigh the main considerations, the goal is to maximize expected philanthropic impact, not to maximize expected returns. I do recommend spending more quickly than I would in a world where the goal were just to maximize expected returns. My tentative conclusion that long-term investing is a good idea already incorporates the conclusion that it will most likely just involve losing a lot of money.

That is, I argue that we're in a world where the highest-expected-impact strategy (not just the highest-expected-return strategy) is one with a low probability of having a lot of impact and a high probability of having very little impact.

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-10T18:29:57.123Z · score: 1 (1 votes) · EA · GW

At the risk of repetition, I’d say that by the same reasoning, we could likewise add in our best estimates of saving a life on (just, say) total human welfare up to 2100.

Your response here was that “[p]opulation growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust”. But as with the first beneficiary, we can separate the direct welfare impact of population growth from all its other effects and observe that the former is a part of “sum u_i”, no?

Of course, estimates of shorter-term effects are usually more reliable than those of longer-term effects, for all sorts of reasons; but since we’re not arguing over whether saving lives in certain regions can be expected to increase population size up to 2100, that doesn’t seem to me like the point of dispute in this case.

I’m not sure where we’re failing to communicate exactly, but I’m a little worried that this is clogging the comments section! Let me know if you want to really try to get to the bottom of this sometime, in some other context.

Comment by trammell on On Waiting to Invest · 2020-04-10T16:46:08.808Z · score: 3 (2 votes) · EA · GW

Yup, no disagreement here. You're looking at what happens when we introduce uncertainty holding the absolute expected return constant, and I was discussing what happens when we introduce uncertainty holding the expected annual rate of return constant.

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-10T09:04:17.681Z · score: 3 (2 votes) · EA · GW
If you give me a causal model, and claim A has a certain effect on B, without justifying rough effect sizes, I am by default skeptical of that claim and treat that like simple cluelessness: B conditional on changing A is identically distributed to B. You have not yet justified a systematic effect of A on B.

What I'm saying is, "Michael: you've given me a causal model, and claimed A (saving lives) has a positive effect on B (total moral value in the universe, given all the indirect effects), without justifying a rough effect size. You just justified a rough effect size on C (value to direct beneficiaries), but that's not ultimately what matters. By default I think A has no systematic effect on B, and you have not yet justified one."

Is this an example of CC?

Yes, you have CC in that circumstance if you don't have evidential symmetry with respect to X.

Comment by trammell on On Waiting to Invest · 2020-04-10T00:13:39.034Z · score: 9 (7 votes) · EA · GW

Hey, I know that episode : )

Thanks for these numbers. Yes: holding expected returns equal, our propensity to invest should be decreasing in volatility.

But symmetric uncertainty about the long-run average rate of return—or to a lesser extent, as in your example, time-independent symmetric uncertainty about short-run returns at every period—increases expected returns. (I think this is the point I made that you’re referring to.) This is just the converse of your observation that, to keep expected returns equal upon introducing volatility, we have to lower the long-run rate from r to q = r – s^2/2.
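
A quick simulation of that converse point, with invented numbers:

```python
# Simulation of the point above, with invented numbers: log-returns with
# drift q = r - s^2/2 and volatility s give *expected* wealth growing at
# rate r, faster than the median path, which grows at only q.
import numpy as np

rng = np.random.default_rng(0)
r, s, years, paths = 0.05, 0.30, 30, 200_000
q = r - s**2 / 2                                    # 0.005 here
log_wealth = rng.normal(q * years, s * np.sqrt(years), paths)
wealth = np.exp(log_wealth)
print(np.mean(wealth), np.exp(r * years))    # both ~4.5: mean grows at r
print(np.median(wealth), np.exp(q * years))  # both ~1.16: median grows at q
```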

Whether these increased expected returns mean that patient philanthropists should invest more or less than they would under certainty is in principle sensitive to (a) the shape of the function from resources to philanthropic impact and (b) the behavior of other funders of the things we care about; but on balance, on the current margin, I’d argue it implies that patient philanthropists should invest more. I’ll try writing more on this at some point, and apologies if you would have liked a deeper discussion about this on the podcast.

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-09T20:10:09.906Z · score: 1 (1 votes) · EA · GW
Population growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust.

Suppose for simplicity that we can split the effects of saving a life into

1) benefits accruing to the beneficiary;

2) benefits accruing to future generations up to 2100, through increased size (following from (1)); and

3) further effects (following from (2)).

It seems like you're saying that there's some proposition X such that (3) is overall good if X and bad if not-X, where we can only guess at the probability of X; and that in this circumstance we can say that the overall effect of (2 & 3) is ~zero in expectation.

If that's right, what I'm struggling to see is why we can't likewise say that there's some proposition Y such that (2 & 3) is overall good if Y and bad if not-Y, where we can only guess at the probability of Y, and that the overall effect of (1 & 2 & 3) is therefore ~zero in expectation.

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-09T14:44:34.485Z · score: 1 (1 votes) · EA · GW
Is the point that I'm confident they're larger in magnitude, but still not confident enough to estimate their expected magnitudes more precisely?

Yes, exactly—that’s the point of the African population growth example.

Maybe I have a good idea of the impacts over each possible future, but I'm very uncertain about the distribution of possible futures. I could be confident about the sign of the effect of population growth when comparing pairs of counterfactuals, one with the child saved, and the other not, but I'm not confident enough to form distributions over the two sets of counterfactuals to be able to determine the sign of the expected value.

I don’t understand this paragraph. Could you clarify?

I don’t think I understand this either:

I'm doubting the signs of the effects that don't come with estimates. If I have a plausible argument that doing X affects Y and Y affects Z, which I value directly and the effect should be good, but I don't have an estimate for the effect through this causal path, I'm not actually convinced that the effect through this path isn't bad.

Say you have a plausible argument that pushing a switch (doing X) pulls some number n > 0 of strings (so Y := #strings_pulled goes from 0 to n), each of which releases some food to m > 0 hungry lab mice (so Z := #fed_mice goes from 0 to nm), and you know that X and Y have no other consequences. You know that n, m > 0 but don't have estimates for them. At face value you seem to be saying you’re not convinced that the effect of pushing the switch isn’t bad, but that can’t be right!

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-08T23:29:48.084Z · score: 6 (4 votes) · EA · GW

No worries, sorry if I didn't write it as clearly as I could have!

BTW, I've had this conversation enough times now that last summer I wrote down my thoughts on cluelessness in a document that I've been told is pretty accessible—this is the doc I link to from the words "don't have an expected value". I know it can be annoying just to be pointed off the page, but just letting you know in case you find it helpful or interesting.

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-08T23:15:10.876Z · score: 1 (1 votes) · EA · GW

Hold on—now it seems like you might be talking past the OP on the issue of complex cluelessness. I 1000% agree that changing population size has many effects beyond those I listed, and that we can't weigh them; but that's the whole problem!

The claim is that CC arises when (a) there are both predictably positive and predictably negative indirect effects of (say) saving lives which are larger in magnitude than the direct effects, and (b) you can't weigh them all against each other so as to arrive at an all-things-considered judgment of the sign of the value of the intervention.

A common response to the phenomenon of CC is to say, "I know that the direct effects are good, and I struggle to weigh all of the indirect effects, so the latter are zero for me in expectation, and the intervention is appealing". But (unless there's a strong counterargument to Hilary's observation about this in "Cluelessness" which I'm unaware of), this response is invalid. We know this because if this response were valid, we could by identical reasoning pick out any category of effect whose effects we can estimate—the effect on farmed chicken welfare next year from saving a chicken-eater's life, say—and say "I know that the next-year-chicken effects are bad, and I struggle to weigh all of the non-next-year-chicken effects, so the latter are zero for me in expectation, and the intervention is unappealing".

The above reasoning doesn't invalidate that kind of response to simple cluelessness, because there the indirect effects have a feature—symmetry—which breaks when you cut up the space of consequences differently. But this means that, unless one can demonstrate that the distribution of non-direct effects has a sort of evidential symmetry that the distribution of non-next-year-chicken effects does not, one is not yet in a position to put a sign to the value of saving a life.

So, the response to

What's the expected value (on net) of the indirect effects to you? Is its absolute value much greater than the direct effects' expected value?

is that, given an inability to weigh all the effects, and an absence of evidential symmetry, I simply don't have an expected value (or even a sign) of the indirect effects, or the total effects, of saving a life.

Does that clarify things at all, or am I the one doing the talking-past?

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-08T21:40:41.540Z · score: 6 (4 votes) · EA · GW

Agreed that, at least from a utilitarian perspective, identity effects aren't what matter and feel pretty symmetrical, and that they're therefore not the right way to illustrate complex cluelessness. But when you say

you need an example where you can justify that the outcome distributions are significantly different. I actually haven't been convinced that this is the case for any longtermist intervention

—maybe I'm misunderstanding you, but I believe the proposition being defended here is that the distribution of long-term welfare outcomes from a short-termist intervention differs substantially from the status quo distribution of long-term welfare outcomes (and that this distribution-difference is much larger than the intervention's direct benefits). Do you mean that you're not convinced that this is the case for any short-termist intervention?

Even though we don't know the magnitudes of today's interventions' long-term effects, I do think we can sometimes confidently say that the distribution-difference is larger than the direct effect. For instance, the UN's 95% confidence interval is that the population of Africa will multiply by about 3x to 5x by 2100 (here, p.7). One might think their confidence interval should be wider, but I don't see why the range would be upwards-biased in particular. Assuming that fertility in saved children isn't dramatically lower than population fertility, this strikes me as a strong reason to think that the indirect welfare effects of saving a young person's life in Africa today—indeed, even a majority of the effects on total human welfare before 2100—will be larger than the direct welfare effect.
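
To put rough numbers on that, under the crude assumption that a saved child's lineage grows at the population-wide rate:

```python
# Rough illustration, assuming a saved child's lineage grows at the
# population-wide rate. The multipliers are the UN range cited above;
# the "person-lives" accounting is a crude simplification.
for multiplier_2100 in (3.0, 5.0):        # UN 95% interval for Africa
    direct = 1.0                          # the beneficiary's own life
    indirect = multiplier_2100 - 1.0      # additional people tracing to it
    print(multiplier_2100, indirect / direct)  # indirect is 2-4x the direct
```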

Saving lives might lower fertility somewhat, thus offsetting this effect. But the (tentative) conclusion of what I believe is the only in-depth investigation on this is that there are some regions in which this offsetting is negligible. And note that if those UN projections are any guide, the fertility-lowering effect would have to be not just non-negligible but very close to complete for the direct welfare effects to outweigh the indirect.

Does that seem wrong to you?

Comment by trammell on Is Existential Risk a Useless Category? Could the Concept Be Dangerous? · 2020-04-02T07:36:59.050Z · score: 4 (2 votes) · EA · GW

No.

Comment by trammell on Is Existential Risk a Useless Category? Could the Concept Be Dangerous? · 2020-04-01T10:23:14.578Z · score: 17 (9 votes) · EA · GW

Yeah, agreed that using the white supremacist label needlessly poisons the discussion in both cases.

For whatever it’s worth, my own tentative guess would actually be that saving a life in the developing world contributes more to growth in the long run than saving a life in the developed world. Fertility in the former is much higher, and in the long run I expect growth and technological development to be increasing in global population size (at least over the ranges we can expect to see).

Maybe this is a bit off-topic, but I think it’s worth illustrating that there’s no sense in which the longtermist discussion about saving lives necessarily pushes in a so-called “white supremacist” direction.

Comment by trammell on Is Existential Risk a Useless Category? Could the Concept Be Dangerous? · 2020-03-31T21:46:58.482Z · score: 34 (15 votes) · EA · GW

Thanks for pointing that out!

For those who might worry that you're being hyperbolic, I'd say that the linked paper doesn't say that they are white supremacists. But it does claim that a major claim from Nick Beckstead's thesis is white supremacist. Here is the relevant quote, from pages 27-28:

"As he [Beckstead] makes the point,

>> saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards, at least by ordinary enlightened humanitarian standards, saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.

This is overtly white-supremacist."

The document elsewhere clarifies that it is using the term white supremacism to refer to systems that reinforce white power, not only to explicit, conscious racism. But I agree that this is far enough from how most people use the terminology that it doesn't seem like a very helpful contribution to the discussion.

Comment by trammell on Why not give 90%? · 2020-03-23T14:58:32.045Z · score: 7 (7 votes) · EA · GW

I downvoted the comment because it's off-topic.

Comment by trammell on Phil Trammell: The case for ignoring the world’s current problems — or how becoming a ‘patient philanthropist’ could allow you to do far more good · 2020-03-18T21:11:25.263Z · score: 5 (4 votes) · EA · GW

Thanks!

I'm far from qualified to give career advice to people who are already full-time academics, but I suppose I'd say,

  • If you've just graduated and are looking for a post-doc opportunity, or are coming from outside academia and willing to move to Oxford, then apply to GPI.
  • If you're already an academic elsewhere, then get in touch, come to one of the workshops GPI holds at the end of each Oxford term, and try shifting your research in a GPR direction. (We put together such a long research agenda partly in the hope that lots of interested researchers elsewhere will find something in it that they can get excited about.)
  • If you're a senior enough academic that you could set up a respectable global priorities research center elsewhere, then definitely get in touch! That could turn out to be a great idea, especially if you're an economist at a higher-ranked department than Oxford's. Forethought--GPI's sister org, which funds GPR activity outside of Oxford--would be a place to apply for funding for a project along those lines.

Comment by trammell on Doing good is as good as it ever was · 2020-01-22T22:56:17.916Z · score: 8 (5 votes) · EA · GW

I don't know if there is lower community morale of the sort you describe--you're better positioned to have a sense of that than I am--but to the extent that there is, yes, it seems we disagree about whether to suspect that cluelessness would be a significant factor.

It would be interesting to include a pair of questions on the next EA survey about whether people feel more or less charitably motivated than last year, and, if less, why.

Comment by trammell on Doing good is as good as it ever was · 2020-01-19T14:54:37.988Z · score: 10 (6 votes) · EA · GW

If I'm not misunderstanding you, being less enthusiastic than before just requires (i) (if by "the long-termist thesis" we mean the moral claim that we should care about the long term) and (iii). I don't think that's a lot of requirements. Plus, this is all in a framework of precise expectations; you could also just think that the long-term effects are ambiguous enough to render the expected value undefined, and endorse a decision theory which penalizes this sort of ambiguity.

My guess is that when people start thinking about longtermism and get less excited about ordinary do-gooding, this is often at least in part due either to a belief in (iii) or, more commonly, to the realization of the ambiguity, even when this isn't articulated in detail. That seems likely to me (a) because, anecdotally, it seems relatively common for people to raise concerns along these lines independently after thinking about this stuff for a while and (b) because there has been some push to believe in this ambiguity, namely all the writing on cluelessness. But of course that's just a guess.

Comment by trammell on Doing good is as good as it ever was · 2020-01-18T18:47:36.111Z · score: 42 (22 votes) · EA · GW

I disagree with the common framing that saving lives and so on constitute one straightforward, unambiguous way to do good, and that longtermism just constitutes or motivates some interventions with the potential to do even more good.

It seems to me (and I'm not alone, of course) that concern for the long term renders the sign of the value of most of the classic EA interventions ambiguous. In any event, it renders the magnitude of their value more ambiguous than it is if one disregards flow-through effects of all kinds. If

  • accounting for long term consequences lowers the expected value (or whatever analog of expected value we use in the absence of precise expectations) of classic EA interventions, in someone's mind, and
  • she's not persuaded that any other interventions--or any she can perform--offer as high (quasi-)expected value, all things considered, as the classic EA interventions offer after disregarding flow-through effects,

then I think it's reasonable for her to feel less happy about how much good she can do as she becomes more concerned about the long term.

For the record, I don't know how common this feeling is, or how often people feel more excited about their ability to save lives and so on than they did a few years ago. One could certainly think that saving lives, say, has even more long-term net positive effects than short-term positive effects. I just want to say that when someone says that they feel less excited about how much good they can do, and that longtermism has something to do with that, that could be justified. They might just be realizing that doing good isn't and never was as good as they thought it was.

Comment by trammell on Ramiro's Shortform · 2020-01-17T14:10:13.811Z · score: 5 (4 votes) · EA · GW

Yes, governments lower the SDR as the interest rate changes. See for example the US Council of Economic Advisers's recommendation on this three years ago: https://obamawhitehouse.archives.gov/sites/default/files/page/files/201701_cea_discounting_issue_brief.pdf

While the "risk-free" interest rate is roughly zero these days, the interest rate to use when discounting payoffs from a public project is the rate of return on investments whose risk profile is similar to that of the public project in question. This is still positive for basically any normal public project.

Comment by trammell on A collection of researchy projects for Aspiring EAs · 2019-12-02T15:43:46.371Z · score: 2 (2 votes) · EA · GW

Thanks, looks like a useful resource!

For some EA-motivated research project ideas in economics and philosophy, hopefully the GPI Research Agenda also serves as a useful resource.

(Edit: I see that the document links to Effective Thesis's list of research agendas, of which GPI's is one. Sorry for the redundancy.)

Comment by trammell on Existential Risk and Economic Growth · 2019-10-26T15:23:00.005Z · score: 11 (3 votes) · EA · GW

Still no summary of the paper as a whole, but if you're interested, I just wrote a really quick blog post which summarizes one takeaway. https://philiptrammell.com/blog/45

Comment by trammell on Are we living at the most influential time in history? · 2019-09-12T22:02:05.377Z · score: 8 (6 votes) · EA · GW

Interesting finds, thanks!

Similarly, people sometimes claim that we should discount our own intuitions of extreme historic importance because people often feel that way, but have so far (at least almost) always been wrong. And I’m a bit skeptical of the premise of this particular induction. On my cursory understanding of history, it’s likely that for most of history people saw themselves as part of a stagnant or cyclical process which no one could really change, and were right. But I don’t have any quotes on this, let alone stats. I’d love to know what proportion of people before ~1500 thought of themselves as living at a special time.

Comment by trammell on Does any thorough discussion of moral parliaments exist? · 2019-09-08T12:19:59.689Z · score: 1 (1 votes) · EA · GW

Ah cool, thanks

Comment by trammell on Does any thorough discussion of moral parliaments exist? · 2019-09-07T12:03:36.742Z · score: 10 (6 votes) · EA · GW

Yeah, you're not the only one noticing the gap. Hilary and Owen have a paper under review somewhere formalizing it a bit more (I see you've linked to some slides Hilary put together on it), so keep an eye out for that.

Comment by trammell on Are we living at the most influential time in history? · 2019-09-04T21:46:22.928Z · score: 1 (1 votes) · EA · GW

And that P(simulation) > 0.

Comment by trammell on Are we living at the most influential time in history? · 2019-09-04T08:51:49.354Z · score: 2 (2 votes) · EA · GW

Also, even if one could say P(simulation | seems like HoH) >> P(not-simulation | seems like HoH), that wouldn’t be decision-relevant, since it could just be that P(simulation) >> P(not-simulation) in either case. What matters is which observation (seems like HoH or not) renders it more likely that the observer is being simulated.
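
A toy Bayes calculation of that point, with all numbers invented:

```python
# Toy Bayes calculation, all numbers invented: the posterior
# P(sim | seems like HoH) can dwarf P(not-sim | seems like HoH) purely
# because the prior P(sim) is large. With equal likelihoods, seeming
# like HoH is no evidence at all -- the likelihood ratio is what matters.
p_sim = 0.99                # prior that the observer is simulated
p_hoh_given_sim = 0.5       # likelihoods deliberately set equal...
p_hoh_given_not = 0.5       # ...so the observation carries no information
p_hoh = p_sim * p_hoh_given_sim + (1 - p_sim) * p_hoh_given_not
print(p_sim * p_hoh_given_sim / p_hoh)   # 0.99: huge posterior, zero update
```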

Comment by trammell on Are we living at the most influential time in history? · 2019-09-04T08:45:25.832Z · score: 2 (2 votes) · EA · GW

We have no idea if simulations are even possible! We can’t just casually assert “P(seems like HoH | simulation) > P(seems like HoH | not simulation)”! All that we can reasonably speculate is that, if simulations are made, they’re more likely to be of special times than of boring times.

Comment by trammell on Existential Risk and Economic Growth · 2019-09-03T14:16:19.470Z · score: 27 (12 votes) · EA · GW

As the one who supervised him, I too think it's a super exciting and useful piece of research! :)

I also like that its setup suggests a number of relatively straightforward extensions for other people to work on. Three examples:

  • Comparing (1) the value of an increase to B (e.g. a philanthropist investing / subsidizing investment in safety research) and (2) the value of improved international coordination (moving to the "global impatient optimum" from a "decentralized allocation" of x-risk mitigation spending at, say, the country level) to (3) a shock to growth and (4) a shock to the "rate of pure time preference" on which society chooses to invest in safety technology. (The paper currently just compares (3) and (4).)
  • Seeing what happens when you replace the N^(epsilon - beta) term in the hazard function with population raised to a new exponent, say N^(mu), to allow for some risky activities and/or safety measures whose contribution to existential risk depends not on the total spent on them but on the amount per capita spent on them, or something in between.
  • Seeing what happens when you use a different growth model--in particular, one that doesn't depend on population growth.

Comment by trammell on Are we living at the most influential time in history? · 2019-09-03T12:31:57.308Z · score: 2 (2 votes) · EA · GW

Cool, thanks for getting all these ideas out there!

Possible correction: You write "P(simulation | seems like HoH ) >> P(not-simulation | seems like HoH)". Shouldn't the term on the right just be "P(simulation | doesn't seem like HoH)"?

Comment by trammell on Ask Me Anything! · 2019-08-25T10:01:30.704Z · score: 24 (16 votes) · EA · GW

Thank you, I'm flattered! But remember, all: Will MacAskill saying we have good arguments doesn't necessarily mean we have good arguments :)

Comment by trammell on Effective Thesis project review · 2019-03-06T19:18:15.586Z · score: 3 (2 votes) · EA · GW

Forethought just launched one a few hours ago!

Comment by trammell on An integrated model to evaluate the impact of animal products · 2019-01-10T18:52:53.999Z · score: 3 (7 votes) · EA · GW

Neither here nor there, but while we're counting possible biases, it may also be worth considering the possibilities that

  • people who conclude that farm animals' lives are good may select into farming, and people who conclude that they're bad may select out, making farmers "more optimistic than others" even before the self-serving bias; and, pointing the other way,
  • people who enter animal advocacy on grounds other than total utilitarianism could then have some bias against concluding that farm animals have lives above hedonic zero, since it could render their past moral efforts counterproductive (and maybe even kind of embarrassing).

Comment by trammell on An integrated model to evaluate the impact of animal products · 2019-01-09T13:01:22.988Z · score: 4 (4 votes) · EA · GW

Thanks so much for putting this together! I hadn't thought of the cross-price elasticity effects across types of animal products, but of course it's an important thing to incorporate.

Two extensions of this sort of analysis that I would be interested to see:

  • Are there any important cross-price elasticity effects between animal and non-animal (including non-food) products? For instance, if the worst type of meat is beef, as you estimate, then it could be good to buy products that use the same inputs as beef--a type of grain that grows best on the types of land suitable for cattle, say--because that will push up the price of beef and push people into less harmful meat products. (It makes sense that cross-price elasticity effects would tend to be largest within kinds of meat, but other products may still be worth considering, if this hasn't already been done; the toy calculation after this list illustrates the price mechanism.)
  • Just as the substitution effects across kinds of meat are presumably stronger than between meat and other things, the effects are presumably strongest within brands of a particular animal product. That is, maybe buying (less in-)humanely raised chicken or environmentally (less un-)friendly beef pushes up the price of that product in general, which causes people to consume less of it, leading to an improvement overall, even though the purchased product itself still does net damage. How much would these within-product considerations change things?
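
Here's a toy version of the first bullet's price mechanism, with invented elasticities:

```python
# Toy version of the first bullet's mechanism, with invented elasticities:
# bidding up beef's inputs raises its price; consumers buy less beef and
# substitute somewhat toward other meats.
own_price_elasticity_beef = -0.6  # %-change in beef quantity per 1% price rise
cross_elasticity_chicken = 0.2    # %-change in chicken quantity (substitute)
beef_price_rise = 0.01            # +1% from competing for beef's inputs
print(own_price_elasticity_beef * beef_price_rise)  # -0.006: -0.6% beef
print(cross_elasticity_chicken * beef_price_rise)   # +0.002: +0.2% chicken
```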

Obviously there's no end to the possible extensions, until we have a complete model of the entire economy that lets us estimate the general equilibrium impact of switching from one product to another. But maybe there are a few more elasticities that would be relatively important and tractable to consider.