Posts

2-week summer course in "economic theory and global prioritization": LMK if interested! 2021-11-19T11:12:37.013Z
A Model of Patient Spending and Movement Building 2021-11-08T18:00:17.481Z
Which World Gets Saved 2018-11-09T18:08:24.632Z
RPTP Is a Strong Reason to Consider Giving Later 2018-10-01T16:27:22.590Z

Comments

Comment by trammell on US bill limiting patient philanthropy? · 2021-11-23T14:58:53.278Z · EA · GW

There are now questions on Metaculus about whether this will pass:

https://www.metaculus.com/questions/8663/us-to-make-patient-philanthropy-harder-soon/ 

https://www.metaculus.com/questions/8664/patient-philanthropy-harder-in-the-us-by-30/ 

Comment by trammell on 2-week summer course in "economic theory and global prioritization": LMK if interested! · 2021-11-22T01:18:55.526Z · EA · GW

I am, thanks

Comment by trammell on 2-week summer course in "economic theory and global prioritization": LMK if interested! · 2021-11-20T16:30:38.491Z · EA · GW

Cool! I was thinking that this course would be a sort of early-stage / first-pass attempt at a curriculum that could eventually generate a textbook (and/or other materials) if it goes well and is repeated a few times, just as so many other textbooks have begun as lecture notes. But if you'd be willing to make something online / easier-to-update sooner, that could be useful. The slides and so on won't be done for quite a while, but I'll send them to you when they are.

Comment by trammell on 2-week summer course in "economic theory and global prioritization": LMK if interested! · 2021-11-20T16:22:17.460Z · EA · GW

Yup, I'll post the syllabus and slides and so on!

I'll also probably record the lectures, but probably not make them available except to the attendees, so they feel more comfortable asking questions. But if a lecture goes well, I might later use it as a template for a more polished/accessible video that is publicly available. (Some of the topics already have good lectures available online as well, though; in those cases I'd probably just link to those.)

Comment by trammell on 2-week summer course in "economic theory and global prioritization": LMK if interested! · 2021-11-20T16:15:54.588Z · EA · GW

Glad to hear you might be interested!

Thanks for pointing this out. It's tough, because (a) as GrueEmerald notes below, at least some European schools end later, and (b) it will be easier to provide accommodation in Oxford once the Oxford spring term is over (e.g. I was thinking of just renting space in one of the colleges). Once the application form is up*, I might include a When2Meet-type thing so people can put exactly what weeks they expect to be free through the summer.

*If this goes ahead; but there have been a lot of expressions of interest so far, so it probably will!

Comment by trammell on 2-week summer course in "economic theory and global prioritization": LMK if interested! · 2021-11-20T11:09:53.322Z · EA · GW

Sure. Those particular papers rely on a mathematical trick that only lets you work out how much a society should be willing to pay to avoid proportional losses in consumption. The x-risk case turns out to differ from this in several important ways, and the trick doesn’t generalize to cover them. But because the papers seem so close to being x-risk-relevant, I know of something like half a dozen EA econ students (including me) who have tried extending them at some point before giving up…

I’m aware of at least a few other “common EA econ theorist dead ends” of this sort, and I’ll try making a list, along with something written about each of them. When this and the rest of the course material is done, I’ll post it.

Comment by trammell on 2-week summer course in "economic theory and global prioritization": LMK if interested! · 2021-11-19T14:09:57.453Z · EA · GW

Good to know, thanks!

Video recordings are among the "more polished and scalable educational materials" I was thinking might come out of this; i.e. to some extent the course lectures would serve as a trial run for any such videos. That wouldn't be for a year or so, I'm afraid. But if it happens, I'll make sure to get a good attached mike, and if I can't get my hands on one elsewhere I'll keep you in mind. : )

Comment by trammell on A Model of Patient Spending and Movement Building · 2021-11-11T02:02:52.886Z · EA · GW

Thanks! A lot of good points here.

Re 1: if I'm understanding you right, this would just lower the interest rate from r to r − δ, where δ is the capital depreciation rate. So it wouldn't change any of the qualitative conclusions, except that it would make it more plausible that the EA movement (or any particular movement) is, for modeling purposes, "impatient". But cool, that's an important point. And particularly relevant these days; my understanding is that a lot of Will's (and others') excitement around finding megaprojects ASAP is driven by the sense that if we don't, some of the money will wander off.

Re 2: another good point. In this case I just think it would make the big qualitative conclusion hold even more strongly--no need to earn to give because money is even easier to come by, relative to labor, than the model suggests. But maybe it would be worth working through it after adding an explicit "wealth recruitment" function, to make sure there are no surprises.

Re 3: I agree, but I suspect--perhaps pessimistically--that the asymptotics of this model (if it's roughly accurate at all) bite a long time before EA wealth is a large enough fraction of global capital to push down the interest rate! Indeed, I don't think it's crazy to think they're already biting. Presumably the thing to do if you actually got to that point would be to start allocating more resources to R&D, to raise labor productivity and thus the return to capital. There are many ways I'd want to make the model more realistic before worrying about the constraints you run into when you start owning continents (a scenario for which there would presumably be plenty of time to prepare...!); but as noted, one of the extensions I'm hoping gets done before too long is to make (at least certain kinds of) R&D endogenous. So hopefully that would be at least somewhat relevant.

Comment by trammell on A Model of Patient Spending and Movement Building · 2021-11-11T01:29:45.104Z · EA · GW

Thanks! I agree that this might be another pretty important consideration, though I'd want to think a bit about how to model it in a way that feels relatively realistic and non-arbitrary.

E.g. maybe we should say people start out with a prior on the effectiveness of a movement at getting good things done, and instead of just being deterministically "recruited", they decide whether to contribute their labor and/or capital to a movement partly on the basis of their evaluation of its effectiveness, after updating on the basis of its track record.

Comment by trammell on A Model of Patient Spending and Movement Building · 2021-11-08T22:18:37.825Z · EA · GW

Good point, thanks!

Comment by trammell on Could EA be ideas constrained? · 2021-11-08T22:02:01.687Z · EA · GW

Good question! Yes, an ideas constraint absolutely could make sense.

My current favorite way to capture that possibility would be to model funding opportunities like consumer products as I do here. Pouring more capital and labor into existing funding opportunities might just bring you to an upper bound of impact, whereas thinking of new funding opportunities would raise the upper bound.

This is also one of the extensions I'm hoping to add to this model before too long. If you or anyone else reading this would be interested in working on that, especially if you have an econ background, let me know!

Comment by trammell on New Articles on Utilitarianism.net: Population Ethics and Theories of Well-Being · 2021-08-23T08:15:33.681Z · EA · GW

Thanks!

Comment by trammell on New Articles on Utilitarianism.net: Population Ethics and Theories of Well-Being · 2021-08-20T17:28:57.692Z · EA · GW

Nice to see this coming along! How many visitors has utilitarianism.net been getting?

Comment by trammell on Retention in EA - Part I: Survey Data · 2021-02-09T13:22:29.958Z · EA · GW

Thanks!

Comment by trammell on Retention in EA - Part I: Survey Data · 2021-02-06T09:53:03.800Z · EA · GW

Sorry, what’s REI work?

Comment by trammell on A Model of Value Drift · 2021-01-19T20:07:13.335Z · EA · GW

I think this is a valuable contribution—thanks for writing it! Among other things, it demonstrates that conclusions about when to give are highly sensitive to how we model value drift.

In my own work on the timing of giving, I’ve been thinking about value drift as a simple increase to the discount rate: each year philanthropists (or their heirs) face some x% chance of running off with the money and spending it on worthless things. So if the discount rate would have been d% without any value drift risk, it just rises to (d+x)% given the value drift risk. If the learning that will take place over the next year (and other reasons to wait, e.g. a positive interest rate) outweighs this (d+x)% (plus the other reasons why resources will be less valuable next year), it’s better to wait. But here we see that, if values definitely change a little each year, it might be best to spend much more quickly than if (as I’ve been assuming) they probably don’t change at all but might change a lot: in the former case, holding onto resources allows for a kind of slippery slope in which each year you change your judgments about whether or not to defer to the next year. So I’m really glad this was written and I look forward to thinking about it more.
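
To make the arithmetic concrete, here's a minimal sketch of that decision rule in code. The variable names (r, g, d, x) and all the numbers are mine, purely for illustration:

```python
# A back-of-the-envelope check of the "value drift as extra discounting" rule
# sketched above. All numbers are illustrative assumptions, not estimates.

def should_wait(r, g, d, x):
    """Wait a year iff the interest rate (r) plus the value of learning (g)
    outweigh the no-drift discount rate (d) plus the annual drift hazard (x).
    All arguments are annual rates, e.g. 0.02 for 2%."""
    return r + g > d + x

# 5% interest, learning worth 2%/year, 1% baseline discounting, and a 3%
# chance per year of the money "running off":
print(should_wait(r=0.05, g=0.02, d=0.01, x=0.03))  # True: 7% > 4%

# Double the drift hazard and waiting no longer pays:
print(should_wait(r=0.05, g=0.02, d=0.01, x=0.07))  # False: 7% < 8%
```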

One comment on the thesis itself: I think it’s a bit confusing at the beginning, where it says that decision-makers face a tradeoff between “what is objectively known about the world and what they personally believe is true.” The tradeoff they face is between acquiring information and maintaining fidelity to their current preferences, not to their current beliefs. The rest of the thesis is consistent with framing the problem as an information-vs.-preference-fidelity tradeoff, so I think this wording is just a holdover from a previous version of the thesis which framed things differently. But (Max) let me know if I’m missing something.

Comment by trammell on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-09T18:31:51.940Z · EA · GW

Sorry, no, that's clear! I should have noted that you say that too.

The point I wanted to make is that your reason for saving as an urgent longtermist isn't necessarily something like "we're already making use of all these urgent opportunities now, so might as well build up a buffer in case the money is gone later". You could just think that now isn't a particularly promising time to spend, period, but that there will be promising opportunities later this century, and still be classified as an urgent longtermist.

That is, an urgent longtermist could have stereotypically "patient longtermist" beliefs about the quality of direct-impact spending opportunities available in December 2020.

Comment by trammell on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-09T15:18:20.694Z · EA · GW

Thanks! I was going to write an EA Forum post at some point also trying to clarify the relationship between the debate over "patient vs urgent longtermism" and the debate over giving now vs later, and I agree that it's not as straightforward as people sometimes think.

On the one hand, as you point out, one could be a "patient longtermist" but still think that there are capacity-building sorts of spending opportunities worth funding now.

But I'd also argue that, if urgent longtermism is defined roughly as the view that there will be critical junctures in the next few decades, as you put it, then an urgent longtermist could still think it's worth investing now, so that more money will be spent near those junctures in a few decades. Investing to give in, say, thirty years is still pretty unusual behavior, at least for small donors, but totally compatible with "urgent longtermism" / "hinge of history"-type views as they're usually defined.

Comment by trammell on 'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions · 2020-08-25T10:43:07.776Z · EA · GW

Sure, I see how making people more patient has more-or-less symmetric effects on risks from arms race scenarios. But this is essentially separate from the global public goods issue, which you also seem to consider important (if I'm understanding your original point about "even the largest nation-states being only a small fraction of the world"), which is in turn separate from the intergenerational public goods issue (which was at the top of my own list).

I was putting arms race dynamics lower than the other two on my list of likely reasons for existential catastrophe. E.g. runaway climate change worries me a bit more than nuclear war; and mundane, profit-motivated tolerance for mistakes in AI or biotech (both within firms and at the regulatory level) worries me a bit more than the prospect of technological arms races.

That's not a very firm belief on my part--I could easily be convinced that arms races should rank higher than the mundane, profit-motivated carelessness. But I'd be surprised if the latter were approximately none of the problem.

Comment by trammell on 'Existential Risk and Growth' Deep Dive #2 - A Critical Look at Model Conclusions · 2020-08-24T09:14:33.056Z · EA · GW

I agree that the world underinvests in x-risk reduction (/overspends on activities that increase x-risk as a side effect) for all kinds of reasons. My impression would be that the two most important reasons for the underinvestment are that existential safety is a public good on two fronts:

  • long-term (but people just care about the short term, and coordination with future generations is impossible), and
  • global (but governments just care about their own countries, and we don't do global coordination well).

So I definitely agree that it's important that there are many actors in the world who aren't coordinating well, and that accounting for this would be an important next step.

But my intuition is that the first point is substantially more important than the second, and so the model assumes away much but not close to all of the problem. If the US cared about the rest of the world equally, that would multiply its willingness to pay for an increment of x-risk mitigation by maybe an order of magnitude. But if it had zero pure time preference but still just cared about what happened within its borders (or something), that would seem to multiply the WTP by many orders of magnitude.

Comment by trammell on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-21T23:56:38.253Z · EA · GW

Thanks! No need to inflict another recording of my voice on the world for now, I think, but glad to hear you like how the project is coming.

Comment by trammell on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-18T23:00:53.585Z · EA · GW

The post cites the Stern discussion to make the point that (non-discounted) utilitarian policymakers would implement more investment, but to my mind that’s quite different from the point that absent cosmically exceptional short-term impact the patient longtermist consequentialist would save. Utilitarian policymakers might implement more redistribution too. Given policymakers as they are, we’re still left with the question of how utilitarian philanthropists with their fixed budgets should prioritize between filling the redistribution gap and filling the investment gap.

In any event, if you/Owen have any more unpublished pre-2015 insights from private correspondence, please consider posting them, so those of us who weren’t there don’t have to go through the bother of rediscovering them. : )

Comment by trammell on The case of the missing cause prioritisation research · 2020-08-17T23:14:35.016Z · EA · GW

Thanks! I agree that people in EA—including Christian, Leopold, and myself—have done a fair bit of theory/modeling work at this point which would benefit from relevant empirical work. I don’t think this is what either of the current new economists will engage in anytime soon, unfortunately. But I don’t think it would be outside a GPI economist’s remit, especially once we’ve grown.

Comment by trammell on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-17T23:03:33.321Z · EA · GW

Sorry--maybe I’m being blind, but I’m not seeing what citation you’d be referring to in that blog post. Where should I be looking?

Comment by trammell on The case of the missing cause prioritisation research · 2020-08-17T00:02:30.056Z · EA · GW

Thanks, I definitely agree that there should be more prioritization research. (I work at GPI, so maybe that’s predictable.) And I agree that for all the EA talk about how important it is, there's surprisingly little really being done.

One point I'd like to raise, though: I don’t know what you’re looking for exactly, but my impression is that good prioritization research will in general not resemble what EA people usually have in mind when they talk about “cause prioritization”. So when putting together an overview like this, one might overlook some of even what little prioritization research is being done.

In my experience, people usually imagine a process of explicitly listing causes, thinking through and evaluating the consequences of working in each of them, and then ranking the results (kind of like GiveWell does with global poverty charities). I expect that the main reason more of this doesn’t exist is that, when people try to start doing this, they typically conclude it isn’t actually the most helpful way to shed light on which cause EA actors should focus on.

I think that, more often than not, a more helpful way to go about prioritizing is to build a model of the world, just rich enough to represent all the levers you’re choosing between and the ways you expect them to interact, and then to see how much better the world gets when you divide your resources among the levers this way or that. By analogy, a “naïve” government’s approach to prioritizing between, say, increasing this year’s GDP and decreasing this year’s carbon emissions would be to try to account explicitly for the consequences of each and to compare them. Taking the emissions-lowering side, this will produce a tangled web of positive and negative consequences, which interact heavily both with each other and with the consequences of increasing GDP: it will mean

  • less consumption this year,
  • less climate damage next year,
  • less accumulated capital next year with which to mitigate climate damage,
  • more of an incentive for people next year to allow more emissions,
  • more predictable weather and therefore easier production next year,
  • …but this might mean more (or less) emissions next year,
  • …and so on.

It quickly becomes clear that finishing the list and estimating all its items is hopeless. So what people do instead is write down an “integrated assessment model”. What the IAM is ultimately modeling, albeit in very low resolution, is the whole world, with governments, individuals, and various economic and environmental moving parts behaving in a way that straightforwardly gives rise to the web of interactions that would appear on that infinitely long list. Then, if you’re, say, a government in 2020, you just solve for the policy—the level of the carbon cap, the level of green energy subsidization, and whatever else the model allows you to consider—that maximizes your objective function, whatever that may be. What comes out of the model will be sensitive to the construction of the model, of course, and so may not be very informative. But I'd say it will be at least as informative as an attempt to do something that looks more like what people sometimes seem to mean by cause prioritization.
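
To make this concrete, here is a deliberately tiny toy model in that spirit. It is not DICE or any published IAM; every functional form and parameter below is an arbitrary assumption. The point is just the method: rather than enumerating consequences by hand, write down a small world-model with one lever (the share of output spent on emissions abatement) and solve for the setting that maximizes the objective:

```python
import numpy as np

# A toy two-period "integrated assessment model" (not DICE or any published
# IAM). One lever: the share of period-1 output spent on abatement.
# All functional forms and numbers are arbitrary assumptions.

def total_utility(abatement_share, output=100.0, damage_per_emission=0.8):
    """Log utility from consumption in two periods, no discounting.
    Abatement reduces period-1 consumption but limits the climate damage
    that unabated emissions impose on period-2 output."""
    c1 = output * (1 - abatement_share)          # consume what isn't spent on abatement
    emissions = 1 - abatement_share              # unabated emissions
    damage = damage_per_emission * emissions     # fraction of period-2 output lost
    c2 = output * (1 - damage)
    return np.log(c1) + np.log(c2)

# Solve for the optimal lever setting by grid search, instead of trying to
# list and weigh the web of consequences directly.
shares = np.linspace(0.0, 0.9, 91)
best = shares[np.argmax([total_utility(s) for s in shares])]
print(f"optimal abatement share: {best:.2f}")
```

The interactions on the "infinitely long list" (less consumption now, less damage later, and so on) are all implicitly priced by the model's structure; sensitivity to the model's construction is, as noted above, the price you pay.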

If the project of “writing down stylized models of the world and solving for the optimal thing for EAs to do in them” counts as cause prioritization, I’d say two projects I’ve had at least some hand in over the past year count: (at least sections 4 and 5.1 of) my own paper on patient philanthropy and (at least section 6.3 of) Leopold Aschenbrenner’s paper on existential risk and growth. Anyway, I don't mean to plug these projects in particular, I just want to make the case that they’re examples of a class of work that is being done to some extent and that should count as prioritization research.

…And examples of what GPI will hopefully soon be fostering more of, for whatever that’s worth! It’s all philosophy so far, I know, but my paper and Leo’s are going on the GPI website once they’re just a bit more polished. And we’ve just hired two econ postdocs I’m really excited about, so we’ll see what they come up with.

Comment by trammell on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-15T09:24:29.575Z · EA · GW

Hanson has advocated for investing for future giving, and I don't doubt he had this intuition in mind. But I'm actually not aware of any source in which he says that the condition under which zero-time-preference philanthropists should invest for future giving is that the interest rate incorporates beneficiaries' pure time preference. I only know that he's said that the relevant condition is when the interest rate is (a) positive or (b) higher than the growth rate. Do you have a particular source in mind?

Also, who made the "pure time preference in the interest rate means patient philanthropists should invest" point pre-Hanson? (Not trying to get credit for being the first to come up with this really basic idea, I just want to know whom to read/cite!)

Comment by trammell on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-15T09:15:40.788Z · EA · GW

That post just makes the claim that "all we really need are positive interest rates". My own point which you were referring to in the original comment is that, at least in the context of poverty alleviation (/increasing human consumption more generally), what we need is pure time preference incorporated into interest rates. This condition is neither necessary nor sufficient for positive interest rates.

Hanson's post then says something which sounds kind of like my point, namely that we can infer that it's better for us as philanthropists to invest than to spend if we see our beneficiaries doing some of both. But I could never figure out what he was saying exactly, or how it was compatible with the point he was trying to make that all we really need are positive interest rates.

Could you elaborate?

Comment by trammell on A List of EA Donation Pledges (GWWC, etc) · 2020-08-08T22:11:48.684Z · EA · GW

The GWWC Further Pledge

Comment by trammell on Utility Cascades · 2020-07-29T17:39:25.549Z · EA · GW

One Richard Chappell has a response here: https://www.philosophyetc.net/2020/03/no-utility-cascades.html

Comment by trammell on How Much Does New Research Inform Us About Existential Climate Risk? · 2020-07-23T06:51:38.548Z · EA · GW

In case the notation out of context isn’t clear to some forum readers: sensitivity S is the extent to which the earth will warm given a doubling of CO2 in the atmosphere. K denotes kelvins; a temperature change of 1 K is the same size as a change of 1 °C.

Comment by trammell on Should I claim COVID-benefits I don't need to give to charity? · 2020-05-15T15:47:04.054Z · EA · GW

I don't know what counts as a core principle of EA exactly, but most people involved with EA are quite consequentialist.

Whatever you should in fact do here, you probably wouldn't find a public recommendation to be dishonest. On purely consequentialist grounds, after accounting for the value of the reputation of the EA community and so on, what community guidelines (and what EA Forum advice) do you think would be better to write: those that go out of their way to emphasize honesty or those that sound more consequentialist?

Comment by trammell on Existential Risk and Economic Growth · 2020-05-12T11:08:46.856Z · EA · GW

I'm just putting numbers to the previous sentence: "Say the current (instantaneous) hazard rate is 1% per century; my guess is that most of this consists of (instantaneous) risk imposed by existing stockpiles of nuclear weapons, existing climate instability, and so on, rather than (instantaneous) risk imposed by research currently ongoing."

If "most" means "80%" there, then halting growth would lower the hazard rate from 1% to 0.8%.

Comment by trammell on Existential Risk and Economic Growth · 2020-05-10T17:02:19.113Z · EA · GW

Hey, thanks for engaging with this, and sorry for not noticing your original comment for so many months. I agree that in reality the hazard rate at t depends not just on the level of output and safety measures maintained at t but also on "experiments that might go wrong" at t. The model is indeed a simplification in this way.

Just to make sure something's clear, though (and sorry if this was already clear): Toby's 20% hazard rate isn't the current hazard rate; it's the hazard rate this century, but most of that is due to developments he projects occurring later this century. Say the current (instantaneous) hazard rate is 1% per century; my guess is that most of this consists of (instantaneous) risk imposed by existing stockpiles of nuclear weapons, existing climate instability, and so on, rather than (instantaneous) risk imposed by research currently ongoing. So if stopping growth would lower the hazard rate, it would be a matter of moving from 1% to 0.8% or something, not from 20% to 1%.

Comment by trammell on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T12:54:53.754Z · EA · GW

This paper is also relevant to the EA implications of a variety of person-affecting views. https://globalprioritiesinstitute.org/wp-content/uploads/2020/Teruji_Thomas_asymmetry_uncertainty.pdf

Comment by trammell on Phil Trammell: The case for ignoring the world’s current problems — or how becoming a ‘patient philanthropist’ could allow you to do far more good · 2020-04-16T22:14:55.273Z · EA · GW

Glad you liked it, and thanks for the good questions!

#1: I should definitely have spent more time on this / been more careful explaining it. Yes, x-risks should “feed straight into interest rates”, in the sense that a +1% chance of an x-risk per year should mean a 1% higher interest rate. So if you’re going to be

  • spending on something other than x-risk reduction; or
  • spending on x-risk reduction but only able to marginally lower the risk in the period you’re spending (i.e. not permanently lower the rate), and think that there will still be similar risk to mitigate in the next period conditional on survival,

then you should be roughly compensated for the risk. That is, under those circumstances, if investing seemed preferable to spending in the absence of the heightened risk, it should still seem that way given the heightened risk. This does all hold despite the fact that the heightened risk would give humanity such a short life expectancy.
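
To put toy numbers on that compensation claim (all values made up for illustration; this assumes, as in the bullets above, that there will still be similar opportunities next period conditional on survival):

```python
# A toy check that a hazard which "feeds into" the interest rate roughly
# compensates a patient spender. Illustrative numbers only.

def expected_growth_of_waiting(r, h):
    """Expected value next year per unit invested, relative to spending now:
    the fund earns r + h (the risk-inclusive interest rate), but the spending
    only matters in the (1 - h) fraction of worlds that survive."""
    return (1 - h) * (1 + r + h)

print(expected_growth_of_waiting(r=0.03, h=0.00))  # 1.03
print(expected_growth_of_waiting(r=0.03, h=0.01))  # ~1.0296: nearly unchanged
```

So to first order, the heightened risk leaves the invest-vs-spend comparison where it was.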

But I totally grant that these assumptions may not hold, and that if they don’t, the heightened risk can be a reason to spend more! I just wanted to point out that there is this force pushing the other way that turns out to render the question at least ambiguous.

#2: No, there’s no reductio here. Once you get big enough, i.e. are no longer a marginal contributor to the public goods you’re looking to fund, the diminishing returns to spending make it less worthwhile to grow even bigger. (E.g., in the human consumption case, you’ll eventually be rich enough that spending the first half of your fund would make people richer to the point that spending the second half would do substantially less for them.) Once the gains from further investing have fallen to the point that they just balance the (extinction / expropriation / etc.) risks, you should start spending, and continue to split between spending and investment so as to stay permanently on the path where you’re indifferent between the two.

If you're looking to fund some narrow thing only one other person's interested in funding, and you're perfectly patient but the other person is about as impatient as people tend to be, and if you start out with funds the same size, I think you'll be big enough that it's worth starting to spend after about fifty years. If you're looking to spend on increasing human consumption in general, you'll have to hold out till you're a big fraction of global wealth--maybe on the order of a thousand years. (Note that this means that you'd probably never make it, even though this is still the expected-welfare-maximizing policy.)

#3: Yes. If ethics turns out to contain pure time preference after all, or we have sufficiently weak duties to future generations for some other reason, then patient philanthropy is a bad idea. :(

Comment by trammell on On Waiting to Invest · 2020-04-11T15:20:49.293Z · EA · GW

Glad you liked it!

In the model I'm working on, to try to weigh the main considerations, the goal is to maximize expected philanthropic impact, not to maximize expected returns. I do recommend spending more quickly than I would in a world where the goal were just to maximize expected returns. My tentative conclusion that long-term investing is a good idea already incorporates the conclusion that it will most likely just involve losing a lot of money.

That is, I argue that we're in a world where the highest-expected-impact strategy (not just the highest-expected-return strategy) is one with a low probability of having a lot of impact and a high probability of having very little impact.

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-10T18:29:57.123Z · EA · GW

At the risk of repetition, I’d say that by the same reasoning, we could likewise add in our best estimates of saving a life on (just, say) total human welfare up to 2100.

Your response here was that “[p]opulation growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust”. But as with the first beneficiary, we can separate the direct welfare impact of population growth from all its other effects and observe that the former is a part of “sum u_i”, no?

Of course, estimates of shorter-term effects are usually more reliable than those of longer-term effects, for all sorts of reasons; but since we’re not arguing over whether saving lives in certain regions can be expected to increase population size up to 2100, that doesn’t seem to me like the point of dispute in this case.

I’m not sure where we’re failing to communicate exactly, but I’m a little worried that this is clogging the comments section! Let me know if you want to really try to get to the bottom of this sometime, in some other context.

Comment by trammell on On Waiting to Invest · 2020-04-10T16:46:08.808Z · EA · GW

Yup, no disagreement here. You're looking at what happens when we introduce uncertainty holding the absolute expected return constant, and I was discussing what happens when we introduce uncertainty holding the expected annual rate of return constant.

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-10T09:04:17.681Z · EA · GW

> If you give me a causal model, and claim A has a certain effect on B, without justifying rough effect sizes, I am by default skeptical of that claim and treat that like simple cluelessness: B conditional on changing A is identically distributed to B. You have not yet justified a systematic effect of A on B.

What I'm saying is, "Michael: you've given me a causal model, and claimed A (saving lives) has a positive effect on B (total moral value in the universe, given all the indirect effects), without justifying a rough effect size. You just justified a rough effect size on C (value to direct beneficiaries), but that's not ultimately what matters. By default I think A has no systematic effect on B, and you have not yet justified one."

> Is this an example of CC?

Yes, you have CC in that circumstance if you don't have evidential symmetry with respect to X.

Comment by trammell on On Waiting to Invest · 2020-04-10T00:13:39.034Z · EA · GW

Hey, I know that episode : )

Thanks for these numbers. Yes: holding expected returns equal, our propensity to invest should be decreasing in volatility.

But symmetric uncertainty about the long-run average rate of return—or to a lesser extent, as in your example, time-independent symmetric uncertainty about short-run returns at every period—increases expected returns. (I think this is the point I made that you’re referring to.) This is just the converse of your observation that, to keep expected returns equal upon introducing volatility, we have to lower the long-run rate from r to q = r – s^2/2.
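
Here is a small Monte Carlo sketch of both halves of this point; the rates and volatilities are made up for illustration:

```python
import numpy as np

# Illustrating the volatility point with made-up numbers.
rng = np.random.default_rng(0)
r, s, T, n = 0.05, 0.15, 50, 100_000  # log rate, volatility, years, sample paths

# (1) Keeping the long-run (log) rate at r while adding yearly volatility s
# raises expected returns above e^(rT)...
logs = rng.normal(loc=r, scale=s, size=(n, T)).sum(axis=1)
print(np.exp(logs).mean(), np.exp(r * T))  # first number is markedly larger

# ...so holding expected returns equal requires lowering the rate to
# q = r - s**2 / 2:
q = r - s**2 / 2
logs_q = rng.normal(loc=q, scale=s, size=(n, T)).sum(axis=1)
print(np.exp(logs_q).mean(), np.exp(r * T))  # now approximately equal

# (2) Symmetric uncertainty about the long-run average rate itself also
# raises expected returns, increasingly so at long horizons (Jensen's
# inequality):
uncertain_r = rng.normal(loc=r, scale=0.02, size=n)
print(np.exp(uncertain_r * T).mean(), np.exp(r * T))
```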

Whether these increased expected returns mean that patient philanthropists should invest more or less than they would under certainty is in principle sensitive to (a) the shape of the function from resources to philanthropic impact and (b) the behavior of other funders of the things we care about; but on balance, on the current margin, I’d argue it implies that patient philanthropists should invest more. I’ll try writing more on this at some point, and apologies if you would have liked a deeper discussion about this on the podcast.

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-09T20:10:09.906Z · EA · GW

> Population growth will be net good or bad depending on my credences about what the future would have looked like, but these credences are not robust.

Suppose for simplicity that we can split the effects of saving a life into

1) benefits accruing to the beneficiary;

2) benefits accruing to future generations up to 2100, through increased population size (following from (1)); and

3) further effects (following from (2)).

It seems like you're saying that there's some proposition X such that (3) is overall good if X and bad if not-X, where we can only guess at the probability of X; and that in this circumstance we can say that the overall effect of (2 & 3) is ~zero in expectation.

If that's right, what I'm struggling to see is why we can't likewise say that there's some proposition Y such that (2 & 3) is overall good if Y and bad if not-Y, where we can only guess at the probability of Y, and that the overall effect of (1 & 2 & 3) is therefore ~zero in expectation.

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-09T14:44:34.485Z · EA · GW

> Is the point that I'm confident they're larger in magnitude, but still not confident enough to estimate their expected magnitudes more precisely?

Yes, exactly—that’s the point of the African population growth example.

> Maybe I have a good idea of the impacts over each possible future, but I'm very uncertain about the distribution of possible futures. I could be confident about the sign of the effect of population growth when comparing pairs of counterfactuals, one with the child saved, and the other not, but I'm not confident enough to form distributions over the two sets of counterfactuals to be able to determine the sign of the expected value.

I don’t understand this paragraph. Could you clarify?

I don’t think I understand this either:

> I'm doubting the signs of the effects that don't come with estimates. If I have a plausible argument that doing X affects Y and Y affects Z, which I value directly and the effect should be good, but I don't have an estimate for the effect through this causal path, I'm not actually convinced that the effect through this path isn't bad.

Say you have a plausible argument that pushing a switch (doing X) pulls some number n > 0 of strings (so Y := #strings_pulled goes from 0 to n), each of which releases some food to m > 0 hungry lab mice (so Z := #fed_mice goes from 0 to nm), and you know that X and Y have no other consequences. You know that n, m > 0 but don't have estimates for them. At face value you seem to be saying you’re not convinced that the effect of pushing the switch isn’t bad, but that can’t be right!

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-08T23:29:48.084Z · EA · GW

No worries, sorry if I didn't write it as clearly as I could have!

BTW, I've had this conversation enough times now that last summer I wrote down my thoughts on cluelessness in a document that I've been told is pretty accessible—this is the doc I link to from the words "don't have an expected value". I know it can be annoying just to be pointed off the page, but just letting you know in case you find it helpful or interesting.

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-08T23:15:10.876Z · EA · GW

Hold on—now it seems like you might be talking past the OP on the issue of complex cluelessness. I 1000% agree that changing population size has many effects beyond those I listed, and that we can't weigh them; but that's the whole problem!

The claim is that CC arises when (a) there are both predictably positive and predictably negative indirect effects of (say) saving lives which are larger in magnitude than the direct effects, and (b) you can't weigh them all against each other so as to arrive at an all-things-considered judgment of the sign of the value of the intervention.

A common response to the phenomenon of CC is to say, "I know that the direct effects are good, and I struggle to weigh all of the indirect effects, so the latter are zero for me in expectation, and the intervention is appealing". But (unless there's a strong counterargument to Hilary's observation about this in "Cluelessness" which I'm unaware of), this response is invalid. We know this because if this response were valid, we could by identical reasoning pick out any category of effect whose effects we can estimate—the effect on farmed chicken welfare next year from saving a chicken-eater's life, say—and say "I know that the next-year-chicken effects are bad, and I struggle to weigh all of the non-next-year-chicken effects, so the latter are zero for me in expectation, and the intervention is unappealing".

The above reasoning doesn't invalidate that kind of response to simple cluelessness, because there the indirect effects have a feature—symmetry—which breaks when you cut up the space of consequences differently. But this means that, unless one can demonstrate that the distribution of non-direct effects has a sort of evidential symmetry that the distribution of non-next-year-chicken effects does not, one is not yet in a position to put a sign to the value of saving a life.

So, the response to

> What's the expected value (on net) of the indirect effects to you? Is its absolute value much greater than the direct effects' expected value?

is that, given an inability to weigh all the effects, and an absence of evidential symmetry, I simply don't have an expected value (or even a sign) of the indirect effects, or the total effects, of saving a life.

Does that clarify things at all, or am I the one doing the talking-past?

Comment by trammell on If you value future people, why do you consider near term effects? · 2020-04-08T21:40:41.540Z · EA · GW

Agreed that, at least from a utilitarian perspective, identity effects aren't what matter and feel pretty symmetrical, and that they're therefore not the right way to illustrate complex cluelessness. But when you say

> you need an example where you can justify that the outcome distributions are significantly different. I actually haven't been convinced that this is the case for any longtermist intervention

—maybe I'm misunderstanding you, but I believe the proposition being defended here is that the distribution of long-term welfare outcomes from a short-termist intervention differs substantially from the status quo distribution of long-term welfare outcomes (and that this distribution-difference is much larger than the intervention's direct benefits). Do you mean that you're not convinced that this is the case for any short-termist intervention?

Even though we don't know the magnitudes of today's interventions' long-term effects, I do think we can sometimes confidently say that the distribution-difference is larger than the direct effect. For instance, the UN's 95% confidence interval is that the population of Africa will multiply by about 3x to 5x by 2100 (here, p.7). One might think their confidence interval should be wider, but I don't see why the range would be upwards-biased in particular. Assuming that fertility in saved children isn't dramatically lower than population fertility, this strikes me as a strong reason to think that the indirect welfare effects of saving a young person's life in Africa today—indeed, even a majority of the effects on total human welfare before 2100—will be larger than the direct welfare effect.

Saving lives might lower fertility somewhat, thus offsetting this effect. But the (tentative) conclusion of what I believe is the only in-depth investigation on this is that there are some regions in which this offsetting is negligible. And note that if those UN projections are any guide, the fertility-lowering effect would have to be not just non-negligible but very close to complete for the direct welfare effects to outweigh the indirect.

Does that seem wrong to you?

Comment by trammell on [deleted post] 2020-04-02T07:36:59.050Z

No.

Comment by trammell on [deleted post] 2020-04-01T10:23:14.578Z

Yeah, agreed that using the white supremacist label needlessly poisons the discussion in both cases.

For whatever it’s worth, my own tentative guess would actually be that saving a life in the developing world contributes more to growth in the long run than saving a life in the developed world. Fertility in the former is much higher, and in the long run I expect growth and technological development to be increasing in global population size (at least over the ranges we can expect to see).

Maybe this is a bit off-topic, but I think it’s worth illustrating that there’s no sense in which the longtermist discussion about saving lives necessarily pushes in a so-called “white supremacist” direction.

Comment by trammell on [deleted post] 2020-03-31T21:46:58.482Z

Thanks for pointing that out!

For those who might worry that you're being hyperbolic, I'd say that the linked paper doesn't say that they are white supremacists. But it does claim that a major claim from Nick Beckstead's thesis is white supremacist. Here is the relevant quote, from pages 27-28:

"As he [Beckstead] makes the point,

>> saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards, at least by ordinary enlightened humanitarian standards, saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.

This is overtly white-supremacist."

The document elsewhere clarifies that it is using the term white supremacism to refer to systems that reinforce white power, not only to explicit, conscious racism. But I agree that this is far enough from how most people use the terminology that it doesn't seem like a very helpful contribution to the discussion.

Comment by trammell on Why not give 90%? · 2020-03-23T14:58:32.045Z · EA · GW

I downvoted the comment because it's off-topic.

Comment by trammell on Phil Trammell: The case for ignoring the world’s current problems — or how becoming a ‘patient philanthropist’ could allow you to do far more good · 2020-03-18T21:11:25.263Z · EA · GW

Thanks!

I'm far from qualified to give career advice to people who are already full-time academics, but I suppose I'd say,

  • If you've just graduated and are looking for a post-doc opportunity, or are coming from outside academia and willing to move to Oxford, then apply to GPI.
  • If you're already an academic elsewhere, then get in touch, come to one of the workshops GPI holds at the end of each Oxford term, and try shifting your research in a GPR direction. (We put together such a long research agenda partly in the hope that lots of interested researchers elsewhere will find something in it that they can get excited about.)
  • If you're a senior enough academic that you could set up a respectable global priorities research center elsewhere, then definitely get in touch! That could turn out to be a great idea, especially if you're an economist at a higher-ranked department than Oxford's. Forethought--GPI's sister org, which funds GPR activity outside of Oxford--would be a place to apply for funding for a project along those lines.