Towards a Weaker Longtermism

post by Davidmanheim · 2021-08-08T08:31:03.727Z · EA · GW · 73 comments

Contents

  Philosophical grounding
  Does 'regular longtermism' say anything?
  What now?

A key (new-ish) proposition in EA discussions is "Strong Longtermism," that the vast majority of the value in the universe is in the far future, and that we need to focus on it. This far future is often understood to be so valuable that almost any amount of preference for the long term is justifiable. 

In this brief post, I want to argue that this strong claim is unnecessary, that it creates new problems a weaker claim easily avoids, and that it should be replaced with that weaker claim. (I am far from the first to propose this [EA(p) · GW(p)].)

The 'regular longtermism' claim, as I present it, is that we should assign approximately similar value to the long-term future as we do to the short term. This is a philosophically difficult position which nonetheless, I argue, is superior to either the status quo or strong longtermism.

Philosophical grounding

The typical presentation of longtermism is that, if we do not discount future lives exponentially, almost any weight placed on the future (which can almost certainly be made massively larger than the present) will overwhelm the value of the present. This is hard to justify intuitively: it implies that we should ignore near-term costs, and (taken to the extreme) could justify almost any atrocity in pursuit of a minuscule reduction of long-term risks.

The typical alternative is naïve economic discounting, which assumes that we should exponentially discount the far future at some finite rate. This leads to claims such as: a candy bar today is worth more than the entire future of humanity starting in, say, 10,000 years. This is also hard to justify intuitively.
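To see how harsh strict exponential discounting is, here is a back-of-the-envelope sketch in Python. The 3% rate, the 10^100 future lives, and the candy-bar valuation are all invented for illustration, not taken from the post:

```python
# A sketch of how exponential discounting trivializes the far future.
# All numbers below are illustrative assumptions, not claims from the post.
r = 0.03                      # assumed annual discount rate
years = 10_000
discount = (1 + r) ** -years  # on the order of 10^-129

future_lives = 1e100          # an assumed (generous) guess at long-term potential
present_value = future_lives * discount

candy_bar_value = 1e-9        # assumed: a tiny fraction of one present life
print(f"discount factor over {years} years: {discount:.2e}")
print(f"present value of {future_lives:.0e} future lives: {present_value:.2e}")
print(present_value < candy_bar_value)  # True: the candy bar wins
```

Under these assumptions the discount factor is so small that even 10^100 future lives are worth less than essentially anything today, which is the intuition-violating result the post describes.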

A third perspective roughly justifies the current position: we should discount the future at the rate current humans think is appropriate, but also separately place significant value on having a positive long-term future. This preserves both the value of humanity's long-term future, if positive, and the preference for the present. Lacking any strong justification for setting the balance, I will very tentatively claim the two should be weighted approximately equally, but the exact figure is not critical - almost any non-trivial weight on the far future would be a large shift from the status quo towards longer-term thinking. This may be non-rigorous, but it has many attractive features.

The key questions, it seems, are whether the new view differs from the alternatives, and whether the exact weights on the near and long term matter in practice.

Does 'regular longtermism' say anything?

Do the different positions lead to different conclusions in the short term? If they do not, there is clearly no reason to prefer strong longtermism. If they do, almost all of these differences are intuitively worrying. Strong longtermism implies we should make much larger near-term sacrifices, and justifies ignoring near-term problems like global poverty unless they have large impacts on the far future. Strong neartermism, i.e. strict exponential discounting, implies that we should do approximately nothing about the long-term future.

So, does regular longtermism suggest less focus on reducing existential risks, compared to the status quo? Clearly not. In fact, it suggests overwhelmingly more effort should be spent on avoiding existential risk than is currently devoted to the task. It may suggest less effort than strong longtermism does, but only to the extent that we have very strong epistemic reasons for thinking that very large short-term sacrifices are effective.

What now?

I am unsure that there is anything new in this post. At the same time, the debate seems to have crystallized into two camps, both of which I strongly disagree with: the "anti-longtermist" camp, typified by Phil Torres, who is horrified by the potentially abusive uses of longtermism, and Vaden Masrani, who wrote a criticism of the idea, versus the "strong longtermism" camp, typified by Toby Ord (Edit: see Toby's comment) and Will MacAskill (Edit: see Will's comment), who seem to imply that Effective Altruism should focus entirely on longtermism. (Edit: I should now say that it turns out that this is a weak-man argument, but also note that several commenters explicitly say they embrace this viewpoint.)

Given the putative dispute, I would be very grateful if we could start to figure out as a community whether the strong form of longtermism is a tentative attempt to work out a coherent position that avoids potentially worrying implications, or whether it is intended as a philosophical shibboleth. I will note that my typical-mind-fallacy view is that both sides actually endorse, or at least only slightly disagree with, my mid-point view, but I may be completely wrong.

 

  1. Note that Will has called this "very strong longtermism" [EA · GW], but it seems unclear how a line is drawn between the very strong and strong forms, especially because the definition-based version he proposes, that human lives in the far future are equally valuable and should not be discounted, seems to lead directly to this very strong longtermist conclusion.
  2. (Edited to add:) In contrast, any split of value between near-term and long-term value completely changes the burden of proof for longtermist interventions. As noted here [EA · GW], given strong longtermism, we would have a clear case for any positive-expectation risk reduction measure, and the only possible response to refute it is a claim that the expectation in terms of reduced risk is negative. With a weaker form, we can perform cost-benefit analysis to decide whether the loss in the near-term is worthwhile.

73 comments


comment by EliezerYudkowsky · 2021-08-08T14:39:14.843Z · EA(p) · GW(p)

The reason we have a deontological taboo against “let’s commit atrocities for a brighter tomorrow” is not that people have repeatedly done this, it worked exactly like they said it would, and millions of people received better lives in exchange for thousands of people dying unpleasant deaths exactly as promised.

The reason we have this deontological taboo is that the atrocities almost never work to produce the promised benefits. Period. That’s it. That’s why we normatively should have a taboo like that.

(And as always in a case like that, we have historical exceptions that people don’t like to talk about because they worked, eg, Knut Haukelid, or the American Revolution. And these examples are distinguished among other factors by a found mood (the opposite of a missing mood) which doesn’t happily jump on the controversial wagon for controversy points, nor gain power and benefit from the atrocity; but quietly and regretfully kills the innocent night watchman who helped you, to prevent the much much larger issue of Nazis getting nuclear weapons.)

This logic applies without any obvious changes to “let’s commit atrocities in pursuit of a brighter tomorrow a million years away” just like it applies to “let’s commit atrocities in pursuit of a brighter tomorrow in 2 years”. Literally any nice thing somebody says you could get would “justify atrocities”, in exactly the same way, if you forgot this rule. If you admit the existence of thousands of American schoolchildren getting suboptimally nutritious lunches, it could, oh no, justify abducting and torturing businessmen into using their ATM cards so you could get more money for the schoolchildren. Obviously then those children must not exist, or maybe they don’t have qualia so their suffering won’t be important, because if they existed and mattered that could justify atrocities, couldn’t it?

There is nothing special about longtermism compared to any other big desideratum in this regard. It is 100% unjustified special attention because people don’t like the desideratum itself. The same way that people ask “How can we spend money on AI safety when children are starving now?” but their mind doesn’t make the same leap about “How can we spend money on fighting global warming when children are starving now?” or say “Hey maybe we should critique total spending on lipstick advertising before we critique spending on rockets.”

As always, transhumanism done correctly is just humanism.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-08T15:02:05.872Z · EA(p) · GW(p)

Agreed, and that's a very good response to a position that one of the sides I critiqued has presented. But despite this and other reasons to reject their positions, I don't think the reverse theoretical claim that we should focus resources exclusively on longtermism is a reasonable one to hold, even while accepting the deontological taboo and dismissing those overwrought supposed fears.

Replies from: zdgroff
comment by zdgroff · 2021-08-08T17:57:16.521Z · EA(p) · GW(p)

There is nothing special about longtermism compared to any other big desideratum in this regard.

 

I'm not sure this is the case. E.g. Steven Pinker in Better Angels makes the case that utopian movements systematically tend to commit atrocities because this all-important end goal justifies anything in the medium term. I haven't rigorously examined this argument and think it would be valuable for someone to do so, but much of longtermism in the EA community, especially of the strong variety, is based on something like utopia.

One reason why you might intuitively think there would be a relationship is that shorter-term impacts are typically somewhat more bounded; e.g. if thousands of American schoolchildren are getting suboptimal lunches, this obviously doesn't justify torturing hundreds of thousands of people. With the strong longtermist claims it's much less clear that there's any sort of upper bound, so to draw a firm line against atrocities you end up looking to somewhat more convoluted reasoning (e.g. some notion of deontological restraint that isn't completely absolute but yet can withstand astronomical consequences, or a sketchy and loose notion that atrocities have an instrumental downside). 

Replies from: EliezerYudkowsky
comment by EliezerYudkowsky · 2021-08-08T18:21:03.458Z · EA(p) · GW(p)

There’s nothing convoluted about it! We just observe that historical experience shows that the supposed benefits never actually appear, leaving just the atrocity! That’s it! That’s the actual reason you know the real result would be net bad and therefore you need to find a reason to argue against it! If historically it worked great and exactly as promised every time, you would have different heuristics about it now!

comment by EliezerYudkowsky · 2021-08-08T16:26:36.380Z · EA(p) · GW(p)

The final conclusion here strikes me as just the sort of conclusion that you might arrive at as your real bottom line, if in fact you had arrived at an inner equilibrium between some inner parts of you that enjoy doing something other than longtermism, and your longtermist parts.  This inner equilibrium, in my opinion, is fine; and in fact, it is so fine that we ought not to need to search desperately for a utilitarian defense of it.  It is wildly unlikely that our utilitarian parts ought to arrive at the conclusion that the present weighs about 50% as much as our long-term future, or 25% or 75%; it is, on the other hand, entirely reasonable that the balance of what our inner parts vote on will end up that way.  I am broadly fine with people devoting 50%, 25% or 75% of themselves to longtermism, in that case, as opposed to tearing themselves apart with guilt and ending up doing nothing much, which seems to be the main alternative.  But you're just not going to end up with a utilitarian defense of that bottom line; if the future can matter at all, to the parts of us that care abstractly and according to numbers, it's going to end up mattering much more than the present; equivalently, any rationalization like exponential discounting that can imply averting this, is going to imply that it is better to eat an ice cream today and destroy a galaxy of happy sapient beings in ten million years.  This is crazy, and I think it makes a lot more sense to just admit that part of you cares about galaxies and part of you cares about ice cream and say that neither of these parts are going to be suppressed and beaten down inside you.

I think this is what actually yields an appeal of "regular longtermism", and since that's what actually produces the bottom line, I think that what produces this bottom line should just be directly called the justification for it - there's no point in reaching for a different argument for justification than for conclusion-production.

Replies from: Benjamin_Todd, Wei_Dai, Davidmanheim
comment by Benjamin_Todd · 2021-08-08T17:04:44.034Z · EA(p) · GW(p)

Are there two different proposals?

  1. Construct a value function = 0.5* (near term value) + 0.5* (far future value), and do what seems best according to that function.
  2. Spend 50% of your energy on the best longtermist thing and 50% on the best neartermist thing. (Or as a community, half of people do each.)
     

I think Eliezer is proposing (2), but David is proposing (1). Worldview diversification seems more like (2).

I have an intuition these lead different places – would be interested in thoughts.

Edit: Maybe if 'energy' is understood as 'votes from your parts' then (2) ends up the same as (1).

Replies from: Davidmanheim, elliottthornley
comment by Davidmanheim · 2021-08-08T17:14:42.310Z · EA(p) · GW(p)

Ahh - thanks. Yes, if that is what Eliezer is proposing, my above response misunderstood him - but either I misunderstood something, or it would be inconsistent with how I understood his viewpoint elsewhere about why we want to be coherent decision makers.

comment by elliottthornley · 2021-08-09T11:35:28.704Z · EA(p) · GW(p)

I remember Toby Ord gave a talk at GPI where he pointed out the following:

Let L be long-term value per unit of resources and N be near-term value per unit of resources. Then spending 50% of resources on the best long-term intervention (call it A) and 50% of resources on the best near-term intervention (call it C) will lead you to split resources equally between A and C. But the best thing to do on a 0.5*(near-term value)+0.5*(long-term value) value function is to devote 100% of resources to some third intervention B that scores well on both dimensions.

[Diagram: interventions A, B, and C plotted by near-term value vs. long-term value per unit of resources]
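Toby's point can be sketched numerically. The intervention scores below are invented purely for illustration; the point is only that splitting the budget between the per-axis winners can leave value on the table relative to maximizing the combined value function:

```python
# Hypothetical interventions scored as (near-term value, long-term value)
# per unit of resources. All numbers are invented for illustration.
interventions = {
    "A": (0.0, 10.0),  # best long-term intervention
    "B": (4.0, 8.0),   # decent on both dimensions
    "C": (6.0, 0.0),   # best near-term intervention
}

def combined(n, l):
    """The 0.5*(near-term) + 0.5*(long-term) value function."""
    return 0.5 * n + 0.5 * l

# Split strategy: 50% of resources to A (best long-term),
# 50% to C (best near-term).
split_total = 0.5 * combined(*interventions["A"]) + 0.5 * combined(*interventions["C"])

# Combined-function strategy: all resources to whichever single
# intervention maximizes the combined value function.
best = max(interventions, key=lambda k: combined(*interventions[k]))
optimal_total = combined(*interventions[best])

print(best)           # B
print(split_total)    # 4.0
print(optimal_total)  # 6.0
```

Under these made-up scores, the split strategy yields combined value 4.0 while going all-in on B yields 6.0, which is why proposals (1) and (2) in Ben's comment can come apart.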

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-09T12:43:18.419Z · EA(p) · GW(p)

That's exactly why it's important to clarify this. The position is that the entire value of the future has no more than a 50% weight in your utility function, not that each unit of future value is worth 50% as much.

comment by Wei_Dai · 2021-08-10T06:11:29.962Z · EA(p) · GW(p)

This is crazy, and I think it makes a lot more sense to just admit that part of you cares about galaxies and part of you cares about ice cream and say that neither of these parts are going to be suppressed and beaten down inside you.

Have you read Is the potential astronomical waste in our universe too small to care about? [LW · GW] which asks the question, should these two parts of you make a (mutually beneficial) deal/bet while being uncertain of the size of (the reachable part of) the universe, such that the part of you that cares about galaxies gets more votes in a bigger universe, and vice versa? I have not been able to find a philosophically satisfactory answer to this question.

If you do, then one or the other part of you will end up with almost all of the votes when you find out for sure the actual size of the universe. If you don't, that seems intuitively wrong also, analogous to a group of people who don't take advantage of all possible benefits from trade. (Maybe you can even be Dutch booked, e.g. by someone making separate deals/bets with each part of you, although I haven't thought carefully about this.)

Replies from: EliezerYudkowsky, WilliamKiely, Davidmanheim
comment by EliezerYudkowsky · 2021-09-23T20:27:06.273Z · EA(p) · GW(p)

It strikes me as a fine internal bargain for some nonhuman but human-adjacent species; I would not expect the internal parts of a human to be able to abide well by that bargain.

comment by WilliamKiely · 2021-08-13T19:52:01.347Z · EA(p) · GW(p)

I just commented on your linked astronomical waste post:

Wei, insofar as you are making the deal with yourself, consider that in the world in which it turns out that the universe could support doing at least 3^^^3 ops, you may not be physically capable of changing yourself to work more toward longtermist goals than you would otherwise. (I.e., human nature is such that making huge sacrifices to your standard of living and quality of life negatively affects your ability to work productively on longtermist goals for years.) If this is the case, then the deal won't work, since one part of you can't uphold the bargain. So in the world in which it turns out that the universe can support only 10^120 ops, you should not devote less effort to longtermism than you would otherwise, despite being physically capable of devoting less effort.

In a related kind of deal, both parts of you may be capable of upholding the deal, in which case I think such deals may be valid. But it seems to me that you don't need UDT-like reasoning and the deal framing to believe that your future self, with better knowledge of the size of the cosmic endowment, ought to change his behavior in the same way as implied by the deal argument. Example: if you're a philanthropist with a plan to spend $X of your wealth on short-termist philanthropy and $X on longtermist philanthropy when you're initially uncertain about the size of the cosmic endowment, because you think this is optimal given your current beliefs and uncertainty, then when you later find out that the universe can support 3^^^3 ops, I think this should cause you to shift how you spend your $2X to give more toward longtermist philanthropy, just because the longtermist philanthropic opportunities now seem more valuable. Similarly, if you find out that the universe can only support 10^120 ops, then you ought to update toward giving more to short-termist philanthropy.

So is there really a case for UDT-like reasoning plus hypothetical deals our past selves could have made with themselves suggesting that we ought to behave differently than more common reasoning suggests we ought to behave when we learn new things about the world? I don't see it.

Replies from: WilliamKiely
comment by WilliamKiely · 2021-08-13T19:52:18.611Z · EA(p) · GW(p)

Adding to this what's relevant to this thread, re Eliezer's model:

it makes a lot more sense to just admit that part of you cares about galaxies and part of you cares about ice cream and say that neither of these parts are going to be suppressed and beaten down inside you.

The way I think about the 'we can't suppress and beat down our desire for ice cream' is that it's part of our nature to want ice cream meaning that we literally can't just stop having ice cream, at least not without it harming our ability to pursue longtermist goals. (This is what I was referring to when I said above that the longtermist part of you would not be able to fulfill its end of the bargain in the world in which it turns out that the universe can support 3^^^3 ops.)

And we should not deny this fact about ourselves. Rather, we should accept it and go about eating ice cream, caring for ourselves, and working on short-termist goals that are important to us (e.g. reducing global poverty even in cases when it makes no difference to the long term future, to use David's example from the OP).

To do otherwise is to try to suppress and beat something out of you that cannot be taken out of you without harming your ability to productively pursue longtermist goals. (What I'm saying is similar to Julia's Cheerfully post.)

I don't think this is a rationalization in general, though it can be in some cases. Rather, in general, I think it is the correct attitude to take (given a "strong longtermist" view) in response to certain facts about our human nature.

The easiest way to see this is just to look at other people in the world who have done a lot of good or who are doing a lot of good currently. They have not beaten the part of themselves that likes ice cream out of themselves. As such, it is not a rationalization for you to make peace with the fact that you like ice cream and fulfill those wants of yours. Rather, that is the smart thing to do to allow to you to have more cheer and motivation to productively work on longtermist goals.

So I don't have any problem with the conclusion that the overwhelming majority of expected value lies in the long-term future. I don't feel any need to reject this conclusion and tell myself that I should accept a different bottom line that reads that 50% of the value is in the long-term future and 50% in the short term. Perhaps the behavioral policy I ought to follow is one in which I devote 50% of my time and effort to myself and my personal goals and 50% of my time and effort to longtermist goals, but that's not because the satisfaction I get from eating ice cream has great intrinsic value relative to future lives; it's because trying to devote much more of my time and effort to longtermist goals is counterproductive to the goal of advancing those longtermist goals. We know it's generally counterproductive because the other people in the world doing the most longtermist good are not actively trying to deny the part of themselves that cares about things like ice cream.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-15T06:57:07.950Z · EA(p) · GW(p)

This isn't really relevant to the point I was making, but the idea that longtermism has objective long-term value, while ice cream now is a moral failing, seems to presuppose moral objectivism. And that seems to be your claim - the only reason to value ice cream now is that it makes us better at improving the long term in practice. And I'm wondering why "humans are intrinsically unable to get rid of value X" is a criticism / shortcoming, rather than a statement about our values that should be considered in maximization. (To some extent, the argument for why to change our values is about coherency / stable time preferences, but that doesn't seem to be the claim here.)

Replies from: WilliamKiely
comment by WilliamKiely · 2021-08-16T02:28:53.587Z · EA(p) · GW(p)

I'm not sure I know what you mean by "moral objectivism" here. To try to clarify my view, I'm a moral anti-realist (though I don't think that's relevant to my point) and I'm fairly confident that the following is true about my values: the intrinsic value of my enjoyment of ice cream is no greater than the intrinsic value of other individuals' enjoyment of ice cream (assuming their minds are like mine and can enjoy it in the same way), including future individuals. I think we live at a time in history where our expected effect on the number of individuals that ultimately come into existence and enjoy ice cream is enormous. As such, the instrumental value of my actions (such as my action to eat or not eat ice cream) generally dwarfs the intrinsic value of my conscious experience that results from my actions. So it's not that there's zero intrinsic value to my enjoyment of ice cream, it's just that that intrinsic value is quite trivial in comparison to the net difference in value of the future conscious experiences that come into existence as a result of my decision to eat ice cream.

The fact that I have to spend some resources on making myself happy in order to do the best job at maximizing value overall (which mostly looks like productively contributing to longtermist goals, in my view) is just a fact about my nature. I don't see it as a criticism or shortcoming of my or human nature, just a thing that is true. So our preferences do matter also; it just happens that when trying to do the most good, we find that it's much easier to do good for future generations in expectation than it is to do good for ourselves. So the best thing to do ends up being to help ourselves to the degree that helps us help future generations the most (such that helping ourselves any more or less causes us to do less for longtermism). I think human nature is such that that optimal balance looks like us making ourselves happy, as opposed to us making great sacrifices and living lives of misery for the greater good.

Let me know if you're still unsure why I take the view that I do.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-16T17:05:41.438Z · EA(p) · GW(p)

I think I can restate your view: there is no objective moral truth, but individual future lives are equally valuable to individual present lives (I assume we will ignore the epistemic and economic arguments for now), and your life in particular has no larger claim on your values than anyone else's.

That certainly isn't incoherent, but I think it's a view that few are willing to embrace - at least in part because, even though you do admit that personal happiness, or caring for those close to you, is instrumentally useful, you also claim that it's entirely contingent, and that if new evidence were to emerge, you would endorse requiring personal pain to pursue greater future or global benefits.

Replies from: WilliamKiely
comment by WilliamKiely · 2021-08-17T22:02:28.083Z · EA(p) · GW(p)

I think that's an accurate restatement of my view, with the caveat that I do have some moral uncertainty, i.e. give some weight to the possibility that my true moral values may be different. Additionally, I wouldn't necessarily endorse that people be morally required to endure personal pain; personal pain would just be necessary to do greater amounts of good.

I think the important takeaway is that doing good for future generations via reducing existential risk is probably incredibly important, i.e. much more than half of expected future value exists in the long-term future (beyond a few centuries or millennia from now).

comment by Davidmanheim · 2021-08-10T16:52:22.657Z · EA(p) · GW(p)

I had not seen this, and it definitely seems relevant - but it's still much closer to strong longtermism than what I'm (tentatively) suggesting.

comment by Davidmanheim · 2021-08-08T17:11:14.397Z · EA(p) · GW(p)

Agreed - upon reflection, this was what wrote my bottom line, and yes, this seemed like essentially the only viable way of approaching longtermism, according to my intuitions. This also seems to match the moral intuitions of many people I have spoken with, given the various issues with the alternatives. And I didn't try to claim that 50% specifically was justified by anything - as you pointed out, almost any balance of short-termism and longtermism could be an outcome of what many humans actually embrace, but as I argued, if we are roughly utilitarian in each context with those weights, the different options lead to very similar conclusions in most contexts.

Granted that we are willing to be utilitarian by weighting across these two preferences, I believe that any one such weighting will lead to a coherent preference ordering - which is valuable if we don't want to be Dutch-booked, among other things. But I don't think that it's in some way more correct to start with "time-impartial utilitarianism is the correct objective morality" and ignore actual human intuitions about what we care about - which you seem to imply is the single coherent longtermist position, while my approach is justified only by preventing analysis paralysis - but perhaps I misunderstood.

 

comment by Benjamin_Todd · 2021-08-08T17:18:39.607Z · EA(p) · GW(p)

No-one is proposing we go 100% on strong longtermism, and ignore all other worldviews, uncertainty and moral considerations.

You say:

the "strong longtermism" camp, typified by Toby Ord and Will MacAskill, who seem to imply that Effective Altruism should focus entirely on longtermism. 

They wrote a paper about strong longtermism, but this paper is about clearly laying out a philosophical position, and is not intended as an all-considered assessment of what we should do. (Edit: And even the paper is only making a claim about what's best at the margin; they say in footnote 14 they're unsure whether strong longtermism would be justified if more resources were already spent on longtermism.)

In The Precipice – which is more intended that way - Toby is clear that he thinks existential risk should be seen as "a" key global priority, rather than "the only" priority. 

He also suggests the rough target of spending 0.1% of GDP on reducing existential risk, which is quite a bit less than 100%.

And he's clearly supported other issues with his life.

Will  is taking a similar approach in his new book about longtermism.

Even the most longtermist members of effective altruism typically think [EA · GW] we should allocate about 20% of resources to neartermist efforts. No-one says longtermist causes are astronomically more impactful.

I think there are bunch of ways we could make this weakening more precise, whether that's defining a weaker form of longtermism, worldview diversification, moral uncertainty etc. and that's an interesting discussion to be had. But I think it's important to start by pointing out that as far as I'm aware no key researchers hold the position you're contrasting against.

Replies from: RyanCarey, Halstead, tobytrem, JackRyan, Darius_Meissner
comment by RyanCarey · 2021-08-08T17:37:54.403Z · EA(p) · GW(p)

No-one says longtermist causes are astronomically more impactful.

Not that it undermines your main point - which I agree with, but a fair minority of longtermists certainly say and believe this.

Replies from: Darius_Meissner, Benjamin_Todd
comment by Darius_M (Darius_Meissner) · 2021-08-08T18:51:01.571Z · EA(p) · GW(p)

There is a big difference between (i) the very plausible claim that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, and (ii) the rather implausible claim that interventions targeted at improving the long-term are astronomically more important/cost-effective than those targeted at improving the near-term. It seems to me that many longtermists believe (i) but that almost no-one believes (ii).

Basically, in this context the same points apply that Brian Tomasik made in his essay "Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness" (https://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/)

Replies from: Alex HT, jackmalde, Habryka, Davidmanheim
comment by Alex HT · 2021-08-09T11:22:48.127Z · EA(p) · GW(p)

I tentatively believe (ii), depending on some definitions. I'm somewhat surprised to see Ben and Darius implying it's a really weird view, and it makes me wonder what I'm missing.

I don't want the EA community to stop working on all non-longtermist things. But the reason is because I think many of those things have positive indirect effects on the EA community. (I just mean indirect effects on the EA community, and maybe on the broader philanthropic community, I don't mean indirect effects more broadly in the sense of 'better health in poor countries' --> 'more economic growth' --> 'more innovation')

For example non-longtermist interventions are often a good way to demonstrate EA ideas and successes (eg. pointing to GiveWell is really helpful as an intro to EA); non-longtermist causes are a way for people to get involved with EA and end up working on longtermist causes (eg. Charlotte Siegmann incoming at GPI comes to mind as a great success story along those lines); work on non-longtermist causes has better feedback loops so it might improve the community's skills (eg. Charity Entrepreneurship incubatees probably are highly skilled 2-5 years after the program. Though I'm not sure that actually translates to more skill-hours going towards longtermist causes).

But none of these reasons are that I think the actual intended impact of non-longtermist interventions is competitive with longtermist interventions. Eg. I think Charity Entrepreneurship is good because it's creating a community and culture of founding impact-oriented nonprofits, not because [it's better for shrimp/there's less lead in paint/fewer children smoke tobacco products]. Basically, I think the only reason near-term interventions might be good is that they might make the long-term future go better.

I'm not sure what counts as 'astronomically' more cost-effective, but if it means ~1000x more important/cost-effective I might agree with (ii). It's hard to come up with a good thought experiment here to test this intuition.

One hypothetical is 'would you rather $10,000 gets donated to the Long-Term Future Fund, or $10 mil gets donated to GiveWell's Maximum Impact Fund?'. This is confusing though, because I'm not sure how important extra funding is in these areas. Another hypothetical is 'would you rather 10 fairly smart people devote their careers to longtermist causes (eg. following 80k advice), or 10,000 fairly smart people devote their careers to neartermist causes (eg. following AAC advice)?'. This is confusing because I expect 10,000 people working on effective animal advocacy to have some effect on the long-term future. Some of them might end up working on nearby longtermist things like digital sentience. They might slightly shift the culture of veganism to be more evidence-based and welfarist, which could lead to a faster flow of people from veganism to EA over time. They would also do projects which EA could point to as successes, which could be helpful for getting more people into EA and eventually into longtermist causes.

If I try to imagine a version of this hypothetical without those externalities, I think I prefer the longtermist option, indicating that the 1000x difference seems plausible to me.

I wonder if some of the reasons people don't hold the view I do are some combination of (1) 'this feels weird so maybe it's wrong' and (2) 'I don't want to be unkind to people working on neartermist causes'.

I think (1) does carry some weight and we should be cautious when acting on new, weird ideas that imply strange actions. However, I'm not sure how much longtermism actually falls into this category. 

  • The idea is not that new, and there's been quite a lot of energy devoted to criticising the ideas. I don't know what others in this thread think, but I haven't found much of this criticism very convincing.
  • Weak longtermism (future people matter morally) is intuitive for lots of people (though not all, which is fine). I concede strong longtermism is initially very unintuitive though
  • Strong longtermism doesn't imply we should do particularly weird things. It implies we should do things like: get prepared for pandemics, make it harder for people to create novel pathogens, reduce the risk of nuclear war, and take seriously the facts that we can't get current AI systems to do what we want even as AI systems are quickly becoming really impressive, and that some/most kinds of trend-extrapolation or forecasts imply AGI in the next 10-120 years. Sure, strong longtermism implies we shouldn't prioritise helping people in extreme poverty. But helping people in extreme poverty is not the default action; most people don't spend any resources on that at all. (This is similar to Eliezer's point above.)

I also feel the weight of (2). It makes me squirm to reconcile my tentative belief in strong longtermism with my admiration of many people who do really impressive work on non-longtermist causes, and my desire to get along with those people. I really think longtermists shouldn't make people who work on other causes feel bad. However, I think it's possible to commit to strong longtermism without making other people feel attacked or unappreciated. And I don't think these kinds of social considerations have any bearing on which cause to prioritise working on.

I feel like a big part of the edge of the EA and rationality community is that we follow arguments to their conclusions even when it's weird, or it feels difficult, or we're not completely sure. We make tradeoffs even when it feels really hard - like working on reducing existential risk instead of helping people in extreme poverty or animals in factory farms today.

I feel like I also need to clarify some things:

  • I don't try to get everyone I talk to to work on longtermist things. I don't think that would be good for the people I talk to, the EA community, or the longterm future
  • I really value hearing arguments against longtermism. These are helpful for finding out if longtermism is wrong, figuring out the best ways to explain longtermism, and spotting potential failure modes of acting on longtermism. I sometimes think about paying someone to write a really good, clear case for why acting on strong longtermism is most likely to be a bad idea
  • My all-things-considered view is a bit more moderate than this comment suggests, and I'm eager to hear Darius', Ben's, and others views on this
Replies from: Darius_Meissner
comment by Darius_M (Darius_Meissner) · 2021-08-09T16:54:01.966Z · EA(p) · GW(p)

I'm not sure what counts as 'astronomically' more cost effective, but if it means ~1000x more important/cost-effective I might agree with (ii).

This may be the crux - I would not count a ~1000x multiplier as anywhere near "astronomical", and I should probably have made this clearer in my original comment.

Claim (i), that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, refers to differences in value of something like 10^30x.

All my comment was meant to say is that it seems highly implausible that such a 10^30x multiplier also applies to claim (ii), regarding the expected cost-effectiveness differences of long-term-targeted versus near-term-targeted interventions.

It may cause significant confusion if the term "astronomical" is used in one context to refer to a 10^30x multiplier and in another context to a 1000x multiplier.

comment by jackmalde · 2021-08-08T19:38:18.496Z · EA(p) · GW(p)

It seems to me that many longtermists believe (i) but that almost no-one believes (ii).

Really? This surprises me. Combine (i) with the belief that we can tractably influence the far future and don't we pretty much get to (ii)?

Replies from: Darius_Meissner
comment by Darius_M (Darius_Meissner) · 2021-08-09T16:08:38.869Z · EA(p) · GW(p)

No, we probably don’t. All of our actions plausibly affect the long-term future in some way, and it is difficult to justifiably achieve very high levels of confidence about the expected long-term impacts of specific actions. We would require an exceptional degree of confidence to claim that the long-term effects of our specific longtermist intervention are astronomically (i.e. by many orders of magnitude) larger than the long-term effects of some random neartermist intervention (or even of doing nothing at all). Of course, this claim is perfectly compatible with longtermist interventions being a few orders of magnitude more impactful in expectation than neartermist interventions (but the difference is most likely not astronomical).

Brian Tomasik eloquently discusses this specific question in the above-linked essay. Note that while his essay focuses on charities, the same points likely apply to interventions and causes:

Occasionally there are even claims [among effective altruists] to the effect that "shaping the far future is 10^30 times more important than working on present-day issues," based on a naive comparison of the number of lives that exist now to the number that might exist in the future.

I think charities do differ a lot in expected effectiveness. Some might be 5, 10, maybe even 100 times more valuable than others. Some are negative in value by similar amounts. But when we start getting into claimed differences of thousands of times, especially within a given charitable cause area, I become more skeptical. And differences of 10^30 are almost impossible, because everything we do now may affect the whole far future and therefore has nontrivial expected impact on vast numbers of lives.

It would require razor-thin exactness to keep the expected impact on the future of one set of actions 10^30 times lower than the expected impact of some other set of actions. (…) Note that these are arguments about ex ante expected value, not necessarily actual impact. (…) Suggesting that one charity is astronomically more important than another assumes a model in which cross-pollination effects are negligible.

Brian Tomasik further elaborates on similar points in a second essay, Charity Cost-Effectiveness in an Uncertain World. A relevant quote:

When we consider flow-through effects of our actions, the seemingly vast gaps in cost-effectiveness among charities are humbled to more modest differences, and we begin to find more worth in the diversity of activities that different people are pursuing.

Replies from: anonymous_ea, jackmalde
comment by anonymous_ea · 2021-08-09T19:46:54.828Z · EA(p) · GW(p)

Phil Trammell's point in Which World Gets Saved [EA · GW] is also relevant:

It seems to me that there is another important consideration which complicates the case for x-risk reduction efforts, which people currently neglect. The consideration is that, even if we think the value of the future is positive and large, the value of the future conditional on the fact that we marginally averted a given x-risk may not be.

...

Once we start thinking along these lines, we open various cans of worms. If our x-risk reduction effort starts far "upstream", e.g. with an effort to make people more cooperative and peace-loving in general, to what extent should we take the success of the intermediate steps (which must succeed for the x-risk reduction effort to succeed) as evidence that the saved world would go on to a great future? Should we incorporate the fact of our own choice to pursue x-risk reduction itself into our estimate of the expected value of the future, as recommended by evidential decision theory, or should we exclude it, as recommended by causal? How should we generate all these conditional expected values, anyway?

Some of these questions may be worth the time to answer carefully, and some may not. My goal here is just to raise the broad conditional-value consideration which, though obvious once stated, so far seems to have received too little attention. (For reference: on discussing this consideration with Will MacAskill and Toby Ord, both said that they had not thought of it, and thought that it was a good point.) In short, "The utilitarian imperative 'Maximize expected aggregate utility!'" might not really, as Bostrom (2002) puts it, "be simplified to the maxim 'Minimize existential risk'".

comment by jackmalde · 2021-08-09T18:31:28.273Z · EA(p) · GW(p)

For the record, I'm not really sure about 10^30 times, but I'm open to 1000s of times.

And differences of 10^30 are almost impossible, because everything we do now may affect the whole far future and therefore has nontrivial expected impact on vast numbers of lives.

Pretty much every action has an expected impact on the future, in that we know it will radically alter the future, e.g. by altering the times of conceptions and therefore who lives in the future. But that doesn't necessarily mean we have any idea of the magnitude or sign of this expected impact. When it comes to giving to the Against Malaria Foundation, for example, I have virtually no idea what the expected long-run impacts are, or whether they would even be positive or negative - I'm just clueless. I also have no idea what the flow-through effects of giving to AMF are on existential risks.

If I'm utterly clueless about giving to AMF but I think giving to an AI research org has an expected value of 10^30, then in a sense my expected value of giving to the AI org is astronomically greater than giving to AMF (although it's sort of like comparing 10^30 to undefined, so it does get a bit weird...).

Does that make any sense?

comment by Habryka · 2021-08-09T08:38:23.090Z · EA(p) · GW(p)

the rather implausible claim that interventions targeted at improving the long-term are astronomically more important/cost-effective than those targeted at improving the near-term. It seems to me that many longtermists believe (i) but that almost no-one believes (ii).

I think I believe (ii), but it's complicated and I feel a bit confused about it. This is mostly because many interventions that target the near-term seem negative from a long-term perspective, because they increase anthropogenic existential risk by accelerating the speed of technological development. So it's pretty easy for there to be many orders of magnitude in effectiveness between different interventions (in some sense infinitely many, if I think that many interventions that look good from a short-term perspective are actually bad in the long term).

Replies from: Darius_Meissner
comment by Darius_M (Darius_Meissner) · 2021-08-09T16:20:04.105Z · EA(p) · GW(p)

Please see my above response [EA(p) · GW(p)] to jackmalde's comment. While I understand and respect your argument, I don't think we are justified in placing high confidence in this model of the long-term flowthrough effects of near-term targeted interventions. There are many similar more-or-less plausible models of such long-term flowthrough effects, some of which would suggest a positive net effect of near-term targeted interventions on the long-term future, while others would suggest a negative net effect. Lacking strong evidence that would allow us to accurately assess the plausibility of these models, we simply shouldn't place extreme weight on one specific model (and its practical implications) while ignoring other models (which may arrive at the opposite conclusion).

Replies from: Habryka
comment by Habryka · 2021-08-09T17:35:53.598Z · EA(p) · GW(p)

Yep, not placing extreme weight. Just medium levels of confidence that when summed over, add up to something pretty low or maybe mildly negative. I definitely am not like 90%+ confidence on the flowthrough effects being negative.

comment by Davidmanheim · 2021-08-08T18:56:25.529Z · EA(p) · GW(p)

I'm unwilling to pin this entirely on the epistemic uncertainty, and specifically don't think everyone agrees that, for example, interventions targeting AI safety aren't the only thing that matters, period. (Though this is arguably not even a longtermist position.)

But more generally, I want to ask the least-convenient-world question of what the balance should be if we did have certainty about impacts, given that you seem to agree strongly with (i).

comment by Benjamin_Todd · 2021-08-08T17:45:56.884Z · EA(p) · GW(p)

I was talking about the EA Leaders Forum results, where people were asked to compare dollars to the different EA Funds, and most were unwilling to say that one fund was even 100x higher-impact than another; maybe 1000x at the high end. That's rather a long way from 10^23 times more impactful.

Replies from: RyanCarey, Davidmanheim
comment by RyanCarey · 2021-08-08T18:49:48.332Z · EA(p) · GW(p)

Cool. Yeah, EA funds != cause areas, because people may think that the work done by EA funds in a cause area is net positive, whereas the total of work done in that area is negative. Or they may think that work done on some cause is 1/100th as useful as another cause, but only because it might recruit talent to the other - which is the sort of hard-line view that one might want to mention.

Replies from: Habryka, Benjamin_Todd
comment by Habryka · 2021-08-08T19:11:28.601Z · EA(p) · GW(p)

Indeed, I took that survey one year, and the reason why I wouldn't put the difference at 10^23 or something similarly extreme is that there are flowthrough effects of other cause areas that still help with longtermist stuff (like, GiveWell has been pretty helpful for also getting more work to happen on longtermist stuff).

I do think that, as a cause area from a utilitarian perspective, interventions that affect the long-term future are astronomically more effective than things that help the short-term future but are very unlikely to have any effect on the long term, or even slightly harm the long term.

comment by Benjamin_Todd · 2021-08-08T19:22:15.431Z · EA(p) · GW(p)

Sure, though I still think it's misleading to say that the survey respondents think "EA should focus entirely on longtermism".

Seems more accurate to say something like "everyone agrees EA should focus on a range of issues, though people put different weight on different reasons for supporting them, including long & near term effects, indirect effects, coordination, treatment of moral uncertainty, and different epistemologies."

Replies from: Habryka, RyanCarey
comment by Habryka · 2021-08-09T08:43:55.190Z · EA(p) · GW(p)

To be clear, my primary reason why EA shouldn't entirely focus on longtermism is that doing so would to some degree violate some implicit promises that the EA community has made to the external world. If that weren't the case, I think it would indeed make sense to deprioritize basically all the non-longtermist things.

To some degree my response to this situation is "let's create a separate longtermist community, so that I can indeed invest in that in a way that doesn't get diluted with all the other things that seem relatively unimportant to me". If we had a large and thriving longtermist community, it would definitely seem bad to me to suddenly start investing into all of these other things that EA does that don't really seem to check out (to me) from a utilitarian perspective, and I would be sad to see almost any marginal resources moved towards the other causes.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-09T12:49:22.745Z · EA(p) · GW(p)

I'm strongly opposed to this, and think we need to be clear: EA is a movement of people with different but compatible values, dedicated to understanding how to do good - and it's fine for you to discuss why you think longtermism is valuable, but it's not as though anyone gets to tell the community what values the community should have.

The idea that there is a single "good" which we can objectively find and then maximize is a bit confusing to me, given that we know values differ. (And this has implications for AI alignment, obviously.) Instead, EA is a collaborative endeavor of people with compatible interests - if strong-longtermists' interests really are incompatible with most of EA, as yours seem to be, that's a huge problem - especially because many of the people who seem to embrace this viewpoint are in leadership positions. I didn't think it was the case that there was such a split, but perhaps I am wrong.

Replies from: Habryka
comment by Habryka · 2021-08-09T17:44:02.125Z · EA(p) · GW(p)

I think we don't disagree?

I agree, EA is a movement of different but compatible values, and given its existence, I don't want to force anything on it, or force anyone to change their values. It's a great collaboration of a number of people with different perspectives, and I am glad it exists. Indeed the interests of different people in the community are pretty compatible, as evidenced by the many meta interventions that seem to help many causes at the same time.

I don't think my interests are incompatible with most of EA, and am not sure why you think that? I've clearly invested a huge amount of my resources into making the broader EA community better in a wide variety of domains, and generally care a lot about seeing EA broadly get more successful and grow and attract resources, etc.

But I think it's important to be clear which of these benefits are gains from trade vs. things I "intrinsically care about" (speaking a bit imprecisely here). If I could somehow get all of these resources and benefits without having to trade things away, and instead just build something of similar scale and success that was more directly aligned with my values, that seems better to me. I think historically this wasn't really possible, but with longtermist stuff finding more traction, I am now more optimistic about it. But also, I still expect EA to provide value for the broad range of perspectives under its tent, and expect that investing in it in some capacity or another will continue to be valuable.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-09T18:45:43.546Z · EA(p) · GW(p)

Sorry, this was unclear, and I'm both not sure that we disagree, and want to apologize if it seemed like I was implying that you haven't done a tremendous amount for the community, don't hope for its success, etc. I do worry that there is a perspective (which you seem to agree with) that if we magically removed all the various epistemic issues with knowing the long-term impacts of decisions, longtermists would no longer be aligned with others in the EA community.

I also think that longtermism is plausibly far better as a philosophical position than as a community, as mentioned in a different comment [EA(p) · GW(p)], but that point is even farther afield, and needs a different post and a far more in-depth discussion.

comment by RyanCarey · 2021-08-09T09:46:11.407Z · EA(p) · GW(p)

Agree it's more accurate. How I see it: 
> Longtermists overwhelmingly place some moral weight on non-longtermist views and support the EA community carrying out some non-longtermist projects. Most of them, but not all, diversify their own time and other resources across longtermist and non-longtermist projects. Some would prefer to partake in a new movement that focused purely on longtermism, rather than EA.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-09T12:41:42.018Z · EA(p) · GW(p)

Worth noting the ongoing discussions about how longtermism is better thought of / presented as a philosophical position rather than a social movement. 

The argument is something like: just like effective altruists can be negative utilitarians or deontologists or average utilitarians, and just like they can have differing positions about the value of animals, the environment, and wild animal suffering, they can have different views about longtermism. And just like policymakers take different viewpoints into account without needing to commit to anything, longtermism as a position can exist without being a movement you need to join.

comment by Davidmanheim · 2021-08-08T19:01:49.933Z · EA(p) · GW(p)

Good points, but if I understand what you're saying, that survey was asking about the specific interventions funded by those funds given our epistemic uncertainties, not about the balance of actual value in the near term versus the long term, or what the ideal focus would be if we found the optimal investments for each.

comment by Halstead · 2021-08-08T19:52:24.341Z · EA(p) · GW(p)

I do think it is important to distinguish these moral uncertainty reasons from moral trade, cooperation, and strategic considerations for hedging. My argument for putting some focus on neartermist causes would be of this latter kind; the putative moral uncertainty/worldview diversification arguments for hedging carry little weight with me.

As an example, Greaves and Ord argue that under the expected choiceworthiness approach, our metanormative ought is practically the same as the total utilitarian ought.

It's tricky because the paper on strong longtermism makes the theory sound like it does want to completely ignore other causes - eg 'short-term effects can be ignored'. I think it would be useful to have a source to point to that states 'the case for longtermism' without giving the impression that no other causes matter.

comment by tobytrem · 2021-08-10T16:00:41.176Z · EA(p) · GW(p)

Just to second this, because it seems to be a really common mistake: Greaves and MacAskill stress in the strong longtermism paper that the aim is to advance an argument about what someone should do with their impartial altruistic budget (of time or resources), not to tell anyone how large that budget should be in the first place.

Also - I think the author would be able to avoid what they see as a "non-rigorous" decision to weight the short term and the long term the same by reconceptualising the uneasiness around longtermism dominating their actions as an uneasiness with the totally impartial budget taking up more space in their life. I think everyone I have talked to about this feels a pull to support present-day people and problems alongside the future, so it might help to just bracket off the present-day section of your commitments away from the totally impartial side, especially if the argument against the longtermist conclusion is that it precludes other things you care about. No one can live an entirely impartial life, and we should recognise that, but this doesn't necessarily mean that the arguments for the rightness of doing so are wrong.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-11T06:48:57.700Z · EA(p) · GW(p)

Thanks, that is valuable, but there are a couple of pieces here I want to clarify. I agree that there is space for people to have a budget for non-impartial altruistic donations. I am arguing that within the impartial altruistic budget, we should have a place for a balance between discounted values that emphasize the short term and impartial welfarist longtermism. Perhaps this is what you mean by "bracket off the present day section of your commitments away from the totally impartial side." 

For example, I give at least 10% of my budget to altruistic causes, but I reserve some of the money for GiveWell, the Against Malaria Foundation, and similar charities, rather than focusing entirely on longtermist causes. This is in part moral uncertainty, at least on my part, since even putting aside the predictability argument, the argument for prioritizing possible future lives rests on a few assumptions that are debatable.

But I'm very unhappy with the claim that "No one can live an entirely impartial life and we should recognise that," which is largely what led to the post. This type of position implies, among other things, that morality is objective and independent of instantiated human values, and that we're saying everyone is morally compromised. If what we are claiming as impartial welfare maximization requires that philosophical position, and we also agree it's not something people can do in practice, I'd argue we are doing something fundamentally wrong both in practice (condemning everyone as immoral while saying they should do better) and in theory (saying that longtermist EA only works given an objective utilitarian position on morality). Thankfully, I disagree, and I think these problems are both at least mostly fixable - hence my (still-insufficient, partially worked-out) argument in the post. But I wasn't trying to solve morality ab initio based on my intuitions. And perhaps I need to extend it to the more general position of how to allocate money and effort across both personal and altruistic spending - which seems like a far larger, if not impossible, general task.

Replies from: tobytrem
comment by tobytrem · 2021-08-11T08:21:00.901Z · EA(p) · GW(p)

Thanks for the post and the response, David; that helpfully clarifies where you are coming from. What I was trying to get at is that if you want to say that strong longtermism isn't the correct conclusion for an impartial altruist who wants to know what to do with their resources, then that calls for more argument as to where the strong longtermist's mistake lies, or where the uncertainty should be. On the other hand, it would be perfectly possible to say that the impartial altruist should end up endorsing strong longtermism, while recognising that you yourself are not entirely impartial (and be done with the issue). Personally, I also think that strong longtermism relies on very debatable grounds, and I would also put some uncertainty on the claim "the impartial altruist should be a strong longtermist" - the tricky and interesting thing is working out where we disagree with the longtermist.

(also I recognise as you said that this post is not supposed to be a final word on all these problems, I'm just pointing to where the inquiry could go next). 

On the second part of your response, I think that depends on what motivates you and what your general worldview is. I don't believe in objective moral facts, but I also generally see the world as a place where each and all could do better. For some that helps motivate action, for some it causes angst- I don't think there is a correct view there. 

Separately I do actually worry that strong longtermism only works for consequentialists (though you don't have to believe in objective morality). The recent paper attempts to make the foundations more robust but the work there is still in its infancy. I guess we will see where it goes. 

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-11T11:31:29.933Z · EA(p) · GW(p)

Thanks for the response - I think we mostly agree, at least to the extent that these questions have answers at all.

Replies from: tobytrem
comment by tobytrem · 2021-08-11T15:22:44.589Z · EA(p) · GW(p)

Definitely, cheers!

comment by Jack R (JackRyan) · 2021-08-08T18:09:54.275Z · EA(p) · GW(p)

I don’t think your point about Toby’s GDP recommendation is inconsistent with David’s claim that Toby/Will seem to imply “Effective Altruism should focus entirely on longtermism” since EA is not in control of all of the world’s GDP. It’s consistent to recommend EA focus entirely on longtermism and that the world spend .1% of GDP on x-risk (or longtermism).

Replies from: Benjamin_Todd, Davidmanheim
comment by Benjamin_Todd · 2021-08-08T19:10:10.027Z · EA(p) · GW(p)

I agree it's not entailed by that, but both Will and Toby were also in the Leaders Forum Survey I linked to. From knowing them, I'm also confident that they wouldn't agree with "EA should focus entirely on longtermism".

comment by Davidmanheim · 2021-08-08T18:57:41.193Z · EA(p) · GW(p)

That's a very good point - and if that is the entire claim, I would strongly endorse it. But, from what I have read, that is not what strong longtermism actually claims, according to proponents.

comment by Darius_M (Darius_Meissner) · 2021-08-09T16:41:27.767Z · EA(p) · GW(p)

I'd like to point to the essay Multiplicative Factors in Games and Cause Prioritization as a relevant resource for the question of how we should apportion the community's resources across (longtermist and neartermist) causes:

TL;DR: If the impacts of two causes add together, it might make sense to heavily prioritize the one with the higher expected value per dollar. If they multiply, on the other hand, it makes sense to distribute effort more evenly across the causes. I think that many causes in the effective altruism sphere interact more multiplicatively than additively, implying that it's important to heavily support multiple causes, not just to focus on the most appealing one.
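The additive-vs-multiplicative distinction in that TL;DR can be made concrete with a toy allocation model. The numbers below are invented for illustration (they are not from the linked essay): with additive impacts, the optimum is a corner solution; with multiplicative impacts, the optimum is an even split.

```python
# Toy allocation model (invented numbers): where to put a fixed budget
# across two causes, A and B.

budget = 100.0
splits = [i / 100 for i in range(1, 100)]  # fraction of budget going to cause A

# Additive impact: cause A is 3x as valuable per dollar as cause B,
# and total impact is the sum of the two.
additive = [3 * s * budget + (1 - s) * budget for s in splits]

# Multiplicative impact: total impact is the product of the effort
# put into each cause.
multiplicative = [(s * budget) * ((1 - s) * budget) for s in splits]

best_additive = splits[additive.index(max(additive))]
best_multiplicative = splits[multiplicative.index(max(multiplicative))]

print(best_additive)        # 0.99: go all-in on the better cause
print(best_multiplicative)  # 0.5: split the budget evenly
```

Under the additive model, every marginal dollar should go to the higher-value cause; under the multiplicative model, neglecting either cause drives the product toward zero, so diversification wins even when one cause looks more appealing.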
 

comment by CarlShulman · 2021-08-17T17:55:17.853Z · EA(p) · GW(p)

FWIW, my own views are more like 'regular longtermism' than 'strong longtermism,' and I would agree with Toby that existential risk should be a global priority, not the global priority. I've focused my career on reducing existential risk, particularly from AI, because it seems like a substantial chance of happening in my lifetime, with enormous stakes and extremely neglected. I probably wouldn't have gotten into it when I did if I didn't think doing so was much more effective than GiveWell top charities at saving current human lives, and outperforming even more on metrics like cost-benefit in $.

Longtermism as such (as one of several moral views commanding weight for me) plays the largest role for things like refuges that would prevent extinction but not catastrophic disaster, or leaving seed vaults and knowledge for apocalypse survivors. And I would say longtermism provides good reason to make at least modest sacrifices for that sort of thing (much more than the ~0 current world effort), but not extreme fanatical ones.

There are definitely some people who are fanatical strong longtermists, but a lot of people who are made out to be such treat it as an important consideration but not one held with certainty or overwhelming dominance over all other moral frames and considerations. In my experience one cause of this is that if you write about implications within a particular worldview, people assume you place 100% weight on it, when the correlation is a lot less than 1.

I see the same thing happening with Nick Bostrom, e.g. his old Astronomical Waste article explicitly explores things from a totalist view where existential risk dominates via long-term effects, but also from a person-affecting view where it is balanced strongly by other considerations like speed of development. In Superintelligence he explicitly prefers not making drastic sacrifices of existing people for tiny proportional (but immense absolute) gains to future generations, while also saying that the future generations are neglected and a big deal in expectation.

 

Replies from: William_MacAskill
comment by William_MacAskill · 2021-08-21T08:42:32.423Z · EA(p) · GW(p)

There are definitely some people who are fanatical strong longtermists, but a lot of people who are made out to be such treat it as an important consideration but not one held with certainty or overwhelming dominance over all other moral frames and considerations. In my experience one cause of this is that if you write about implications within a particular worldview, people assume you place 100% weight on it, when the correlation is a lot less than 1.

 

I agree with this, and the example of Astronomical Waste is particularly notable. (As I understand his views, Bostrom isn't even a consequentialist!). This is also true for me with respect to the CFSL paper, and to an even greater degree for Hilary: she really doesn't know whether she buys strong longtermism; her views are very sensitive to current facts about how much we can reduce extinction risk with a given unit of resources.

The language-game of 'writing a philosophy article' is very different from 'stating your exact views on a topic' (the former is more about making a clear and forceful argument for a particular view, or a particular implication of a view someone might have, and much less about conveying every nuance, piece of uncertainty, or in-practice constraint), and once philosophy articles get read more widely, that can cause confusion. Hilary and I didn't expect our paper to get read so widely - it's really targeted at academic philosophers.

Hilary is on holiday, but I've suggested we make some revisions to the language in the paper so that it's a bit clearer to people what's going on. This would mainly be changing phrases like 'defend strong longtermism' to 'explore the case for strong longtermism', which I think more accurately represents what's actually going on in the paper.

comment by Halstead · 2021-08-08T13:11:24.253Z · EA(p) · GW(p)

I agree that it would be good to have a name for a less contentious form of longtermism similar to the one you propose, which says something like: the longterm deserves a seat at the top table with other commonly accepted near-term priorities.

I suspect one common response might be that due to normative uncertainty, we don't put all of our weight on longtermism but instead hedge across different plausible views. I haven't yet seen a defence of that view that I find compelling, so I think it would be valuable to have a less contentious version that we would be willing to stand behind in public.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-08T14:25:02.016Z · EA(p) · GW(p)

Newberry and Ord's paper on moral parliamentarianism, originally proposed by Bostrom, seems like a reasonable way to arrive there. (Which seems almost ironic, given that they are key proponents of strong longtermism.)

Replies from: Toby_Ord, Benjamin_Todd, jackmalde
comment by Toby_Ord · 2021-08-11T20:59:55.927Z · EA(p) · GW(p)

I don't think I'm a proponent of strong longtermism at all — at least not on the definition given in the earlier draft of Will and Hilary's paper on the topic that got a lot of attention here a while back and which is what most people will associate with the name. I am happy to call myself a longtermist, though that also doesn't have an agreed definition at the moment.

Here is how I put it in The Precipice:

Considerations like these suggest an ethic we might call longtermism, which is especially concerned with the impacts of our actions upon the longterm future. It takes seriously the fact that our own generation is but one page in a much longer story, and that our most important role may be how we shape—or fail to shape—that story. Working to safeguard humanity’s potential is one avenue for such a lasting impact and there may be others too.

My preferred use of the term is akin to being an environmentalist: it doesn't mean that the only thing that matters is the environment, just that it is a core part of what you care about and informs a lot of your thinking.

Replies from: William_MacAskill, Davidmanheim
comment by William_MacAskill · 2021-08-21T08:26:29.033Z · EA(p) · GW(p)

I'm also not defending or promoting strong longtermism in my next book. I defend (non-strong) longtermism, and the definition I use is: "longtermism is the view that positively influencing the longterm future is among the key moral priorities of our time." I agree with Toby on the analogy to environmentalism.

(The definition I use of strong longtermism is that it's the view that positively influencing the longterm future is the moral priority of our time.)

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-21T19:05:45.682Z · EA(p) · GW(p)

Thanks Will - I apologize for mischaracterizing your views, and am very happy to see that I was misunderstanding your actual position. I have edited the post to clarify.

I'm especially happy about the clarification because I think there was at least a perception in the community that you and/or others do, in fact, endorse this position, and therefore that it is the "mainstream EA view," albeit one which almost everyone I have spoken to about the issue in detail seems to disagree with.

comment by Davidmanheim · 2021-08-12T09:26:50.988Z · EA(p) · GW(p)

That's super helpful to see clarified, and I will edit the post to reflect that - thanks!

comment by Benjamin_Todd · 2021-08-08T17:47:59.595Z · EA(p) · GW(p)

It would indeed be ironic - the fact that Toby and Will are major proponents of moral uncertainty seems like more evidence in favour of the view in my top level comment [EA(p) · GW(p)].

Replies from: jackmalde
comment by jackmalde · 2021-08-08T19:21:45.783Z · EA(p) · GW(p)

I don't think it's necessarily clear that incorporating moral uncertainty means you have to support hedging across different plausible views. If one maximises expected choiceworthiness (MEC) for example one can be fanatically driven by a single view that posits an extreme payoff (e.g. strong longtermism!).

Indeed MacAskill and Greaves have argued that strong longtermism seems robust to variations in population axiology and decision theory, whilst Ord has argued that reducing x-risk is robust to normative variations (deontology, virtue ethics, consequentialism). If an action is robust to axiological variations, this can also help it dominate other actions, even under moral uncertainty.

comment by jackmalde · 2021-08-08T19:26:36.568Z · EA(p) · GW(p)

I think Ord's favoured approach to moral uncertainty is maximising expected choiceworthiness (MEC), which he argues for with Will MacAskill.

Reading the abstract of the moral parliamentarianism paper, it isn't clear to me that he is actually a proponent of that approach, just that he has a view on the best specific approach within moral parliamentarianism.

As I say in my comment to Ben, I think an MEC approach to moral uncertainty can lead to being quite fanatical in favour of longtermism.

comment by GidonKadosh · 2021-08-08T09:40:43.340Z · EA(p) · GW(p)

Thank you for this post David. I'd like to add two points that emphasize how important this discussion is, and that its implications are beyond the moral stances of individuals:

1. I believe that when looking at this distinction as a movement, we should also take into account how people are put off by strong longtermism - whether we view regular longtermism as a good entry point for EA ideas, or whether we endorse it as a legitimate 'camp'. I think that the core idea of regular longtermism is very appealing when discussing the next few generations, while strong longtermism does imply disregarding current generations and thinking of "all future generations" (which obviously requires most people to think far beyond their current moral circle).

2. In practice, I think that an EA community with a welcoming space for this mid-point view would place more emphasis on interventions in a mid-point position in the tradeoff between tractability (they're more likely to make a change) and importance (they're not as rewarding as preventing human extinction). We would see more emphasis than we currently have on improving institutions, interventions for improving developing economies, meta-science, and others.

comment by elifland · 2021-08-08T21:05:46.144Z · EA(p) · GW(p)

A third perspective roughly justifies the current position; we should discount the future at the rate current humans think is appropriate, but also separately place significant value on having a positive long term future.

 

I feel that EA shouldn't spend all or nearly all of its resources on the far future, but I'm uncomfortable with incorporating a moral discount rate for future humans as part of "regular longtermism" since it's very intuitive to me that future lives should matter the same amount as present ones.

I prefer objections from the epistemic challenge, which I'm uncertain enough about to feel that various factors (e.g. personal fit, flow-through effects, gaining experience in several domains) mean that it doesn't make sense for EA to go "all-in". An important aspect of personal fit is comfort working on very low probability bets.

I'm curious how common this feeling is, vs. feeling okay with a moral discount rate as part of one's view. There's some relevant discussion under the  comment [EA(p) · GW(p)] linked in the post.

Replies from: evelynciara
comment by evelynciara · 2021-08-08T22:17:47.232Z · EA(p) · GW(p)

Yeah. I have this idea that the EA movement should start with short-term interventions and work our way to interventions that operate over longer and longer timescales, as we get more comfortable understanding their long-term effects.

comment by asolomonr · 2021-08-08T21:53:55.491Z · EA(p) · GW(p)

I wonder if a heavy dose of skepticism about longtermist-oriented interventions wouldn't result in a somewhat similar mix of near-termist and longtermist prioritization in practice. Specifically, someone might reasonably start with a prior that most interventions aimed at affecting the far future (especially those that don't do so by tangibly changing something in the near term, so that there could be strong feedback loops) come out as roughly a wash. This might then put a high burden of evidence on these interventions, so that only a few very well-grounded ones would stand out above near-termist-oriented actions. While in this view supposed flow-through effects of near-termist interventions would also be regarded with strong skepticism, so that their long-term impact might generally be judged to also come out as a wash, you'd at least get the short-term benefit. So one might often favor near-term causes because gathering evidence on them is comparatively easy, but for longtermist interventions that are moderately well-grounded, the standard reasoning favoring them would kick in. I think this is often roughly what happens, and might be another explanation for the observation that even proponents of strong longtermism don't generally appear fanatical [? · GW]. 

This is piggybacking a bit off of Darius_Meissner's earlier comment that distinguishes between the axiological and deontic claims of strong longtermism (to borrow the terminology of Greaves and MacAskill's paper). Many have pointed out that accepting the former doesn't have to lead to the latter, and this is just a particular reasoning for why. But I wonder why there is a need to have a philosophical basis for what seems like a bottom line that could be reached in practice even by neglecting moral uncertainty and just embracing empirical uncertainty, incorporating Bayesian priors into EV thinking (as opposed to naive EV reasoning [EA(p) · GW(p)]).

comment by JeremyR · 2021-08-15T05:04:18.908Z · EA(p) · GW(p)

Should "reduction" in the quote below (my emphasis) read "increase?" 

"This is  hard to justify intuitively - it implies that we should ignore the near-term costs, and (taken to the extreme) could justify almost any atrocity in the pursuit of a miniscule reduction of long-term value."

Replies from: Davidmanheim
comment by Davidmanheim · 2021-08-15T06:47:51.567Z · EA(p) · GW(p)

Yeah, it should read "long-term *risk*" - fixing now, thanks!

comment by Harrison D · 2021-08-09T03:52:35.028Z · EA(p) · GW(p)

Me, reading through the post: “I think I might have a minor comment to add, and for once I’m here the day of posting…”

Also me, seeing that there are already 31 comments: “Oh, well then.”

comment by capybaralet · 2021-08-16T21:15:26.537Z · EA(p) · GW(p)

IMO, the best argument against strong longtermism ATM is moral cluelessness.  
