Noticing the skulls, longtermism edition

post by Davidmanheim · 2021-10-05T07:08:28.304Z · EA · GW · 68 comments

Epistemic Status: Personal view about longtermism and its critics.

Recently, there have been a series of attacks on longtermism. These largely focus on the (indisputable) fact that avoiding X-risks can be tied to racist or eugenic historical precedents. This should be worrying; a largely white, educated, western, and male group [EA · GW] talking about how to fix everything should raise flags. And neglecting to address the roots of futurism is worrying - though I suspect that highlighting them and attempting apologetics would have been an even larger red flag to many critics.

At the same time, attacks on new ideas like longtermism are inevitable. New ideas, whether good or bad, are usually controversial. Moreover, any approaches or solutions that are proposed will have drawbacks, and when they are compared to the typical alternative (of ignoring the problem) they will inevitably be worse in some ways. Nonetheless, some portions of the attacks have merit.

In 2017, Scott Alexander wrote a post defending the ideas of the Lesswrong community, Yes, We Have Noticed the Skulls. He says, in part, "the rationalist movement hasn’t missed the concerns that everybody who thinks of the idea of a 'rationalist movement' for five seconds has come up with. If you have this sort of concern, and you want to accuse us of it, please do a quick Google search to make sure that everybody hasn’t been condemning it and promising not to do it since the beginning."

Similarly, there is a real concern about the dangers of various longtermist approaches, but it is one which at least the majority of those who engage with longtermist ideas understand. These attacks look at some of the roots of longtermism, but ignore the actual practice and motives, the repeated assertions that we are still uncertain, and the clear evidence that we are and will continue to be interested in engaging with those who disagree.

As the Effective Altruism forum should make abundantly clear, the motivations for the part of the community which embraces longtermism still include Peter Singer's embrace of practical ethics and effective altruist ideas like the Giving Pledge, which are cornerstones of the community's behavior. Far from carrying on the racist roots of many past utopians, we are trying to address them. "What greater racism is there than the horrifically uneven distribution of resources between people all because of an accident of their birth?," as Sanjay noted [EA · GW]. Still, defending existential risk mitigation and longtermism by noting its proximity to, and roots in, global health and other effective altruist causes is obviously less than a full response.

And in both areas, despite the best of intentions, there are risks that we cause harm, we increase disparities in health and happiness, we promote ideas which are flawed and dangerous, or we otherwise fail to live up to our ideals. Yes, we see the skulls. And yes, some of the proposals that have been put forward have glaring drawbacks which need to be discussed and addressed. I cannot speak for others, though if there is one thing longtermism cannot be accused of, it's insufficient attention to risk.

So I remain wary of the risks - not just the farcical claim that transhumanism is the same as eugenics, or the more reasonable one that some proposed paths towards stability and safety have the potential to worsen inequalities rather than address them, but also immediate issues like gender and racial imbalance within the movement, and the problem of seeing effective altruism as a white man's burden. The community has a history of engaging with critics, and we should continue to take their concerns seriously.

But all the risks of failure aren't a reason to abandon the project of protecting and improving the future - they are a reason to make sure we continue discussing and planning. I hope that those who disagree with us are willing to join in productive conversation about how we can ensure our future avoids the pitfalls they see. If we do so, there is a real chance that our path forward will not just be paved with good intentions, but lead us towards a better and safer future for everyone.

68 comments


comment by John G. Halstead (Halstead) · 2021-10-05T08:56:23.077Z · EA(p) · GW(p)

I don't find the racism critique of longtermism compelling. Human extinction would be bad for lots of currently existing non-white people. Human extinction would also be bad for lots of possible future non-white people. If future people count equally, then not protecting them would be a great loss for future non-white people. So, working to reduce extinction risks is very good for non-white people.

Replies from: Sean_o_h, Davidmanheim
comment by Sean_o_h · 2021-10-05T09:19:55.169Z · EA(p) · GW(p)

I agree the racism critique is overstated, but I think there's a more nuanced argument for a need for greater representation/inclusion for xrisk reduction to be very good for everyone.

Quick toy examples (hypothetical):
- If we avoid extinction by very rich, nearly all white people building enough sustainable bunkers, the human species continues/rebuilds, but this is not good for non-white people.
- If we do enough to avoid the xrisk scenarios of climate change (say, getting stuck at the poles with minimal access to resources needed to progress civilisation), but not enough to avoid massively disadvantaging most of the global south, we badly exacerbate inequality (maybe better than extinction, but not what we might consider a good outcome).

And so forth. So the more nuanced argument might be we (a) need to avoid extinction, but (b) want to do so in such a way that we don't exacerbate inequality and other harms. We stand a better chance of doing the latter by including a wider array of stakeholders than are currently in the conversation.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-10-05T09:39:34.153Z · EA(p) · GW(p)

It seems odd to me to criticise a movement as racist without at least acknowledging that the thing we are working on seems more beneficial for non-white people than the things many other philanthropists work on. The examples you give are hypothetical, so they aren't a criticism of what longtermists do in the real world. Most longtermists are focused on AI, bio and to a lesser extent climate risk. I fail to see how any of that work has the disparate demographic impact described in the hypotheticals. 

Replies from: Sean_o_h
comment by Sean_o_h · 2021-10-05T10:37:43.041Z · EA(p) · GW(p)

Thanks Halstead. I'll try to respond later, but I'd quickly like to be clear re: my own position that I don't perceive longtermism as racist, and/or am not claiming people within it are racist (I consider this a serious claim not to be made lightly).

Replies from: Linch
comment by Linch · 2021-10-05T23:17:22.160Z · EA(p) · GW(p)

and/or am not claiming people within it are racist (I consider this a serious claim not to be made lightly).

Do you mean to say that 

P1: People in X are racist

vs

P2: People in X are not racist

are serious claims that are not to be made lightly? 

(Non- sequitur below, may not be interesting)

For what it's worth, my best guess is that having the burden of proof on P1 is the correct decision procedure in the society we live in, as these accusations have a lot of associated baggage and we don't currently have a socially acceptable way to say naively reasonable things like "Alice is very likely systematically biased against X group to Y degree in Z sphere of life, so I trust her judgements less about X as applied to Z, but all things considered Alice is still a fine person to work or socially interact with." 

But all else equal, a society where the burden of proof is on P2 would be slightly better, as it is a more accurate representation of affairs (see eg for an illustration of what I mean).

In particular, I think it is an accurate claim that most humans and most human institutions in most times and places are at least somewhat racist* (though I think the demographic compositions that people point to as "problematic" in EA, like higher education levels, should on average probably point towards less racism rather than more).

Right now social consensus appears to be that we "call out" and socially exile people above a certain (unspecified) degree of racism, and people below that bar are considered "not racist", despite clear statistical evidence to the contrary for anybody who bothers to look.

Unfortunately, my best guess is that this topic is too politically charged for EAs to make much headway with, it overall isn't especially important, and also trying to do so may draw the attention of hostile actors who may use our missteps here against us. 

So I think my all-things-considered position is that we should basically go with the social consensus of pretending that racism below a certain degree doesn't exist, even though the situation is moderately unfortunate for me personally.

* there are different definitions of racism. The operative definition I use is "do I expect to be treated in statistically distinguishable ways from a demographic twin who happens to be Caucasian (or black, or Indian, etc, depending on the topic of conversation)?" 

Replies from: Sean_o_h, Charles He
comment by Sean_o_h · 2021-10-06T07:41:04.785Z · EA(p) · GW(p)

Thanks Linch. I'd had 
P1: People in X are racist

in mind in terms of "serious claim, not to be made lightly", but I acknowledge your well-made points re: burden of proof on the latter.

I also worry about distribution of claims in terms of signal v noise. I think there's a lot of racism in modern society, much of it glaring and harmful, but difficult to address (or sometimes out of the overton window to even speak about). I don't think matters are helped by critiques that go to lengths to read racism into innocuous texts, as the author of one of the critiques above has done in my view (in other materials, and on social media).

Replies from: Linch
comment by Linch · 2021-10-06T08:40:23.614Z · EA(p) · GW(p)

I agree that reading racism or white supremacy into innocuous texts is harmful, and for the specific instances I'm aware of, it both involved selective quote mining, and also the mined quote wasn't very damning even out of context.

comment by Charles He · 2021-10-06T01:15:40.982Z · EA(p) · GW(p)

even though the situation is moderately unfortunate for me personally.

I think a writeup about this would be very interesting, even if short or at a much lower quality/epistemic certainty than many of your other comments.

Unfortunately, my best guess is that this topic is too politically charged for EAs to make much headway with, it overall isn't especially important, and also trying to do so may draw the attention of hostile actors who may use our missteps here against us. 

I agree that the EV of most meta race discussions seems negative, even though there is substantial perspective that might be useful and seems unsaid.

For example, steelmanning the Scott Alexander event on both sides would be a useful exercise. This includes steelmanning the NYT writer's assertion of a sort of SV cabal - a perspective that makes their behavior more virtuous, and that doesn't seem to be discussed.

This steelman for the NYT, against Scott Alexander, would say that the doxxing issue is just a layer/proxy for optical issues which Scott Alexander arguably should bear, which in turn is a layer/proxy for Silicon Valley power and media. The latter two layers are far more interesting and important than doxxing, despite being unexamined by the rationalist community.

This steelman is probably represented by the views in this New Yorker article (that avoids most of the loaded racism issues).

It's fascinating to watch this conflict between two intelligentsia on opposite coasts. Both seem truth-seeking and worthy of respect, but are in a contest whose nature seems unacknowledged by the rationalist side.

While limited, the relevance to this post and similar discussions is that the New Yorker's perspective, which looks down at the self-importance and arcane re-invention of the rationalist community, is probably the mainstream view. If these perspectives are true, EA probably has to deal with this too when advancing longtermism.

There are probably ways of dealing with this issue (that might be better than chalking up issues to presentation or "weirdness"), but this seems very hard, I haven't thought much about it, and I feel like I will write something dumb. Also, I think there's low demand for this comment, which is already very long.

This is somewhat relevant to the top level post and the articles it refers to (that seem lower in quality than the New Yorker article).

Replies from: Linch, Linch
comment by Linch · 2021-10-06T03:01:31.463Z · EA(p) · GW(p)

What’s fascinating about this is the conflict between two intelligentsia on opposite coasts. Both seem truth seeking and worthy of respect, but are in a contest whose nature and stakes seem unacknowledged.

For what it's worth, I got the opposite impression. I think neither side is particularly truth-seeking, and much more out to "win" rather than be deeply concerned with what is true. My own experience during the whole SSC/NYT affair was to get very indignant and follow @balajis* (who I've since muted), a tech personality with a crusade against tech journalism, and reading him only helped amplify my sense of zealotry against conventional media. On reflection this was very far from my ideals or behaviors I'd like to have going forwards, and I consider my behavior then moderately large evidence against my own truth-seeking. 

I think the SSC/NYT event was a fitting culmination of the Toxoplasma of Rage that SSC itself warned us about, and some members of our movement, myself included, were nontrivially corrupted by external bad faith actors (on both sides).

* To be clear this is not a condemnation of him as a person or for his work or anything, just his Twitter personality.

Replies from: Charles He
comment by Charles He · 2021-10-06T07:17:43.799Z · EA(p) · GW(p)

It seems like you are describing a difficult personal experience. I think the rationalist community and Scott Alexander are altruistic and virtuous, so going through the journey in the way I think you are describing would make anyone indignant.

I did not have the same experience with this incident, but I have had beliefs and made many poor decisions I have regretted, in very different domains/places, almost certainly with much worse epistemics than you.

comment by Linch · 2021-10-06T02:54:57.849Z · EA(p) · GW(p)

even though the situation is moderately unfortunate for me personally.

I think a writeup about this would be very interesting, even if short or at a much lower quality/epistemic certainty than many of your other comments.

I don't think there's anything particularly interesting here. The short compression of my views is that different people have competing access needs, and I don't feel like I have a safe space outside of a very small subset of my friends to say something pretty simple and naively reasonable like 

I view that my/your interaction with this system/person is parsimoniously explained by either racism or a conjunction of factors that include racism. I would like your help in verifying whether the evidence checks out, as I tend to get emotional about this kind of thing. I would also like to talk about mitigation strategies like how I can minimize this type of interaction in the future. No, I am not claiming that this system deserves to burn down/this person ought to be cancelled. Yes, I think the system/person is probably fine in the grand scheme of things.

without basically getting embroiled in a proxy culture war. I feel like many people (even ones I naively would have thought to be fairly reasonable) would rush to defend the system/person if they like the system/person against any charges of racism that doesn't have enough evidence to be convicted in court. Or worse, immediately rush to "my defense" and get very indignant on my behalf without being very objective about the whole thing, even though given that I was the one who was emotional at the time, them being more emotional is less helpful (I say "worse" on epistemic grounds even though in the heat of the moment I often appreciate it). 

For the sake of completion, I will note that AFAICT, none of these (coded racist) interactions have happened professionally in EA. There's an important caveat that the statistical nature of discrimination makes it hard for me to be sure of course, but my experience with other systems is that it is often not all that subtle.

Replies from: Charles He
comment by Charles He · 2021-10-06T07:13:27.235Z · EA(p) · GW(p)

Thank you for writing this. There is a lot of personal insight and color to this answer, and I think this informed me and other readers. 

I feel like it is appropriate to respond by sharing some personal experience, but I don't really know what to immediately say.  This is not because of political correctness/self-censoring but because there is a lot of personal depth involved and I’m worried I will not give an insightful and fully True answer (and I think there is low demand).

comment by Davidmanheim · 2021-10-05T09:46:44.362Z · EA(p) · GW(p)

First, I agree that racism isn't the most worrying criticism of longtermism - though it is the one that has been highlighted recently. But it is a valid criticism of at least some longtermist ideas, and I think we should take this seriously. Sean's argument is one sketch of a real problem, though I think there is a broader point about racism in existential risk reduction, which I make below. But there is also more to longtermism than preventing extinction risks, which is what you defended. As the LARB article notes, transhumanism borders on some very worrying ideas, and there is non-trivial overlap with the ideas of communities which emphatically embrace racism. (And for that reason the transhumanist community has worked hard to distance itself from those ideas.)

And even within X-risk reduction, it's not the case that attempts to reduce existential risks are obviously on their own a valid excuse for behavior that disadvantages others. For example, a certainty of faster western growth that disadvantages the world's poor, in exchange for a small reduction in the risk of human extinction a century from now, is a tradeoff that disadvantages others - albeit probably one I would make, were it up to me. But essential to the criticism is that I shouldn't decide for them. And if utilitarian views about saving the future are contrary to the views of most of humanity, longtermists should be very wary of unilateralism, or at least think very, very carefully before deciding to ignore others' preferences to "help" them.

Replies from: Halstead, IanDavidMoss
comment by John G. Halstead (Halstead) · 2021-10-05T09:55:16.859Z · EA(p) · GW(p)

It seems strange to criticise longtermists on the basis that hypothetical actions that they might take (but haven't taken) disadvantage certain demographic groups. If I were going to show that they were racist (a very serious and reputation-destroying charge), I would show that some of the things that they have actually done were bad for certain demographic groups. I just can't think of any example of this.

Replies from: Davidmanheim, Guy Raveh
comment by Davidmanheim · 2021-10-05T10:05:07.591Z · EA(p) · GW(p)

It also seems strange to defend longtermists as only being harmful in theory, since the vast majority of longtermism is theory, and relatively few actions have been taken. That is, almost all longtermist ideas so far have implications which are currently only hypothetical.

But there is at least one concrete thing that has happened - many people in effective altruism who previously worked on and donated to near-term causes in global health and third world poverty have shifted focus away from those issues. And I don't disagree with that choice, but if that isn't an impact of longtermism which counterfactually harms the global poor, what do you think would qualify?

Replies from: tessa, Halstead, Halstead
comment by tessa · 2021-10-05T12:58:19.764Z · EA(p) · GW(p)

I just want to highlight that your second point ― resource allocation within the movement away from the global poor and towards longtermism ― seems to be a big part of what is concretely criticized in the Current Affairs piece. Quoting:

This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today. As [Hilary Greaves and Will MacAskill] write, “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focusing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.”

...

Since our resources for reducing existential risk are finite, Bostrom argues that we must not “fritter [them] away” on what he describes as “feel-good projects of suboptimal efficacy.” Such projects would include, on this account, not just saving people in the Global South—those most vulnerable, especially women—from the calamities of climate change, but all other non-existential philanthropic causes, too.

This doesn't seem to me like a purely hypothetical harm. If you value existing people much more than potential future people (not an uncommon moral intuition) then this is concretely bad, especially since the EA community is able to move around a lot of philanthropic capital.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-10-05T14:05:34.665Z · EA(p) · GW(p)

Yes, but the counter-argument is that longtermists don't accept the antecedent - they don't value current people more than future people. And if you don't accept the antecedent, then it could equally be said that near-termist people are inflicting harm on non-white people. So, the argument doesn't take us anywhere.

Replies from: tessa
comment by tessa · 2021-10-05T15:23:56.105Z · EA(p) · GW(p)

Fair enough; it's unsurprising that a major critique of longtermism is "actually, present people matter more than future people". To me, a more productive framing of this criticism than racist/non-racist is about longtermist indifference to redistribution. I've seen various recent critiques quoting the following paragraph of Nick Beckstead's thesis:

Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.

The standard neartermist response is "all other things are definitely not equal, it's much easier to save a life in a poor country than a rich country", while the standard longtermist response is (I think) "this is the wrong comparison to pay attention to, we should focus on protecting humanity's potential". Given this difference, I disagree a little with this bit of the OP:

the motivations for the part of the community which embraces longtermism still includes Peter Singer's embrace of practical ethics and effective altruist ideas like the Giving Pledge

in that some of the foundational values embedded in Peter Singer's writings (e.g. The Life You Can Save) strike me as redistributive commitments. This is very much reflected in the quote from Sanjay [EA · GW] included in the OP. As far as I can tell (reading the EA Forum, The Precipice, and various Bostrom papers) longtermist philosophy typically does not emphasize redistribution or fairness as core values, but instead focuses on the overwhelming value of the far future.

(That said, I have seen some fairness-based arguments that future people are a constituency whose interests are underweighted politically, for example in response to the proposed UN Special Envoy for Future Generations [EA · GW].)

Replies from: Linch, Davidmanheim
comment by Linch · 2021-12-05T21:39:46.227Z · EA(p) · GW(p)

in that some of the foundational values embedded in Peter Singer's writings (e.g. The Life You Can Save) strike me as redistributive commitments.

One thing to note is that redistributive commitments flow from impartial utilitarianism as well as the weaker normative commitments that Singer espouses as a largely empirical claim about a) human psychology and b) the world we live in.

Singer's strong principle: “If it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it.” 

Singer's weak principle: “If it is in our power to prevent something very bad from happening, without sacrificing anything morally significant, we ought, morally, to do it.”

I understood the outer framing of the drowning child etc as making not only normative claims about what's right to do in the abstract but also empirical claims about the best way to apply those normative principles in the world we live in. I think the idea that existential risk is very bad and that we are morally compelled to stop it if we aren't sacrificing things of comparable moral significance[1]  is fully consistent with Singerian notions.

[1] or that both existential risk and present suffering are morally significant, so choosing one over the other is supererogatory under Singer's principles, but not necessarily under classical utilitarianism.

comment by Davidmanheim · 2021-12-05T13:03:14.757Z · EA(p) · GW(p)

I would note that Toby and others in the longtermist camp do, in fact, very clearly embrace "the foundational values embedded in Peter Singer's writings." I agree that some people could embrace longtermism on bases other than impartial utilitarianism or similar arguments (which support both redistribution and some importance of the long term), but I don't hear them involved in the discussions, so I don't think it works as a criticism when the actual people do also advocate for near-term redistributive causes.

Replies from: tessa
comment by tessa · 2021-12-05T18:18:22.571Z · EA(p) · GW(p)

I don't think I quite understand this reply. Are you saying that (check all that apply):

  1. In your experience, the people involved in discussions do embrace redistribution and fairness as core values, they are just placing more value on future people.
  2. Actual longtermists also advocate for near-term redistributive causes, so criticism about resource allocation within the movement away from the global poor and towards longtermism doesn't make sense (i.e. it's not zero-sum).
  3. Redistributive commitments are only one part of the "foundational values", and Toby and others in the longtermist camp are still motivated by the same underlying impartial utilitarianism, so pointing at less emphasis on redistribution is an unfair nitpick.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-12-07T11:15:10.944Z · EA(p) · GW(p)

I think all of these are true, but I was pointing to #2 specifically.

comment by John G. Halstead (Halstead) · 2021-10-05T10:34:57.072Z · EA(p) · GW(p)

The demographic criticism also applies to EAs who are working on global development: people in that area also skew white and highly educated.

People who work on farm animal welfare are not focused on the global poor either, but this seems to me an extremely flimsy basis on which to call them racist.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-10-05T10:41:57.806Z · EA(p) · GW(p)

Note: I did not call anyone racist, other than to note that there are groups which embrace views that themselves embrace that label. But on review, you keep saying that this is about calling someone racist, whereas I'm talking about unequal and systemic impacts of choices - and I think this confusion is seriously hampering our conversation.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-10-05T13:58:24.458Z · EA(p) · GW(p)

Perhaps I have misunderstood, but I interpreted your post as saying we should take the two critiques of longtermism seriously. I think the quality of the critiques is extremely poor, and am trying to explain why. 

Replies from: Davidmanheim
comment by Davidmanheim · 2021-10-06T09:47:14.373Z · EA(p) · GW(p)

I might have been unclear. As I said initially, I claim it's good to publicly address concerns about "the (indisputable) fact that avoiding X-risks can be tied to racist or eugenic historical precedents", and this is what the LARB piece actually discussed. And I think that slightly more investigation into the issue should have convinced the author that any concerns about continued embrace of eugenic ideas, or ignorance of the issues, were misplaced. I initially pointed out that specific claims about longtermism being similar to eugenics are "farcical." More generally, I tried to point out in this post that many of the attacks are unserious or uninformed - as Scott pointed out in his essay, which this one quoted and applied to this situation, the criticisms aren't new.

More serious attempts at dialog, like some of the criticisms in the LARB piece, are not bad-faith or unreasonable claims, even if they fail to be original. And I agree that "we cannot claim to take existential risk seriously — and meaningfully confront the grave threats to the future of human and nonhuman life on this planet — if we do not also confront the fact that our ideas about human extinction, including how human extinction might be prevented, have a dark history." But I also think it's obvious that others working on longtermism agree, so the criticism seems to be at best a weak-man argument. Unfortunately, I think we'll need to wait another year or so for Will's new book, which I understand has a far more complete discussion of this, much of which was written before either of these pieces were published.

Replies from: BrianTan
comment by BrianTan · 2021-10-06T10:40:25.162Z · EA(p) · GW(p)

Sorry to jump in the conversation, but Toby Ord has another book? Maybe you're talking about Will MacAskill's upcoming book on longtermism?

Replies from: Davidmanheim
comment by Davidmanheim · 2021-10-07T08:00:13.197Z · EA(p) · GW(p)

Right - fixed. Whoops!

comment by John G. Halstead (Halstead) · 2021-10-05T10:12:53.631Z · EA(p) · GW(p)

On the first para, that doesn't seem to me to be true of work on AI safety or biorisk, as I understand it. 

On the second para, the first thing to say is that longtermists shouldn't be the target of particular criticism on this score - almost no-one is wholly focused on improving the welfare of the global poor. If this decision by longtermists is racist then so is almost everyone else in the world. 

Secondly, no I don't think it counterfactually harms the global poor. That only works if you take a person-affecting view of people's interests. If you count future people, then the shift is counterfactually very beneficial for the global poor and for both white and non-white people. 

Replies from: MichaelStJules, Davidmanheim
comment by MichaelStJules · 2021-10-06T01:55:44.997Z · EA(p) · GW(p)

I don't think it's necessarily very good for the global poor as a changing group defined by their poverty, depending on how quickly global poverty declines. There's also a big drop in the strength of evidence in this shift, so it depends on how skeptical you are.

Plus, person-affecting views (including asymmetric ones) or at least somewhat asymmetric views (e.g. prioritarianism) are not uncommon, and I would guess especially among those concerned with the global poor and inequality. Part of the complaint made by some is about ethical views that say extinction would be an astronomical loss and deserves overwhelming priority as a result, over all targeted anti-poverty work. This is a major part of the disagreement, not something to be quickly dismissed.

comment by Davidmanheim · 2021-10-05T10:37:47.418Z · EA(p) · GW(p)

I disagree about at least some biorisk work, since the allocation of scarce resources in public health has distributive effects, and some work on pandemic preparedness has reduced the focus on near-term vaccination campaigns. I suspect the same is true, to a lesser extent, in pushing people who might otherwise work on near-term ML bias to work on longer-term concerns. But as this relates to your second point, and on the point itself, I agree completely, and don't think it's reasonable to say it's blameworthy or morally unacceptable - though, as I argued, I think we should worry about the impacts.
 
But the last point confuses me. Even setting aside person-affecting views, shifting efforts to help John can (by omission, at the very least) injure Sam. "The global poor" isn't a uniform pool, and helping those who are part of "the global poor" in a century by, say, taxing someone now is a counterfactual harm to the person now. If you aggregate the way you prefer, this problem goes away, but there are certainly ethical views, even within utilitarianism, on which this isn't acceptable - for example, if the future benefit is discounted so heavily that it's outweighed by the present harm.

Replies from: Halstead, Halstead
comment by John G. Halstead (Halstead) · 2021-10-05T14:03:55.032Z · EA(p) · GW(p)

On your first para, I was responding to this claim: "It also seems strange to defend longtermists as only being harmful in theory, since the vast majority of longtermism is theory, and relatively few actions have been taken. That is, almost all longtermist ideas so far have implications which are currently only hypothetical." I said that most work on bio and AI was not just theory but was applied. I don't think the things you say in the first para present any evidence against that claim, but rather they seem to grant my initial point. 

Replies from: Davidmanheim
comment by Davidmanheim · 2021-10-05T16:29:20.390Z · EA(p) · GW(p)

I agree that there are some things in Bio and AI that are applied - though the vast majority of the work in both areas is still fairly far from application. But my point which granted your initial point was responding to "I don't think it counterfactually harms the global poor."

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-10-05T19:46:13.553Z · EA(p) · GW(p)

This is question-begging: it only counterfactually harms the poor on a person-affecting view of ethics, which longtermists reject.

Replies from: Chi, Davidmanheim
comment by Chi · 2021-10-05T21:23:15.757Z · EA(p) · GW(p)

person-affecting view of ethics, which longtermists reject

I'm a longtermist and I don't reject (asymmetric) person(-moment-)affecting views, at least not those that think necessary ≠ only present people. I would be very hard-pressed to give a clean formalization of necessary people though. I think it's bad if effective altruists think longtermism can only be justified with astronomical waste-style arguments and not at all if someone has person-affecting intuitions. (Staying in a broadly utilitarian framework. There are, of course, also obligation-to-ancestor-type justifications for longtermism or similar.) The person-affecting part of me just pushes me in the direction of caring more about trajectory change than extinction risk.

Since I could only ever give very handwavey defenses of person-affecting views, and even handwaveier explanations of my overall moral views: here's a paper by someone who, AFAICT, is at least sympathetic to longtermism and discusses asymmetric person-affecting views. (I have to admit I never got around to reading the paper.) (Writing a paper on an asymmetric person-affecting view obviously also doesn't necessarily mean that the author doesn't actually reject person-affecting views.)

comment by Davidmanheim · 2021-10-07T08:08:23.156Z · EA(p) · GW(p)

Is that true?

Many current individuals will be worse off when resources go toward saving future lives rather than toward near-term utilitarian goals like poverty reduction. And if, as most of us expect, the world's wealth continues to grow, effectively all future people who are helped by existential risk reduction are not what we'd now consider poor. You can defend this via the utilitarian calculus across all people, but that doesn't change the distributive impact between groups.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-10-07T09:31:48.970Z · EA(p) · GW(p)

Equally, many future people will be worse off than they would have been if we don't reduce extinction risks. The claim is about the net total impact on non-white people.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-10-11T17:49:38.066Z · EA(p) · GW(p)

Your definition of problematic injustice seems far too narrow, and I explicitly didn't refer to race in the previous post. The example I gave was that the most disadvantaged people are in the present, and are further injured - not that non-white people (which under current definitions will describe approximately all of humanity in another half dozen generations) will be worse off.

comment by John G. Halstead (Halstead) · 2021-10-05T10:43:47.976Z · EA(p) · GW(p)

On the second point, yes, I agree that there are some popular views on which we would discount or ignore future people. I just don't think that they are plausible. If someone held a view which said that they only count the interests of white future people, I think it would be quite clear that this was bad for the interests of non-white people in a very important way. Therefore, if I ignore all future people, then I ignore all future non-white people, which is bad for their interests in a very important way.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-10-05T10:54:21.102Z · EA(p) · GW(p)

As I said above in a different comment thread, it seems clear we're talking past one another.

Yes, being racist would be racist, and no, that's not the criticism. You said that "there are some popular views on which we would discount or ignore future people. I just don't think that they are plausible." And I think part of the issue is exactly this dismissiveness. As a close analogy, imagine someone said "there are some popular views where AI could be a risk to humans. I just don't think that these are plausible," and went on to spend money building ASI instead of engaging with the potential that they are wrong, or taking any action to investigate or hedge that possibility.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-10-05T13:55:03.836Z · EA(p) · GW(p)

I don't really understand your response. Most of the people who argue for a longtermist ethical standpoint have spent many, many years thinking about the possibility that they are wrong and arguing against person-affecting views, during their philosophy degrees. I could talk to you for several weeks about the merits and demerits of such views and the published literature on them.

"Yes, being racist would be racist, and no, that's not the criticism." I don't really understand your point here. 

Replies from: Davidmanheim
comment by Davidmanheim · 2021-10-05T16:38:28.900Z · EA(p) · GW(p)

My point is that many people who disagree with the longtermist ethical viewpoint have also spent years thinking about the issues, and dismissing the views of the majority of philosophers - and of the vast, vast majority of people - as not plausible is itself one of the problems I tried to highlight in the original post when I said that a small group talking about how to fix everything should raise flags.

And my point about racism is that criticism of choices and priorities which have the potential to perpetuate existing structural disadvantages and inequity is not the same as calling someone racist.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-10-05T19:45:06.852Z · EA(p) · GW(p)

The standard in the first para appears to be something like 'you can never say that something is implausible if some philosophers believe it'. That seems like a pretty weird standard. Another way of saying it is implausible is just saying "I think it is probably false".

Near-termists are also a small group talking about how to fix everything. 

This is perhaps too meta, but on the second para: if that is what you meant, I don't understand how it is a response to the comment you were replying to.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-10-07T08:13:07.340Z · EA(p) · GW(p)

I'm pointing out that you're privileging your views over those of others - not "some philosophers," but "most people."

And unless you're assuming a fairly strong version of moral realism, this isn't a factual question, it's a values question - so it's strange to me to think that we should get to assume we're correct despite being a small minority, without at least a far stronger argument that most people would agree with longtermism if properly presented - and I think Stefan Schubert's recent work implies that is not at all clear.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-10-07T09:27:23.724Z · EA(p) · GW(p)

Any time you take a stance on anything, you are privileging your view over some other people's. Your argument also applies to people working on animal welfare and on global poverty. In surveys, most people don't even seem to care about saving more lives rather than fewer.

If we are going to go down the route of saying that what EAs do should be decided by the majority opinion of the current global population, then that would be the end of EA of any kind. As I understand it, your claim is that the total view is false (or we don't have reason to act on it) because the vast majority of the world population do not believe in the total view. Is that right? 

It is not difficult to come up with examples. In 1500, most people would have believed that violence against women and slavery were permissible. Would that have made you stop campaigning to bring an end to them? These are also values, after all.

comment by Guy Raveh · 2021-12-02T09:46:37.443Z · EA(p) · GW(p)

It doesn't make sense to think that you can flush racism etc. out of a system run by affluent white westerners through self-reflection. Maybe that could highlight some points that need to be addressed, but it is sure to miss others that different perspectives would spot.

So we absolutely cannot claim "our multimillion dollar project isn't racist because we can't find anything racist about it." Such a project, if it does not include a much more diverse population in its decision making, is bound to end up harming those unrepresented.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-02T11:39:21.579Z · EA(p) · GW(p)

That also applies to people working on global development, and to pretty much all philanthropy. So there is nothing special about longtermism on this score.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-12-03T11:51:05.454Z · EA(p) · GW(p)

Sure. So each one should be interested in outside feedback about whether it seems racist, or fails on other counts - and take it seriously when outsiders say it is a concern.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-04T11:16:35.677Z · EA(p) · GW(p)

But you have presented this post as something that is specific to longtermism. Do you not think it would have been more informative/less misleading to say that this also applies to all social movements, including those working to improve health in Africa?

Replies from: Davidmanheim
comment by Davidmanheim · 2021-12-05T12:58:24.194Z · EA(p) · GW(p)

No, because no-one is really providing this specific bit of outside feedback to most of those groups. As the post says, there have been recent attacks *on longtermism*.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-05T16:27:23.010Z · EA(p) · GW(p)

There are also attacks on all global development charities for being colonialist.

Also, you are giving more credit to the critiques than they deserve. One of them is obviously written by someone who clearly doesn't believe what he is saying but is instead redressing perceived personal slights by some of the people and organisations concerned, in particular with respect to turning him down for jobs, being unwilling to publicise his book,  criticising his work in public etc. For someone who thinks that these longtermist orgs are genocidal, he has applied for jobs at an awful lot of them!

Replies from: Davidmanheim
comment by Davidmanheim · 2021-12-07T11:26:26.498Z · EA(p) · GW(p)

I understand that Phil Torres was banned from the forum, I think for good reason. But I don't think that your reply here is acceptable given the forum's norms for polite disagreement - especially because it's both mean-spirited and in parts simply incorrect.

That said, I am presenting the fact that his claims are being taken seriously by others, as the second article shows, and yes, steelmanning his view to an extent that I think is reasonable - especially since certain of his critiques are both anticipated by, and have been further extensively discussed by people in EA since. Regardless of whether Phil believes them - and it's clear that he does - the critiques aren't a fringe position outside of EA, and beating up the strawman while ignoring the many, many people who agree seems at best disingenuous.

Finally, global development has spent a huge amount of time and effort addressing the reasonable criticisms of colonialism, especially given the incredible damage that such movements have caused in many instances. (Even though it's been positive on net, that doesn't excuse avoidable damage - which seems remarkably similar to what I'm concerned about here.) In any case, saying that global development is also attacked, as if that means longtermists couldn't be similarly guilty, seems like a very, very strange defense.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-07T13:29:08.446Z · EA(p) · GW(p)

I don't see how it is either mean-spirited or incorrect. Which part is incorrect? 

The context is crucial here because it illustrates that he is not arguing in good faith, which is quite clear to anyone who knows the background to this. Only two years ago, he argued at length that engineered pandemics and AGI are very important challenges. These are the two central claims of longtermism and the main problems that they fund. So one would think he would be very pro-longtermism. He has a website called 'xriskology' and several books on existential risk. It is only since longtermists got on the wrong side of him that he decided that longtermism is racist. He has also randomly made unfounded rape and paedophilia accusations against certain people over the last few years. He is not a credible critic, and to ignore that simple fact is very strange.

On your last paragraph

  • you said: "no-one is really providing this specific bit of outside feedback [that they risk racism] to most of those groups". 
  • I said this wasn't true because people eg say that global health is colonialist all the time.
  • You then characterise me as "saying that global development is also attacked, as if that means longtermists couldn't be similarly guilty". 

Obviously, this was not what I was doing. I was arguing against the initial thing that you said (which you have now conceded). This is now the second time this has happened in this conversation, so I think we should probably draw this to a close. 

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-07T15:35:06.855Z · EA(p) · GW(p)

To clarify, when I said he had applied for jobs at the organisations he criticises, I didn't mean to be criticising him for that (I have also applied for jobs at many of those orgs and been rejected). My point was that it is a bit improbable that he has had such a genuine intellectual volte-face, given this fact.

comment by IanDavidMoss · 2021-10-06T00:30:29.557Z · EA(p) · GW(p)

But essential to the criticism is that I shouldn't decide for them.

It seems like this is a central point in David's comment, but I don't see it addressed in any of what follows. What exactly makes it morally okay for us to be the deciders?

It's worth noting that in both US philanthropy and the international development field, there is currently a big push toward incorporating affected stakeholders and people with firsthand experience with the issue at hand directly into decision-making for exactly this reason. (See participatory grantmaking, the Equitable Evaluation Initiative, and the process that fed into the Sustainable Development Goals, e.g.) I recognize that longtermism is premised in part on representing the interests of moral patients who can't represent themselves. But the question remains: what qualifies us to decide on their behalf? I think the resistance to longtermism in many quarters has much more to do with a suspicion that the answer to that question is "not much" than any explicit valuation of present people over future people.

comment by Linch · 2021-10-05T22:25:30.166Z · EA(p) · GW(p)

In terms of:

a largely white, educated, western, and male group [EA · GW] talking about how to fix everything should raise flags

I'm curious why this is so. I feel like I get the intuitive pull - a suspicious demographic composition is, all else equal, evidence of something wrong - but I have trouble formalizing that intuition into anything concrete. I guess I just feel like the Bayes factor for such warning signs shouldn't be very high. Obviously intersectionality/interaction effects are real, but if we vary obvious parameters of a social movement's origin, I don't feel like the movement would counterfactually be noticeably less concerning. Consider the following phrases:

a largely Chinese, educated, Eastern and male group talking about how to fix everything should not raise flags

a largely white, uneducated, western and male group talking about how to fix everything should not raise flags

a largely white, educated, western and female group talking about how to fix everything should not raise flags

a largely white, educated, western and male group talking about how nothing needs to be fixed should not raise flags

a demographically diverse, uneducated, globalist, gender-neutral group talking about how it's impossible to fix anything should not raise flags

Each of the above statements continues to seem intuitively suspicious to me, which is at least some evidence that we're confused here if we ascribe overly high import to the demographics of origin.

Replies from: tkwa
comment by Thomas Kwa (tkwa) · 2021-10-06T03:32:32.728Z · EA(p) · GW(p)

I think all these groups need to be concerned, but about different things:

  • a largely white, educated, western, and male group talking about how to fix everything should (i) not repeat mistakes like racist eugenics, colonialism, etc., historically made by such groups, (ii) also think about other possible problems caused by its members being privileged/powerful
  • an uneducated group should not repeat mistakes made by past such groups (perhaps famines and atrocities caused by the Great Leap Forward), and anticipate other problems caused by its demographics by ensuring it has good epistemics, talent, and ways of being taken seriously
  • a largely black group should look at mistakes made by other predominantly black groups including black supremacists (perhaps becoming cults), ...
  • a group talking about how it's impossible to fix anything should look at mistakes made by past such groups (perhaps Calvinism), ...
  • Also, any group that isn't demographically diverse might want to become diverse if they think adding voices makes the direction of the movement better.

Some of these concerns can be easily dismissed (the NAACP doesn't need to try especially hard to not become a black supremacist cult because the prior probability of that is very low). But when thinking about plausible failure modes even a ~3:1 Bayes factor from demographics can be important, until we know about the actual causes of these failures and whether they apply to us.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-10-07T08:15:29.383Z · EA(p) · GW(p)

Mostly endorsed. 

Or perhaps more simply, if a small, non-representative group disagrees with the majority of humans, we should wonder why, and given base rates and the outside view, worry about failure modes that have affected similar small groups in the past.

comment by KathrynMecrow · 2021-10-05T14:48:05.234Z · EA(p) · GW(p)

I really love this article, thank you for taking the time to put it together. Obviously, I am biased, but I think a potentially strong second conclusion is that we should keep taking seriously the work of building a community with strong norms around attracting and retaining the best possible people from a diversity of approaches and areas of expertise. I worry much more that our failure mode will be inadvertently forming an echo chamber and missing or mis-weighing the importance or likelihood of ways we might be wrong or doing harm than I worry about overt bad faith.

comment by tessa · 2021-10-05T14:01:15.943Z · EA(p) · GW(p)

I want to note not just the skulls of the eugenic roots of futurism, but also the "creepy skull pyramid" of longtermists suggesting actions that harm current people in order to protect hypothetical future value.

This goes anywhere from suggestions to slow down AI progress [EA · GW], which seems comfortably within the Overton Window but risks slowing down economic growth and thus slowing reductions in global poverty, to the extreme actions suggested in some Bostrom pieces. Quoting the Current Affairs piece:

While some longtermists have recently suggested that there should be constraints on which actions we can take for the far future, others like Bostrom have literally argued that preemptive violence and even a global surveillance system should remain options for ensuring the realization of “our potential.”

Mind you, I don't think these tensions are unique to longtermism. In biosecurity, even if you're focused entirely on the near-term, there are a lot of trade-offs and tensions between preventing harm and securing benefits.

You might have really robust export controls that never let pathogens be shipped around the world... but that will make it harder for developing countries to build up their biomanufacturing capacity. Under the bioweapons convention you have a lot of diplomats arguing about balancing Article IV ("any national measures necessary to prohibit and prevent the development, production, stockpiling, acquisition or retention of biological weapons") and Article X ("the fullest possible exchange of equipment, materials and information for peaceful purposes"). That said, I think longtermist commitments can increase the relative importance of preventing harm.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-10-05T16:19:47.447Z · EA(p) · GW(p)

Thanks - I largely agree, and am similarly concerned about the potential for such impacts, as was discussed in the thread with John Halstead.

As an aside, I think Harper's LARB article was being generous in calling Phil's Current Affairs article "rather hyperbolic," and think its tone and substance are an unfortunate distraction from the various more reasonable criticisms Phil himself has suggested in the past.

comment by michaelchen · 2021-10-06T21:25:00.114Z · EA(p) · GW(p)

These largely focus on the (indisputable) fact that avoiding X-risks can be tied to racist or eugenic historical precedents. This should be worrying;

I think most social movements can be traced to some sort of unsavory historical precedent. For example:

I provide these examples not to criticize these movements, but because I think these historical connections are nearly irrelevant for assessing whether a present-day movement is valid or what it should be working to improve. (I'll allow that examining history is essential if we want to adopt the framing of redressing past harms.) What's more productive is to look at problematic behavior in the present and work on resolving that, and I don't see what history-based critiques add if we do that.

I think the standard apologetic response for an organization involved in a social movement would be to make a blog post describing these unsavory historical precedents and then calling for more action for diversity, equity, and inclusion. But that might not be a good strategy for longtermist organizations. The Tuskegee study is so well-known in the United States as an example of medical racism that it doesn't cost anything for a hospital to write a blog post about it, while the situation isn't analogous for longtermism.

Replies from: Davidmanheim, abrahamrowe
comment by Davidmanheim · 2021-10-07T08:20:44.743Z · EA(p) · GW(p)

I think that ignoring historical precedent is exactly what Scott was pointing out we aren't doing in his post, and I think the vast majority of EAs think it would be a mistake to do so now.

My point was that we're aware of the skulls, and cautious. Your response seems to be "who cares about the skulls, that was the past. I'm sure we can do better now." And coming from someone who is involved in EA, hearing that view from people interested in changing the world really, really worries me - because we have lots of evidence from studies of organizational decision making and policy that ignoring what went wrong in the past is a way to fail now and in the future.

Replies from: michaelchen
comment by michaelchen · 2021-10-09T01:03:00.967Z · EA(p) · GW(p)

I have a hard time seeing longtermism being at risk of embracing eugenics or racism. But it might be interesting to look at the general principles behind why people in the past advocated eugenics or racism - perhaps insufficient respect for individual autonomy - and try to learn from those more general lessons. Is that what you're arguing for in your post?

Replies from: Davidmanheim
comment by Davidmanheim · 2021-10-11T17:45:31.206Z · EA(p) · GW(p)

Yes. The ways that various movements have gone wrong certainly differ, and despite the criticism related to race, which I do think is worth addressing, I'm not primarily worried that longtermists will end up repeating specific failure modes [EA · GW] - different movements fail differently.

comment by abrahamrowe · 2021-10-06T23:05:58.128Z · EA(p) · GW(p)

It seems pretty bizarre to me to say that these historical examples are not at all relevant for evaluating present-day social movements. I think it's incredibly important that socialists, for example, reflect on why various historical figures and states acting in the name of socialism caused mass death and suffering, and likewise important for any social movement to look at its past mistakes, harms, etc., and try to reevaluate its goals in light of that.

To me, the examples you give just emphasize the post's point — I think it would be hard to find someone who did a lot of thinking on socialist topics who thought that there were no lessons to learn or belief changes to make after the human rights abuses in the Soviet Union were revealed. And if someone didn't think there were lessons there for how to approach making the world better today, that would seem completely unreasonable.

I also don't think the original post was asking longtermist orgs to make blog posts calling for action on diversity, equity, and inclusion. I think it was doing something more like asking longtermists to genuinely reflect on whether or not unsavory aspects of the intellectual movement's history are shaping the space today, etc.

comment by Miranda_Zhang (starmz12345@gmail.com) · 2021-10-09T01:02:49.963Z · EA(p) · GW(p)

Really appreciate and resonate with the spirit of this post. Something that's always intrigued me is the distance between the EA-flavored futurism that permeates the current longtermism community, and Afrofuturism. Both communities craft visions of the future, including utopias and dystopias, and consider themselves 'out of the norm.'

I suspect it's in part because the EA community generally does not talk much/explicitly about race and racial justice.

comment by Ramiro · 2021-10-08T15:49:38.986Z · EA(p) · GW(p)

I'd like to see how this "skull critique" will develop now that the UN has adopted a kind of longtermist stance.