Democratising Risk - or how EA deals with critics

post by CarlaZoeC · 2021-12-28T15:05:43.141Z · EA · GW · 309 comments

Luke Kemp and I just published a paper which criticises the field of existential risk studies for lacking a rigorous and safe methodology: 

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3995225

It could be a promising sign of epistemic health that critiques of leading voices come from early-career researchers within the community. Unfortunately, the creation of this paper has not been a sign of epistemic health. It has been the most emotionally draining paper we have ever written.

We lost sleep, time, friends, collaborators, and mentors because we disagreed on: whether this work should be published, whether potential EA funders would decide against funding us and the institutions we're affiliated with, and whether the authors whose work we critique would be upset.

We believe that critique is vital to academic progress. Academics should never have to worry about future career prospects just because they might disagree with funders. We take the prominent authors whose work we discuss here to be adults interested in truth and positive impact. Those who believe that this paper is meant as an attack against those scholars have fundamentally misunderstood what this paper is about and what is at stake. The responsibility of finding the right approach to existential risk is overwhelming. This is not a game. Fucking it up could end really badly.

What you see here is version 28. We have had approximately 20+ reviewers, around half of whom we sought out because we expected them to be sceptical of our arguments. We believe it is time to accept that many people will disagree with several points we make, regardless of how these are phrased or nuanced. We hope you will voice your disagreement based on the arguments, not on the perceived tone of this paper.

We always saw this paper as a reference point and a platform to encourage greater diversity, debate, and innovation. However, the burden of proof placed on our claims was unbelievably high in comparison to papers considered less “political” or simply closer to orthodox views. Making the case for democracy was heavily contested, despite reams of supporting empirical and theoretical evidence. In contrast, the idea of differential technological development and the NTI framework have been adopted wholesale despite almost no underpinning peer-reviewed research. I wonder how many of the ideas we critique here would have seen the light of day if the same suspicious scrutiny had been applied to more orthodox views and their authors.

We wrote this critique to help progress the field. We do not hate longtermism, utilitarianism, or transhumanism. In fact, we personally agree with some facets of each. But our personal views should barely matter. We ask of you what we have assumed to be true of all the authors we cite in this paper: that the author is not equivalent to the arguments they present, that arguments will change, and that what matters is not who said something but that it was said.

The EA community prides itself on its ability to invite and process criticism. However, a warm welcome of criticism was certainly not our experience in writing this paper.

Many EAs we showed this paper to exemplified the ideal. They assessed the paper’s merits on the basis of its arguments rather than group membership, engaged in dialogue, disagreed respectfully, and improved our arguments with care and attention. We thank them for their support and for meeting the challenge of reasoning in the midst of emotional discomfort. Others, however, accused us of lacking academic rigour and harbouring bad intentions.

We were told by some that our critique is invalid because the community is already very cognitively diverse and in fact welcomes criticism. They also told us that there is no TUA, and that if the approach does exist, it certainly isn’t dominant. It was these same people who then tried to prevent this paper from being published, largely out of fear that publishing might offend key funders who are aligned with the TUA.

These individuals—often senior scholars within the field—told us in private that they were concerned that any critique of central figures in EA would result in an inability to secure funding from EA sources, such as Open Philanthropy. We don't know if these concerns are warranted. Nonetheless, any field that operates under such a chilling effect is neither free nor fair. Having a handful of wealthy donors and their advisors dictate the evolution of an entire field is bad epistemics at best and corruption at worst.

The greatest predictor of how negatively a reviewer would react to the paper was their personal identification with EA. Writing a critical piece should not incur negative consequences on one’s career options, personal life, and social connections in a community that is supposedly great at inviting and accepting criticism.

Many EAs have privately thanked us for "standing in the firing line" because they found the paper valuable to read but would not dare to write it. Some tell us they have independently thought of and agreed with our arguments but would like us not to repeat their name in connection with them. This is not a good sign for any community, never mind one with such a focus on epistemics. If you believe EA is epistemically healthy, you must ask yourself why your fellow members are unwilling to express criticism publicly. We too considered publishing this anonymously. Ultimately, we decided to support a vision of a curious community in which authors should not have to fear their name being associated with a piece that disagrees with current orthodoxy. It is a risk worth taking for all of us. 

The state of EA is what it is due to structural reasons and norms (see this article [EA · GW]). Design choices have made it so, and they can be reversed and amended. EA fails not because the individuals in it lack good intentions; good intentions only get you so far.

EA needs to diversify its funding sources by breaking up big funding bodies and by reducing each org’s reliance on EA funding and tech-billionaire funding. It needs to produce academically credible work, set up whistle-blower protection, actively fund critical work, allow for bottom-up control over how funding is distributed, diversify the academic fields represented in EA, make the leaders' forum and funding decisions transparent, stop glorifying individual thought-leaders, and stop classifying everything as info hazards…amongst other structural changes. I now believe EA needs to make such structural adjustments in order to stay on the right side of history. 


Comments sorted by top scores.

comment by William_MacAskill · 2021-12-29T21:37:48.902Z · EA(p) · GW(p)

Hi - Thanks so much for writing this. I'm on holiday at the moment so have only been able to quickly skim your post and paper. But, having got the gist, I just wanted to say:
(i) It really pains me to hear that you lost time and energy as a result of people discouraging you from publishing the paper, or that you had to worry over funding on the basis of this. I'm sorry you had to go through that. 
(ii) Personally, I'm excited to fund or otherwise encourage engaged and in-depth "red team" critical work on either (a) the ideas of EA, longtermism or strong longtermism, or (b) what practical implications have been taken to follow from EA, longtermism, or strong longtermism. If anyone reading this comment would like funding (or other ways of making their life easier) to do (a) or (b)-type work, or if you know of people in that position, please let me know at will@effectivealtruism.org. I'll try to consider any suggestions, or put the suggestions in front of others to consider, by the end of January. 

Replies from: weeatquince, CarlaZoeC
comment by weeatquince · 2021-12-30T13:42:37.148Z · EA(p) · GW(p)

I just want to say that I think this is a beautifully accepting response to criticism. Not defensive. Says hey yes maybe there is a problem here. Concretely offers time and money and a plan to look into things more. Really lovely, thank you Will. 

comment by CarlaZoeC · 2021-12-31T17:47:16.013Z · EA(p) · GW(p)

Thanks for stating this publicly here, Will! 

comment by Nick_Beckstead · 2021-12-30T03:33:04.061Z · EA(p) · GW(p)

Hi Carla and Luke, I was sad to hear that you and others were concerned that funders would be angry with you or your institutions for publishing this paper. For what it's worth, raising these criticisms wouldn't count as a black mark against you or your institutions in any funding decisions that I make. I'm saying this here publicly in case it makes others feel less concerned that funders would retaliate against people raising similar critiques. I disagree with the idea that publishing critiques like this is dangerous / should be discouraged.

Replies from: HoldenKarnofsky, Jonas Vollmer, CarlaZoeC
comment by Holden Karnofsky (HoldenKarnofsky) · 2022-01-01T00:33:28.039Z · EA(p) · GW(p)

+1 to everything Nick said, especially the last sentence. I'm glad this paper was published; I think it makes some valid points (which doesn't mean I agree with everything), and I don't see the case that it presents any risks or harms that should have made the authors consider withholding it. Furthermore, I think it's good for EA to be publicly examined and critiqued, so I think there are substantial potential harms from discouraging this general sort of work.

Whoever told you that funders would be upset by your publishing this piece, they didn't speak for Open Philanthropy. If there's an easy way to ensure they see this comment (and Nick's), it might be helpful to do so.

comment by Jonas Vollmer · 2022-01-02T11:21:07.420Z · EA(p) · GW(p)

+1, EA Funds (which I run) is interested in funding critiques of popular EA-relevant ideas.

comment by CarlaZoeC · 2021-12-31T17:50:15.143Z · EA(p) · GW(p)

Thanks for saying this publicly too, Nick; this is helpful for anyone who might worry about funding. 

comment by Rubi · 2021-12-29T03:56:53.170Z · EA(p) · GW(p)

I thought the paper itself was poorly argued, largely as a function of biting off too much at once. Several times the case against the TUA was not actually argued, merely asserted to exist along with one or two citations for which it is hard to evaluate whether they represent a consensus. Then, while I thought the original description of the TUA was accurate, the TUA response to criticisms was entirely ignored. Statements like "it is unclear why a precise slowing and speeding up of different technologies...across the world is more feasible or effective than the simpler approach of outright bans and moratoriums" were egregious, and made it seem like you did not do your research. You spoke to 20+ reviewers, half of whom were sought out to disagree with you, and not a single one could provide a case for differential technological development? Not a single mention of the difficulty of incorporating future generations into the democratic process?

Ultimately, I think the paper would have been better served by focusing on a single section, leaving the rest to future work. The style of assertion rather than argument, and the skipping over of potential responses, comes across as more polemical than evidence-seeking. I believe that was the major contributor to the blowback you have received.

I agree that more diversity in funders would be beneficial. It is harmful to all researchers if access to future funding is dependent on the results of their work. Overall, though, the actual extent of the blowback is unclear from your post. What does "tried to prevent the paper being published" mean? Is the threat of withdrawn funding real or imagined? Were the authors whose work was criticized angry, and did they take any actions to retaliate?

Finally, I would like to abstract away from this specific paper. Good criticisms of the dominant paradigm limiting future funding and career opportunities would be a sign of terrible epistemics in a field. However, poor criticisms of the dominant paradigm limiting future funding and career opportunities is completely valid. The one line you wrote that I think all EAs would agree with is "This is not a game. Fucking it up could end really badly". If the wrong arguments being made would cause harm when believed, it is not only the right but the responsibility of funders to reduce their reach. Of course, the difficulty is in differentiating wrong criticisms from mere criticisms of the current paradigm, while standing within the current paradigm. The responsibility of the researcher is to make their case as bulletproof as possible, and designed to convince believers in the current paradigm. Otherwise, even if their claims are correct, they won't make an impact. The "Effective" part of EA includes making the right arguments to convince the right people, rather than the argument that is cathartic to unleash.

Replies from: Halstead, Davidmanheim, CarlaZoeC
comment by John G. Halstead (Halstead) · 2021-12-29T10:22:55.915Z · EA(p) · GW(p)

I would agree that the article is too wide-ranging. There's a whole host of content, ranging from criticisms of expected value theory and arguments for degrowth and democracy to criticisms of specific risk estimates. I agreed with some parts of the paper, but it is hard to engage with such a wide range of topics. 

Replies from: evelynciara
comment by BrownHairedEevee (evelynciara) · 2021-12-29T21:44:19.176Z · EA(p) · GW(p)

arguments for degrowth

Where? The paper doesn't mention economic growth at all.

Replies from: keller_scholl
comment by keller_scholl · 2021-12-30T01:26:40.513Z · EA(p) · GW(p)

The paper doesn't explicitly mention economic growth, but it does discuss technological progress, and at points seems to argue or insinuate against it.

"For others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent: it all depends on one’s notion of potential." Personally, I consider a long-term future with a 48.6% child and infant mortality rate  abhorrent and opposed to human potential, but the authors don't seem bothered by this. But they have little enough space to explain how their implied society would handle the issue, and I will not critique it excessively.

There is also a repeated implication that halting technological progress is, at a minimum, possible and possibly desirable.
"Since halting the technological juggernaut is considered impossible, an approach of differential technological development is advocated"
"The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible"
"regressing, relinquishing, or stopping the development of many technologies is often disregarded as a feasible option" implies to me that one of those three options is a feasible option, or is at least worth investigating.

While they don't explicitly advocate degrowth, I think it is reasonable to read them as doing such, as John does. [EA · GW]

Replies from: evelynciara, anonymousEA
comment by BrownHairedEevee (evelynciara) · 2021-12-30T06:26:06.487Z · EA(p) · GW(p)

"For others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent: it all depends on one’s notion of potential."

Point taken. Thank you for pointing this out.

"The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible"
"regressing, relinquishing, or stopping the development of many technologies is often disregarded as a feasible option" implies to me that one of those three options is a feasible option, or is at least worth investigating.

I think this is more about stopping the development of specific technologies - for example, they suggest that stopping AGI from being developed is an option. Stopping the development of certain technologies isn't necessarily related to degrowth - for example, many jurisdictions now ban government use of facial recognition technology, and there have been calls to abolish its use, but these are motivated by civil liberties concerns.

comment by anonymousEA · 2021-12-30T02:12:44.164Z · EA(p) · GW(p)

I think this conflates the criticism of the idea of unitary and unstoppable technological progress with opposition to any and all technological progress.

Replies from: keller_scholl
comment by keller_scholl · 2021-12-30T03:48:28.870Z · EA(p) · GW(p)

Suggesting that a future without industrialization is morally tolerable does not imply opposition to "any and all" technological progress, but the amount of space left is very small. I don't think they're taking an opinion on the value of better fishhooks.

Replies from: anonymousEA
comment by anonymousEA · 2021-12-30T11:03:52.510Z · EA(p) · GW(p)

It is morally tenable under some moral codes but not others. That's the point.

comment by Davidmanheim · 2021-12-29T08:11:46.532Z · EA(p) · GW(p)

Several times the case against the TUA was not actually argued


I think that they didn't try to oppose the TUA in the paper, or make the argument against it themselves. To quote: "We focus on the techno-utopian approach to existential risk for three reasons. First, it serves as an example of how moral values are embedded in the analysis of risks. Second, a critical perspective towards the techno-utopian approach allows us to trace how this meshing of moral values and scientific analysis in ERS can lead to conclusions, which, from a different perspective, look like they in fact increase catastrophic risk. Third, it is the original and by far most influential approach within the field."

I also think that they don't need to prove that others are wrong to show that the lack of diversity has harms - as you agreed.

If the wrong arguments being made would cause harm when believed, it is not only the right but the responsibility of funders to reduce their reach. 

That puts a huge and dictatorial responsibility on funders in ways that are exactly what the paper argued are inappropriate.

The responsibility of the researcher is to make their case as bulletproof as possible, and designed to convince believers in the current paradigm... The "Effective" part of EA includes making the right arguments to convince the right people, rather than the argument that is cathartic to unleash.

That's not how at least some people who lead the movement think about it [EA · GW].

Replies from: Rubi, Guy Raveh
comment by Rubi · 2021-12-29T09:19:24.553Z · EA(p) · GW(p)

That puts a huge and dictatorial responsibility on funders in ways that are exactly what the paper argued are inappropriate.


If not the funders, do you believe anyone should be responsible for ensuring harmful and wrong ideas are not widely circulated? I can certainly see the case that even wrong, harmful ideas should only be addressed by counterargument. However, I'm not saying that resources should be spent censoring wrong ideas harmful to EA, just that resources should not be spent actively promoting them. Funding is a privilege; consistently making bad arguments should eventually lead to the withdrawal of funding, and if, on top of that, those bad arguments are harmful to EA causes, that should expedite the decision.

To be clear, that is absolutely not to say that publishing Democratizing Risk is or was justification for firing or cutting funding; I am still very much talking abstractly.

Replies from: Guy Raveh
comment by Guy Raveh · 2021-12-29T09:40:41.539Z · EA(p) · GW(p)

If not the funders, do you believe anyone should be responsible for ensuring harmful and wrong ideas are not widely circulated?

I can't speak for David, but personally I think it's important that no one does this. Freedom of speech and freedom of research are important, and as long as someone doesn't call to intentionally harm or discriminate against another, it's important that we don't condition funding on agreement with the funders' views.

So,

Funding is a privilege; consistently making bad arguments should eventually lead to the withdrawal of funding, and if, on top of that, those bad arguments are harmful to EA causes, that should expedite the decision.

I completely disagree with this.

Replies from: Larks, Rubi, willbradshaw
comment by Larks · 2021-12-29T14:00:44.535Z · EA(p) · GW(p)

Freedom of speech and freedom of research are important, and as long as someone doesn't call to intentionally harm or discriminate against another, it's important that we don't condition funding on agreement with the funders' views.

This seems a very strange view. Surely an animal rights grantmaking organisation can discriminate in favour of applicants who also care about animals? Surely a Christian grantmaking organisation can condition its funding on agreement with the Bible? Surely a civil rights grantmaking organisation can decide to only donate to people who agree about civil rights?

Replies from: shaybenmoshe, Guy Raveh
comment by ShayBenMoshe (shaybenmoshe) · 2021-12-30T21:03:53.123Z · EA(p) · GW(p)

I am not sure that there is actually a disagreement between you and Guy.
If I understand correctly, Guy is saying that insofar as a funder wants research to be conducted to deepen our understanding of a specific topic, the funder should not judge researchers based on their conclusions about the topic, but based on the quality and rigor of their work in the field and their contributions to the relevant research community.
This does not seem to conflict with what you said, as the focus is still on work on that specific topic.

comment by Guy Raveh · 2021-12-29T14:43:34.632Z · EA(p) · GW(p)

When you say "surely", what do you mean? It would certainly be legal and moral. Would a body of research generated only by people who agree with a specific assumption be better in terms of truth-seeking than that of researchers receiving unconditional funding? Of that I'm not sure.

And now suppose it's hard to measure whether a researcher conforms with the initial assumption, and in practice it is done by continual qualitative evaluation by the funder - is it now really only that initial assumption (e.g. animals deserve moral consideration) that's the condition for funding, or is it now a measure of how much the research conforms with the funder's specific conclusions from that assumption (e.g. that welfarism is good)? In this case I have a serious doubt about whether the research produces valuable results (cf. publication bias).

comment by Rubi · 2021-12-29T10:30:50.886Z · EA(p) · GW(p)

it's important that we don't condition funding on agreement with the funders' views.

Surely we can condition funding on the quality of the researcher's past work though? Freedom of speech and freedom of research are both important, but taking a heterodox approach shouldn't guarantee a sinecure either. 

If you completely disagree that people consistently producing bad work should not be allocated scarce funds, I'm not sure we can have a productive conversation.

Replies from: Guy Raveh
comment by Guy Raveh · 2021-12-29T10:47:44.394Z · EA(p) · GW(p)

If you completely disagree that people consistently producing bad work should not be allocated scarce funds, I'm not sure we can have a productive conversation.

I theoretically agree, but I think it's hard to separate judgements about research quality from disagreement with its conclusions, or even unrelated opinions on the authors.

For example, I don't think the average research quality by non-tenured professors (who are supposedly judged by the merits of their work) is better than that of tenured professors.

Replies from: willbradshaw
comment by Will Bradshaw (willbradshaw) · 2021-12-29T20:11:18.683Z · EA(p) · GW(p)

I think this might just be unavoidably hard.

Like, it seems clear that funders shouldn't fund work they strongly & rationally believe is low-quality and/or harmful. It also seems clear that reasonably free and open dissent is crucial for the epistemic health of a research community. But how exactly funders should go about managing that conflict seems very unclear – especially in domains in which infohazards are real and serious (but in which all the standard problems with secrecy still absolutely apply).

I would be interested in concrete proposals for how to do this well, given all the considerations raised in this discussion, as well as case studies of how other communities have maintained epistemic diversity and their relevance to our use case. The main example of the latter would be academia, but that has its own set of severe pathologies – especially in the humanities, which seem to be the fields one would most naturally use as control groups for existential risk studies. I think working on concrete ideas and examples of how we can do better would be more useful than arguing round and round the quality-vs-epistemic-diversity circle.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-12-31T14:19:41.369Z · EA(p) · GW(p)

The paper points out, among many other things, that more diversity in funders would help accomplish most of these goals.

Replies from: willbradshaw
comment by Will Bradshaw (willbradshaw) · 2021-12-31T16:56:10.571Z · EA(p) · GW(p)

I agree that more diversity in funders would help with these problems. It would inevitably bring in some epistemic diversity, and make funding less dependent on maintaining a few specific interpersonal relationships. If funders were more numerous and less inclined to defer to each others' analyses, then someone persistently failing to get funding would represent somewhat stronger evidence that the quality of their work is actually low.

That said, I don't think this solves the problem. Distinguishing between bad (negatively-valuable) work and work you're biased against because you disagree with it will still be hard in many cases, and more & better thinking about concrete approaches to dealing with this still seems preferable to throwing broad intuitions at each other.

Plus, adding more funders to EA adds its own tricky balancing acts. If your funders are too aligned with each other, you're basically in the same position you were with only one funder. If they're too unaligned, you end up with some funders regularly funding work that other funders think is bad for the world, and the movement either splinters or becomes too broad and vague to be useful.

Replies from: Davidmanheim
comment by Davidmanheim · 2022-01-02T10:37:28.945Z · EA(p) · GW(p)

I agree, but you said that it would be good to have concrete ideas, and this is as concrete as I can imagine. And so on the meta level, I think it's a bit unreasonable to criticize the paper's concrete suggestion by saying that it's a hard problem and that their ideas would help but wouldn't be a panacea - clearly, if "fixes everything" is the bar for concrete ideas, we should all go home.

On the object level, I agree that there are challenges, but I think the dichotomy between too few and too many funders is overstated in two ways. First, multiple funders who are aligned are still far less likely to end up not funding people because those people upset someone specific; even in overlapping social circles, this problem is mitigated. Second, I think that "too unaligned" describes the current situation, where much funding goes to things that are approximately useless, some fraction goes to good but non-optimal things, and some goes to things that actively increase dangers. And having more semi-aligned money seems unlikely to make EA more broad and vague than the status quo, where EA is already at least three different movements (global poverty/human happiness, animal welfare, and longtermist existential risk), and probably more, since if you look into those groups you could easily split them further. So I'd actually think that splitting things into distinct funders would be a great improvement, allowing for greater diversity and clarity about what is being funded and pursued.

Replies from: willbradshaw
comment by Will Bradshaw (willbradshaw) · 2022-01-02T11:24:23.190Z · EA(p) · GW(p)

Firstly, I wasn't responding to the OP; I was responding several levels into a conversation between two different commentators about the responsibility of individual funders to reward critical work. I think this is an important and difficult problem that doesn't go away even in a world with more diverse funding. You brought up "diversify funding" as a solution to that problem, and I responded that it is helpful but insufficient. I didn't say anything critical of the OP's proposal in my response. Unless you think the only reasonable response would be to stop talking about an issue as soon as one partial solution is raised, I don't understand your accusation of unreasonableness here at all.

Secondly, "have more diversity in funders" is not remotely a concrete proposal. It's a vague desideratum that could be effected in many different ways, many of which are probably bad. If that is "as concrete as [you] can imagine" then we are operating under different definitions of "concrete".

Replies from: Davidmanheim
comment by Davidmanheim · 2022-01-02T12:39:54.203Z · EA(p) · GW(p)

I don't really want the discussion to focus entirely on the meta-level, but the conversation went something like "we can condition funding on the quality of the researcher's past work" -> "I think it's hard to separate judgements about research quality from disagreement with its conclusions, or even unrelated opinions on the authors." -> "more diversity in funders would help" (which was the original claim in the post!) -> "I don't think this solves the problem...more & better thinking about concrete approaches to dealing with this still seems preferable to throwing broad intuitions at each other." So I pointed out that more diversity, which was the post's suggestion and what I was referring back to, was as concrete a solution as I can imagine to the general issue of "it's hard to separate judgements about research quality from disagreement with its conclusions". But I don't think we're using different definitions at all. At this point, it seems clear you wanted something more concrete ("have Open Phil split its budget in the following way"), but that wouldn't have solved the general problem being discussed - which is why I said I can't imagine a more concrete solution to the problem you were discussing.

In any case, I'm much more interested in the object level discussion of what would help, or not, and why.

comment by Will Bradshaw (willbradshaw) · 2021-12-29T21:46:04.619Z · EA(p) · GW(p)

I suspect I disagree with the users that are downvoting this comment. The considerations Guy raises in the first half of this comment are real and important, and the strong form of the opposing view (that anyone should be "responsible for ensuring harmful and wrong ideas are not widely circulated" through anything other than counterargument) is seriously problematic and, in my view, prone to lead to some pretty dark places.

A couple of commenters here have edged closer to this strong view than I'm comfortable with, and I'm happy to see pushback against that. If strong forms of this view are more prevalent among the community than I currently think, that would for me be an update in favour of the claims made in this post/paper.

That said, I do agree that "consistently making bad arguments should eventually lead to the withdrawal of funding", and that this problem is hard (see my other reply to Guy below).

Replies from: jsteinhardt
comment by jsteinhardt · 2021-12-29T22:17:42.442Z · EA(p) · GW(p)

I also agree with you. I would find it very problematic if anyone was trying to "ensure harmful and wrong ideas are not widely circulated". Ideas should be argued against, not suppressed.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-12-31T14:23:22.683Z · EA(p) · GW(p)

Ideas should be argued against, not suppressed.


All ideas? Instructions for how to make contact poisons that aren't traceable? Methods for identifying vulnerabilities in nuclear weapons arsenals' command and control systems? Or, concretely and relevantly, ideas about which ways to make omnicidal bioweapons are likely to succeed. 

You can tell me that making information more available is good, and I agree in almost all cases. But only almost all.

Replies from: jsteinhardt
comment by jsteinhardt · 2021-12-31T15:16:04.894Z · EA(p) · GW(p)

It seems clear that none of the content in the paper comes anywhere close to your examples. These are also more like "instructions" than "arguments", and Rubi was calling for suppressing arguments on the grounds that they might be believed.

Replies from: Davidmanheim
comment by Davidmanheim · 2022-01-02T08:22:07.775Z · EA(p) · GW(p)

The claim was a general one - I certainly don't think that the paper was an infohazard, but the idea that this implies that there is no reason for funders to be careful about what they fund seems obviously wrong.

The original question was: "If not the funders, do you believe anyone should be responsible for ensuring harmful and wrong ideas are not widely circulated?"

And I think we need to be far more nuanced about the question than a binary response about all responsibility for funding.

comment by Guy Raveh · 2021-12-29T09:36:32.968Z · EA(p) · GW(p)

I mostly agree with your comments, but I think we need to stop referring to specific people as leaders of the movement. Will MacAskill's opinion is not really more important than anyone else's.

Replies from: Davidmanheim
comment by Davidmanheim · 2022-01-02T10:41:24.443Z · EA(p) · GW(p)

I disagree pragmatically and conceptually. First, people pay more attention to Will than to me about this, and that's good, since he's spent more time thinking about it, is smarter, and has more insight into what is happening. Second, in fact, movements have leaders, and egalitarianism is great for rights, but direct democracy is a really bad solution for running anything which wants to get anything done. (Which seems to be a major thing I disagree with the authors of the article on.)

comment by CarlaZoeC · 2021-12-31T12:03:27.889Z · EA(p) · GW(p)

Saying the paper is poorly argued is not particularly helpful or convincing. Could you highlight where and why, Rubi? Breadth does not de facto mean poorly argued. If that were the case, then most of the key texts in x-risk would be poorly argued.

Importantly, breadth was necessary to make a critique. There are simply many interrelated matters that are worth critical analysis. 

Several times the case against the TUA was not actually argued, merely asserted to exist along with one or two citations for which it is hard to evaluate if they represent a consensus.

As David highlights in his response: we do not argue against the TUA, but point out the unanswered questions we observed. We do not argue against the TUA, but highlight assumptions that may be incorrect or smuggle in values. Interestingly, it's hard to see how you can believe the piece is both a polemic and also not a sufficiently direct critique of the TUA. Those two criticisms are in tension.

If you check our references, you will see that we cite many published papers that treat common criticisms and open questions of the TUA (mostly by advancing the research). 

You spoke to 20+ reviewers, half of which were sought out to disagree with you, and not a single one could provide a case for differential technology?

Of course there are arguments for it, some of which are discussed in the forum. Our argument is that there is a lack of peer-reviewed evidence to support differential technological development as a cornerstone of a policy approach to x-risk. Asking that we articulate and address every hypothetical counterargument is an incredibly high bar, and one that is not applied to any other literature in the field (certainly not the key articles of the TUA we focus on). It would also make the paper far longer and broader. Again, your points are in tension.

I think the paper would have been better served by focusing on a single section, leaving the rest to future work. The style of assertions rather than argument and skipping over potential responses comes across as more polemical than evidence-seeking.

Then it wouldn't be a critique of the TUA. It would be a piece on differential tech development or hazard-centrism. 

This is remarkably similar to a critique we got from Seán Ó hÉigeartaigh. He both said that we should zoom in and focus on one section, and that we should zoom out and compare the TUA against all (?) potential alternatives. The recommendations are in tension and only share the commonality of making sure we write a paper that isn't a critique.

We see many remaining problems in x-risk. This paper is an attempt to list those issues and point out their weaknesses and areas for improvement. It should be read similar to a research agenda. 

The abstract and conclusion clearly spell out the areas where we take a clear position, such as the need for diversity in the field, taking lessons from complex risk assessments in other areas, and democratising policy recommendations. We do not articulate a position on degrowth, differential tech development, etc. We highlight that the existing evidence base and arguments for them are weak.

We do not position ourselves in many cases because we believe they require further detailed work and deliberation. In that sense I agree with you that we're covering too much, but only if the goal were to present clear positions on all these points. Since this was not the goal, I think it's fine to list many remaining questions and point out that they do indeed still require answers. If you have a strong opinion on any of the questions we mention, then go ahead, write a paper that argues for one side, publish it, and let's get on with the science.

Seán also called the paper a polemic several times (by definition: a strong, hostile, critical verbal or written attack). This is not necessarily an insult (Orwell's Animal Farm is considered a polemic against totalitarianism), but I'm guessing it's not meant that way.

We are somewhat disappointed that one of the most upvoted responses on the forum to our piece is so vague and unhelpful. We would expect a community that has such high epistemic standards to reward comments that articulate clear, specific criticisms grounded in evidence and capable of being acted on.

Finally, on the 'speaking abstractly' point about funding: it is hard not to see this as an insinuation that we have consistently produced such poor scholarship that it would justify withdrawing funding. Again, this does not signal anything positive about the epistemics, or just the sheer civility, of the community.

Replies from: Rubi, anonymousEA
comment by Rubi · 2022-01-02T22:00:33.681Z · EA(p) · GW(p)

Hi Carla,

Thanks for taking the time to engage with my reply. I'd like to engage with a few of the points you made.

First of all, my point prefaced with 'speaking abstractly' was genuinely that. I thought your paper was poorly argued, but certainly within acceptable limits that it should not result in withdrawn funding. On a sufficient timeframe, everybody will put out some duds, and your organizations certainly have a track record of producing excellent work. My point was about avoiding an overcorrection, where consistently low quality work is guaranteed some share of scarce funding merely out of fear that withdrawing such funding would be seen as censorship. It's a sign of healthy epistemics (in a dimension orthogonal to the criticisms of your post) for a community to be able to jump from a specific discussion to the general case, but I'm sorry you saw my abstraction as a personal attack.

You say "we do not argue against the TUA, but point out the unanswered questions we observed... but highlight assumptions that may be incorrect or smuggle in values". Pointing out unanswered questions and incorrect assumptions is how you argue against something! What makes your paper polemical is that you do not sufficiently check whether the questions really are unanswered, or whether the assumptions really are incorrect. There is no tension between calling your paper polemical and saying you do not sufficiently critique the TUA. A more thorough critique that took counterarguments seriously and tried to address them would not be a polemic, as it would more clearly be driven by truth-seeking than hostility.

I was not "asking that we [you] articulate and address every hypothetical counterargument"; I was asking that you address any, especially the most obvious ones. Don't just state that "it is unclear why" they are believed in order to skip over a counterargument.

I am disappointed that you used my original post to further attack the epistemics of this community, and doubly so for claiming it failed to articulate clear, specific criticisms. The post was clear that the main failing I saw in your paper was a lack of engagement with counterarguments, specifically the case for technological differentiation and the case for avoiding the disenfranchisement of future generations through a limited democracy. I do not believe that my criticism of the paper jumping around too much rather than engaging deeply on fewer issues was ambiguous either. Ignoring these clear, specific criticisms to use the post as evidence of poor epistemics in the EA community makes me think you may be interpreting any disagreement as evidence for your point.

comment by anonymousEA · 2021-12-31T12:50:43.609Z · EA(p) · GW(p)

You spoke to 20+ reviewers, half of which were sought out to disagree with you, and not a single one could provide a case for differential technology?

Personally I read this as a straightforward accusation of dishonesty - something I would expect moderators to object to if the comment was critical (rather than supportive) of EA orthodoxy.

This is remarkably similar to a critique we got from Seán Ó hÉigeartaigh

I also found it suspicious that Rubi felt the need to comment using an anonymous throwaway account despite speaking in favor of established power structures.

To clarify, that's not to say Rubi is necessarily Seán Ó hÉigeartaigh. I have no idea and I don't know Seán.

However, this situation is very strange. Almost everyone on the EAforum uses their real name or a very thin pseudonym.

I am anonymous because vocally disagreeing with the status quo would probably destroy any prospects of getting hired or funded by EA orgs (see my heavily downvoted comment about my experiences somewhere at the bottom of this thread).

This clearly doesn't apply to Rubi, so what's up?

Replies from: Sean_o_h, aarongertler, Rubi, aarongertler
comment by Sean_o_h · 2021-12-31T14:37:39.748Z · EA(p) · GW(p)

Seán Ó hÉigeartaigh here. Since I have been named specifically, I would like to make it clear that when I write here, I do so under Sean_o_h, and have only ever done so. I am not Rubi, and I don't know who Rubi is. I ask that the moderators check IP addresses, and reach out to me for any information that can help confirm this.

I am on leave and have not read the rest of this discussion, or the current paper (which I imagine is greatly improved from the draft I saw), so I will not participate further in this discussion at this time.

comment by Aaron Gertler (aarongertler) · 2022-01-02T21:56:54.910Z · EA(p) · GW(p)

I am anonymous because vocally disagreeing with the status quo would probably destroy any prospects of getting hired or funded by EA orgs (see my heavily downvoted comment about my experiences somewhere at the bottom of this thread).

This clearly doesn't apply to Rubi, so what's up?

There are many reasons for people to use pseudonyms on the Forum, and we allow it with few restrictions [? · GW]. It's also fine to have multiple accounts.

To clarify, that's not to say Rubi is necessarily Seán Ó hÉigeartaigh. I have no idea and I don't know Seán.

However, this situation is very strange.

What exactly is "suspicious" or "strange" here? What is the thing you suspect, and is that thing against the Forum's rules? If not, do you think it should be?

Using vague insinuations instead of straightforwardly accusing someone doesn't change the result — which is that Seán understandably feels like he's been called out and needs to deny the "non-accusation". What were you trying to accomplish by talking about Seán here?

*****

You've now made several comments in this thread that were rude or insulting towards other users. That's not okay [? · GW], whether or not your position happens to align with any "status quo". (See these [EA(p) · GW(p)] examples [EA(p) · GW(p)] of comments being moderated for exactly this reason despite their position on the "popular" side of whatever thread they were a part of.)

If you want to object to someone's argument, state your objection. Explain why they're wrong, or what they've missed. This is almost always better than "I find this user suspicious" or "this user is acting in bad faith".

Several of your comments on this thread were good. I appreciated the links here [EA(p) · GW(p)] and some of the questions here [EA(p) · GW(p)]. But if you continue posting rude or insulting comments, the moderation team may take action.

comment by Rubi · 2022-01-02T22:26:18.670Z · EA(p) · GW(p)

To clear up my identity, I am not Seán and do not know him. I go by Rubi in real life, although it is a nickname rather than my given name. I did not mean for my account to be an anonymous throwaway, and I intend to keep on using this account on the EA Forum. I can understand how that would not be obvious as this was my first post, but that is coincidental. The original post generated a lot of controversy, which is why I saw it and decided to comment.

You spoke to 20+ reviewers, half of which were sought out to disagree with you, and not a single one could provide a case for differential technology?

I would have genuinely liked an answer to this. If none of the reviewers made the case, that is useful information about the selection of the reviewers. If some reviewers did, but were ignored by the authors, then it reflects negatively on the authors not to address this and to say that the case for differential technology is unclear.

comment by Aaron Gertler (aarongertler) · 2022-01-01T10:00:22.372Z · EA(p) · GW(p)

Personally I read this as a straightforward accusation of dishonesty - something I would expect moderators to object to if the comment was critical (rather than supportive) of EA orthodoxy.

As a moderator, I wouldn't object to this comment no matter who made it. I see it as a criticism of someone's work, not an accusation that the person was dishonest.

If someone wrote a paper critiquing the differential technology paradigm and spoke to lots of reviewers about it — including many who were known to be pro-DT — but didn't cite any pro-DT arguments, it would be fine for someone to ask: "Did you really not hear any cases for the DT paradigm?"

The question doesn't have to mean "you deliberately acted like there were no good pro-DT arguments and hoped we would believe you". That would frankly be a silly thing to say, since Carla and Luke are obviously familiar with these arguments and know that many of their readers would also be familiar with these arguments.

It could also imply:

  1. "You didn't ask the kinds of questions of reviewers that would lead them to spell out their cases for DT"
  2. "You didn't make room in your paper to discuss the pro-DT arguments you heard, and I think you should have"

Or, more straightforwardly, you could avoid assuming any particular implication and just read the question as a question: "Why were there no pro-DT arguments in your piece?"

I personally read implication (1), because of the statement "...made it seem like you did not do your research".

Carla's response read to me as a response to implication (2): "We chose not to discuss pro-DT arguments, because trying to give that kind of space to counterarguments for all of our points would be beyond the scope of our paper." Which is a fine, reasonable response.

I think Rubi's comment should have been more clear; it's more important for questioners to ask good questions than for respondents to correctly guess at what the questioner meant.

Overall, as a moderator, my response to this part of Rubi's comment is "this is unclear and could mean many things — perhaps one of these things is uncivil, but Carla answered a civil version of the question, and I'm not going to deliberately choose to interpret the question as the most uncivil version of itself."

*****

On the level of meta-moderation, these are the things I personally look for*, in rough priority order (other mods may differ):

  1. Comments that clearly insult another user
  2. Comments that include an information hazard or advocate for seriously harmful action
  3. Comments that interfere with good discourse in other ways

If you say "Rubi's comment is unclear, which means it's in category (3)" — you'd be right, but there are a lot of comments that are unclear, and it isn't realistic for moderators to respond to more than a tiny fraction of them, which means I focus on comments in the first two categories.

If you say "Rubi's comment could be taken to imply an insult, which means it's in category (1)" — I disagree, because I don't see any insulting read as "clear", and there are plenty of other ways to interpret the comment.

And of course, the specific position someone takes in a debate has no bearing on how we moderate, unless a particular position is in category (2) ("we should release a plague to kill everyone").

*I should also mention that I'm a human with limited human attention. So I'm not going to see every comment on every post. That's why every post comes with a "report" option, which people should really use if they think a comment should be moderated:

If you report a post or comment, one or more mods will definitely look at it and at least consider your argument for why it was reportable.

Something not being moderated doesn't imply that it's definitely fine — it could also mean the mods haven't read it, or that the mods didn't read it with "moderator vision" on. There have been times I read a comment in my off time, then saw the same comment reported later and said "oh, huh, this probably should be moderated".

Replies from: anonymousEA
comment by anonymousEA · 2022-01-01T10:51:19.824Z · EA(p) · GW(p)

Honestly, fair enough.

comment by Will Bradshaw (willbradshaw) · 2021-12-31T18:34:05.244Z · EA(p) · GW(p)

I share the surprise and dismay other commenters have expressed about the experience you report around drafting this preprint. While I disagree with many claims and framings in the paper, I agreed with others, and the paper as a whole certainly doesn't seem beyond the usual pale of academic dissent. I'm not sure what those who advised you not to publish were thinking.

In this comment, I'd like to turn attention to the recommendations you make at the end of this Forum post (rather than the preprint itself). Since these recommendations are somewhat more radical than those you make in the preprint, I think it would be worth discussing them specifically, particularly since in many cases it's not clear to me exactly what is being proposed.

Having written what follows, I realise it's quite repetitive, so if in responding you wanted to pick a few points that are particularly important to you and focus on those, I wouldn't blame you!


You claim that EA needs to...

diversify funding sources by breaking up big funding bodies 

Can you explain what this would mean in practice? Most of the big funding bodies are private foundations. How would we go about breaking these up? Who even is "we" in this instance?

[diversify funding sources] by reducing each orgs’ reliance on EA funding and tech billionaire funding

What sorts of funding sources do you think EA orgs should be seeking, other than EA orgs and individual philanthropists (noting that EA-adjacent academic researchers already have access to the government research funding apparatus)?

produce academically credible work

Speaking as a researcher who has spent a lot of time in academia, I think how much I care about work being "academically credible" depends a lot on the field. In many cases, I think post-publication review in places like the Forum is more robust and useful than pre-publication academic review.

Many academic fields (especially in the humanities) seem to have quite bad epistemic and political cultures, and even those that don't often have very particular ideas of what sorts of problems & approaches are suitable for peer-reviewed articles (e.g. requiring that work be "interesting" or "novel" in particular ways). And the current peer-review system is well-known to be painfully inadequate in many ways.

I don't want to overstate this – I think there are many cases where the academic publication route is a good option, for many reasons. But I've read a lot of pretty bad academic papers in my time, sometimes in prestigious journals, and it's not all that rare for a Forum report to significantly exceed the quality of the academic literature. I don't think academic credibility per se is something we should be aiming for for epistemic reasons. But perhaps you had other benefits in mind?

set up whistle-blower protection

Can you elaborate on what sorts of concrete systems you think would be useful here? Whistle-blower protection is usually intra-organisational – is this what you have in mind here, or are you imagining something more pan-community?

actively fund critical work

This sounds great, but I think is probably quite hard to implement [EA(p) · GW(p)] in practice in a way that seems appealing. A lot depends on the details. Can you elaborate on what sorts of concrete proposals you would endorse here?

For example, do you think OpenPhil should deliberately fund "red-team" work they disagree with, solely for the sake of community epistemics? If so, how should they go about doing that?

allow for bottom-up control over how funding is distributed

I think having ways to aggregate small-donor preferences regarding EA grantees is valuable. I don't think it should replace large philanthropic donors with concentrated expertise. But I think I'd have a better opinion if I had a better idea of what you were advocating.

diversify academic fields represented in EA

This isn't something you can just change by fiat. You could modify the core messages of EA to deliberately appeal to a wider variety of backgrounds, but that seems like it has a lot of important downsides. Again, I think I would need a better idea of what exactly you have in mind as interventions to really evaluate this.

make the leaders' forum and funding decisions transparent

These seem like two different cases. I'm generally pro public reporting of grants, but I don't really know what you have in mind for the leaders' forum (or other similar meetings).

stop glorifying individual thought-leaders

I'm guessing for more detail on this we should refer to the section on intelligence from your earlier post [EA · GW]? I'm torn between sympathy and scepticism here, and don't feel like I have much to add, so let's move on to...

stop classifying everything as info hazards

OK, but how do you handle actual serious information hazards?

I'm on record in various places (e.g. here [EA · GW]) saying that I think secrecy has lots of really serious downsides, and I still think these downsides are frequently underrated by many EAs. I certainly think that there is substantial progress still to be made in improving how we think about and deal with these problems. But that doesn't make the core problem go away – sometimes information really is hazardous, in a fairly direct (though rarely straightforward) way.

Replies from: willbradshaw
comment by Will Bradshaw (willbradshaw) · 2022-01-29T09:58:47.899Z · EA(p) · GW(p)

While I appreciate that we're all busy people with many other things to do than reply to Forum comments, I do think I would need clarification (and per-item argumentation) of the kind I outline above in order to take a long list of sweeping changes like this seriously, or to support attempts at their implementation.

Especially given the claim that "EA needs to make such structural adjustments in order to stay on the right side of history".

comment by CarlaZoeC · 2021-12-28T22:53:10.979Z · EA(p) · GW(p)

Here's a Q&A which answers some of the questions asked by reviewers of early drafts. (I planned to post it quickly, but your comments came in so soon! Some of the comments hopefully find a reply here.)

"Do you not think we should work on x-risk?"

  • Of course we should work on x-risk

 

"Do you think the authors you critique have prevented alternative frameworks from being applied to Xrisk?"

  • No. It’s not really them we’re criticising if at all. Everyone should be allowed to put out their ideas. 
  • But not everyone currently gets to do that. We really should have a discussion about what authors and what ideas get funding.

 

"Do you hate longtermism?"

  • No. We are both longtermists (probs just not the techno utopian kind).

 

"You’re being unfair to Nick Bostrom. In the vulnerable world hypothesis, Bostrom merely speculates that such a surveillance system may, in a hypothetical world in which VWH is true, be the only option"

  • It doesn't matter whether Nick Bostrom merely speculates about, or actually wants to implement, surveillance globally. With respect to what we talk about (the justification of extreme actions), what matters is how readers perceive his work and who the readers are.
  • There’s some hedging in the article but…
  • He published in a policy journal, with an opening ‘policy implication’ box 
  • He published an outreach article about it in Aeon, which also ends with the sentence: ”If you find yourself in a position to influence the macroparameters of preventive policing or global governance, you should consider that fundamental changes in those domains might be the only way to stabilise our civilisation against emerging technological vulnerabilities.”
  • In public facing interviews such as with Sam Harris and on TED, the idea of ‘turnkey totalitarianism’ was made the centrepiece. This was not framed as one hypothetical, possible, future solution for a philosophical thought experiment. 
  • The VWH was also published as a German book (why I don’t know…)
  • Seriously if we’re not allowed to criticise those choices, what are we allowed to criticise?

 

"Do you think longtermism is by nature techno-utopian?"

  • In theory, no. Intergenerational justice is an old idea. Clearly there are versions of longtermism that do not have to rely on the current set of assumptions. Longtermist thinking is a good idea. 
  • In practice, most longtermists tend to operate firmly under the TUA. This is seen in the visions they present on the future, the value placed on continued technological and economic growth etc.


"Who is your target audience?"

  • Junior researchers who want to do something new and exciting in x-risk and 
  • External academics who have thus far felt repelled by the TUA framing of the x-risk and might want to come into the field and bring in their own perspective 
  • Anyone who really loves the TUA and wants to expose themselves to a different view 
  • Anyone who doubted the existing approaches but could not quite put a finger on why
  • Our audience is not: philosophers working on x-risk who are thinking about these issues day and night and who are well aware of some of the problems we raise.

 

"Do you think we should abandon the TUA entirely?"

  • No. Especially those who feel personally compelled to work on the TUA or who have built an expertise in it, are obviously free to work on it. 
  • We just shouldn’t pressure everyone else to do that too.

 

"Why didn’t you cite paper X?"

  • Sorry, we probably missed it. We’re covering an enormous amount in this paper. 

 

"Why didn’t you cite blogpost X? "

  • We constrained our lit search to papers that have the ambition to get through academic peer review. We also don’t read as many blog posts. That said, we appreciate that some people have raised similar concerns as we do on Twitter and on Blogs. We don’t think this renders a more formal listing of the concerns useless. 

 

"You critique we need to solve problem X but Y has already written a paper on X!"

  • Great! Then we support Y having written that paper! We invite more people to do what Y did. Do you think this was enough and the problem is now solved? Do you think there are no valuable alternative papers to be written so that it’s ridiculous to have said we need more work on X?

 

"Why is your language so harsh? Or: Your language should have been more harsh!"

  • Believe it or not we got both perspectives - for some people the paper is beating around the bush too much, for others it feels like a hostile attack. We could not please them all. 
  • Maybe ask yourself what makes you as a reader fall into one of these categories?
Replies from: Davidmanheim, Raven
comment by Davidmanheim · 2021-12-29T07:57:25.415Z · EA(p) · GW(p)

Just noting that I strongly endorse both this format for responding to questions, and the specific responses.

comment by Raven · 2021-12-29T18:46:00.510Z · EA(p) · GW(p)

With regard to harshness, I think part of the reason you get different responses is that you're writing in the genre of the academic paper. Since authors have to write in a particular formal style, it's ambiguous whether they intend a value judgment. Often authors do want readers to come away with a particular view, so it's not crazy to read their judgments into the text, but different readers will draw different conclusions about what you want them to feel or believe.

For example:

Under the TUA, an existential risk is understood as one with the potential to cause human
extinction directly or lead us to fail to reach our future potential, expected value, or
technological maturity. This means that what is classified as a prioritised “risk” depends on a
threat model that involves considerable speculation about the mechanisms which can result in the death of all humans, their respective likelihoods, and a speculative and morally loaded
assessment of what might constitute our inability to reach our potential.

[...]

A risk perception that depends so strongly on speculation and yet-to-be-verified assumptions will inevitably (to varying degrees) be an expression of researchers’ personal preferences, biases, and imagination. If collective resources (such as research funding and public attention) are to be allocated to the highest priority risk, then ERS should attempt to find a more evidence-based, replicable prioritisation procedure.

As with many points in your paper, this is literally true, and I appreciate you raising awareness of it! In a different context, I might read this as basically a value-neutral call to arms. Given the context, it's easy to read into it some amount of value judgment around longtermism and longtermists. 

comment by John G. Halstead (Halstead) · 2021-12-28T17:29:53.482Z · EA(p) · GW(p)

The discussion of Bostrom's Vulnerable World Hypothesis seems very uncharitable. Bostrom argues that on the assumption that technological development makes the devastation of civilisation extremely likely, extreme policing and surveillance would be one of the few ways out. You give the impression that he is arguing for this now in our world ("There is little evidence that the push for more intrusive and draconian policies to stop existential risk is either necessary or effective"). But this is obviously not what he is proposing - the vulnerable world hypothesis is put forward as a hypothesis and he says he is not sure whether it is true. 

Moreover, in the paper, Bostrom discusses at length the obvious risks associated with increasing surveillance and policing:

"It goes without saying that a mechanism that enables unprecedentedly intense forms of surveillance, or a global governance institution capable of imposing its will on any nation, could also have bad consequences. Improved capabilities for social control could help despotic regimes protect themselves from rebellion. Ubiquitous surveillance could enable a hegemonic ideology or an intolerant majority view to impose itself on all aspects of life, preventing individuals with deviant lifestyles or unpopular beliefs from finding refuge in anonymity. And if people believe that everything they say and do is, effectively, ‘on the record’, they might become more guarded and blandly conventional, sticking closely to a standard script of politically correct attitudes and behaviours rather than daring to say or do anything provocative that would risk making them the target of an outrage mob or putting an indelible disqualifying mark on their resume. Global governance, for its part, could reduce beneficial forms of inter-state competition and diversity, creating a world order with single point of failure: if a world government ever gets captured by a sufficiently pernicious ideology or special interest group, it could be game over for political progress, since the incumbent regime might never allow experiments with alternatives that could reveal that there is a better way. Also, being even further removed from individuals and culturally cohesive ‘peoples’ than are typical state governments, such an institution might by some be perceived as less legitimate, and it may be more susceptible to agency problems such as bureaucratic sclerosis or political drift away from the public interest."

Replies from: evelynciara, CarlaZoeC
comment by BrownHairedEevee (evelynciara) · 2021-12-29T06:26:40.220Z · EA(p) · GW(p)

That was my reading of VWH too - as a pro tanto argument for extreme surveillance and centralized global governance, provided that the VWH is true. However, many of its proponents seem to believe that the VWH is likely to be true. I do agree that the authors ought to have interpreted the paper more carefully, though.

comment by CarlaZoeC · 2021-12-28T22:59:54.561Z · EA(p) · GW(p)
  • It doesn't matter whether Nick Bostrom merely speculates about or actually wants to implement global surveillance. With respect to what we discuss (the justification of extreme actions), what matters is how readers perceive his work and who those readers are.
  • There’s some hedging in the article but…
  • He published in a policy journal, with an opening ‘policy implication’ box
  • He published an outreach article about it in Aeon, which also ends with the sentence: ”If you find yourself in a position to influence the macroparameters of preventive policing or global governance, you should consider that fundamental changes in those domains might be the only way to stabilise our civilisation against emerging technological vulnerabilities.”
  • In public facing interviews such as with Sam Harris and on TED, the idea of ‘turnkey totalitarianism’ was made the centrepiece. This was not framed as one hypothetical, possible, future solution for a philosophical thought experiment.
  • The VWH was also published as a German book (why I don’t know…)
Replies from: Halstead, jtm
comment by John G. Halstead (Halstead) · 2021-12-28T23:16:03.200Z · EA(p) · GW(p)

It still seems like you have mischaracterised his view. You say "Take for example Bostrom’s “Vulnerable World Hypothesis”17, which argues for the need for extreme, ubiquitous surveillance and policing systems to mitigate existential threats, and which would run the risk of being co-opted by an authoritarian state." This is misleading imo. Wouldn't it have been better to note the clearly important hedging and nuance and then say that he is insufficiently cognisant of the risks of his solutions (which he discusses at length)?

comment by jtm · 2021-12-29T20:44:32.951Z · EA(p) · GW(p)

Thanks for laying this out so clearly. One frustrating aspect of having a community composed of so many analytic philosophy students (myself included!) is a common insistence on interpreting statements, including highly troubling ones, exactly as they may have been intended by the author, to the exclusion of anything further that readers might add, such as historical context or ways that the statement could be misunderstood or exploited for ill purposes. Another example of this is the discussion around [EA(p) · GW(p)] Beckstead's (in my opinion, deeply objectionable) quote regarding the (hypothetical, ceteris-paribus, etc., to be clear) relative value of saving rich versus poor lives.[1]

I do understand the value of hypothetical inquiry as part of analytic philosophy and appreciate its contributions to the study of morality and decision-making. However, for a community that is so intensely engaged in affecting the real world, it often feels like a frustrating motte-and-bailey, where the bailey is the effort to influence policy and philanthropy on the direct basis of philosophical writings, and the motte is the insistence that those writings are merely hypothetical.

In my opinion, it's insufficient to note that an author intends for some claim to be "hypothetical" or "abstract" or "pro tanto" or "all other things equal", if the claim is likely to be received or applied in the way it was literally written. E.g., proposals for ubiquitous surveillance cannot be dismissed as merely hypothetical if there's an appreciable chance that some readers come away even slightly more supportive of, or open to, the idea of ubiquitous surveillance in practice.

To be clear, I'm not saying that the community shouldn't  conduct or rely on the kind of hypothetical-driven philosophy exemplified in Bostrom's VWH or in Beckstead's dissertation. But I do think it's important, then, to either i) make it clear that a piece of writing is intended as analytic philosophy that generally should be applied with extreme care to the real world or ii) to do a much better job at incorporating historical context and taking potential misinterpretations and misapplications extremely seriously. 

For VWH, Option i) could look like replacing the journal with one for analytic philosophy and replacing the Policy Implications box with a note clarifying that this is a work of philosophy, not policy analysis. Option ii) could involve an even more extensive discussion of downside risks – I genuinely don't think that the 6 sentences quoted above on how "unprecedentedly intense forms of surveillance, or a global governance institution capable of imposing its will on any nation, could also have bad consequences" constitute anywhere near the appropriate effort to manage the downside risks associated with a policy article on ubiquitous global surveillance. Specifically, that effort would require engaging with the real-world history of totalitarian surveillance and its consequences; outlining in more detail how the surveillance system could go wrong or even itself pose an existential risk; and warning in much more unequivocal terms about the danger of misunderstanding or misapplying this proposal.

For Beckstead's dissertation quote, Option i) is, to be fair, already somewhat in play, given that the quote is from a dissertation in analytic philosophy and there's a good amount of ceteris-paribus-hypothetical-pro-tanto-etc. caveating, though the specific passage could maybe do with a bit more. Option ii) could involve embedding the quote in the context of both historical and modern philanthropy, particularly through a postcolonial lens; also discussing hypothetical counterexamples of when the opposite conclusion might hold; and cautioning in the strongest possible terms against specific misunderstandings or misapplications of the principle. Nowadays – though arguably less so in 2013, when it was written – it could also involve a discussion of how the principle under discussion relates to the actual reallocation of funds that could plausibly have been used for global health towards improving the comfort of affluent individuals in the Global North, such as myself. I understand that this is a philosophy dissertation, so the above might not be easy to include – but then we face the difficult challenge of relying heavily on ahistorical, non-empirical philosophy as guidance for a very real, very practical movement.

The bottom line is that certain seminal texts in effective altruism should either be treated as works of analytic philosophy with its afforded bold, and even troubling, speculation, or as policy-guiding with its requirement for extreme caution in the presence of downside risks; they can't be both at once.

___

[1] For context, here's the quote in question, from Beckstead's dissertation, On the overwhelming importance of shaping the far future:

"Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards—at least by ordinary enlightened humanitarian standards—saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal."

comment by Larks · 2021-12-28T19:12:16.827Z · EA(p) · GW(p)

The linked article seems to overstate the extent to which EAs support totalitarian policies. While it is true that EAs are generally left-wing and have more frequently proposed increases in the size & scope of government than reductions, Bostrom did commission an entire chapter of his book on the dangers of a global totalitarian government from one of the world's leading libertarian/anarchists, and longtermists have also often been supportive of things that tend to reduce central control, like charter cities, cryptocurrency and decentralised pandemic control.

Indeed, I find it hard to square the article's support for ending technological development with its opposition to global governance. Given the social, economic and military advantages that technological advancement brings, it seems hard to believe that the US, China, Russia etc. would all forgo scientific development, absent global coordination/governance. It is precisely people's skepticism about global government that makes them treat AI progress as inevitable, and hence seek other solutions. 

Replies from: anonea2021
comment by anonea2021 · 2021-12-28T19:59:46.781Z · EA(p) · GW(p)

I think it's important to distinguish "anarcho"-capitalist thought (which still needs a state to enforce private property and capital rights and generally doesn't acknowledge the problems of monopolies, existing power imbalances etc.) and actual anarchist/anti-totalitarian policies.

 

All the things you mentioned except the last

(...) reduce central control, like charter cities, cryptocurrency and decentralised pandemic control

decentralise control from a democratic state to a moneyed elite, not to a more democratic state, confederation, anarchist commune or whatever.

Replies from: willbradshaw
comment by Will Bradshaw (willbradshaw) · 2021-12-29T07:55:25.647Z · EA(p) · GW(p)

I really dislike it when left-anarchist-leaning folks put scare quotes around "anarcho" in anarcho-capitalist. In my experience it's a strong indicator that someone isn't arguing in good faith.

I'm not an ancap (or a left-anarchist), but David Friedman and his ilk are very clearly trying to formulate a way for a capitalist society to exist without a state. You might think their plans are unfeasible or undesirable (and I do), but that doesn't mean they're secretly not anarchists.

Replies from: anonea2021
comment by anonea2021 · 2021-12-29T15:05:47.705Z · EA(p) · GW(p)

From the perspective of every other lineage of anarchists, private property is one of the things that enforces unjust hierarchies. Using that label is like calling yourself "vegano-carnivore" because you want to reduce the suffering of eating animals as much as possible while still eating them. Even if you can come up with a justification for it by presenting clearly realizable ways to implement it (e.g. lab-grown meat), it adopts a label from a community that does not want them to do so. Indeed, there was already a ready-made label, "laissez-faire", but that one has sufficiently negative historical associations that I guess it is to be avoided.

Regarding Friedman, I would challenge the statement that he provides ways to organize society without a state, given that he romanticizes medieval Iceland and the western frontier. I am highly skeptical that the law enforcement/military apparatus required to enforce capital rights would not lead again to the tyranny of robber barons in their company towns, but I would have to revisit his work for a detailed rebuttal.

Replies from: willbradshaw
comment by Will Bradshaw (willbradshaw) · 2021-12-29T19:40:35.788Z · EA(p) · GW(p)

I don't think most people outside left-anarchism would equate "state" with the existence of any unjust hierarchies. Indeed, defining a state in that way seems to be begging the question with regard to anarchy's desirability and feasibility.

Whether or not Friedman provides ways to organise society without a state, he is clearly trying to do so, at least by any definition of "state" that a non-(left-anarchist) would recognise (e.g. an entity with a monopoly on legitimate violence).

Replies from: Michael Große
comment by Michael Große · 2022-01-09T23:00:05.789Z · EA(p) · GW(p)

I don't think most people outside left-anarchism would equate "state" with the existence of any unjust hierarchies. Indeed, defining a state in that way seems to be begging the question with regard to anarchy's desirability and feasibility.

 

I don't see where anonea2021 has made that claim. Did you mean to write "property" instead of "state" in this paragraph? (genuine question)
Either way, I'm having trouble following what you want to say with this paragraph.

What anonea2021 states:

From the perspective of every other lineage of anarchists, private property is one of the things that enforces unjust hierarchies.

I can confirm that this is indeed the view of every other lineage of anarchists that I'm aware of.
The anarchist's goal is to minimize unjust hierarchies. And given that private property (esp. of the means of production) is seen as one of the main causes of unjust hierarchies in today's world, it is plausible that a movement that tries to create a society which structures itself completely along the lines of private property is seen as utterly missing the point of anarchism. Thus "anarcho-"capitalism.

Replies from: willbradshaw
comment by Will Bradshaw (willbradshaw) · 2022-01-11T09:06:01.681Z · EA(p) · GW(p)

I don't see where anonea2021 has made that claim. Did you mean to write "property" instead of "state" in this paragraph? (genuine question) Either way, I'm having trouble following what you want to say with this paragraph.

Yes, it seems like there's some crossed wires here.

I claimed that ancaps are "clearly trying to formulate a way for a capitalist society to exist without a state". The intended implicature was that since anarchy = the absence of a state (according to common understanding, the dictionary definition, and etymology) it was therefore proper to call them anarchists.

anonea2021 responded with "From the perspective of every other lineage of anarchists, private property is one of the things that enforces unjust hierarchies." I was confused about this, since it didn't seem like a direct response to my claims. I wasn't sure whether to read it as (a) a claim that unjust hierarchies = a state (which seemed like a bad definition of "state"), or (b) a claim that anarchism wasn't actually about the absence of a state but instead about abolishing unjust hierarchies in general (which seemed like a bad, question-begging definition of "anarchism", given that ~everyone wants to minimise unjust hierarchies).

I tried to respond to the superposition of these two interpretations, which probably led to my phrasing being more confusing than it needed to be. 

I can confirm that this is indeed the view of every other lineage of anarchists that I'm aware of. The anarchist's goal is to minimize unjust hierarchies. And given that private property (esp. of the means of production) is seen as one of the main causes of unjust hierarchies in today's world, it is plausible that a movement that tries to create a society which structures itself completely along the lines of private property is seen as utterly missing the point of anarchism. Thus "anarcho-"capitalism.

As before, this begs the question. Everyone wants to minimise unjust hierarchies, so that's not a useful description of anarchism. People who disagree about which hierarchies are unjust, what interventions are effective for reducing them, and what the costs of those interventions are, will end up advocating for radically different systems of government. Some of those will end up advocating for a society without a state, and it's useful to refer to that subset of positions as "anarchist" even if they are very different from each other.

Anarcho-capitalism is really quite different from other forms of capitalist social organisation, and its distinctive feature is the absence of a coercive state. "Anarcho-capitalism" is thus a completely appropriate name for it – indeed, it's hard to see what other name would fit better. Also, it's what they call themselves, and we should heavily lean towards using people's own self-labels.

It's fine to just say "anarcho-capitalism is radically different from other forms of anarchism, and anarchists on the left will typically deeply disagree with its tenets". That much is clear. Putting scare-quotes around "anarcho" is bad for the discourse in multiple ways.

comment by AdamGleave · 2021-12-30T00:48:20.978Z · EA(p) · GW(p)

First of all, I'm sorry to hear you found the paper so emotionally draining. Having rigorous debate on foundational issues in EA is clearly of the utmost importance. For what it's worth when I'm making grant recommendations I'd view criticizing orthodoxy (in EA or other fields) as a strong positive so long as it's well argued. While I do not wholly agree with your paper, it's clearly an important contribution, and has made me question a few implicit assumptions I was carrying around.

The most important updates I got from the paper:

  1. Put less weight on technological determinism. In particular, defining existential risk in terms of a society reaching "technological maturity" without falling prey to some catastrophe frames technological development as being largely inevitable. But I'd argue even under the "techno-utopian" view, many technological developments are not needed for "technological maturity", or at least not for a very long time. While I still tend to view development of things like advanced AI systems as hard to stop (lots of economic pressures, geographically dispersed R&D, no expert consensus on whether it's good to slow down/accelerate), I'd certainly like to see more research into how we can affect the development of new technologies, beyond just differential technological advancement.
  2. "Existential risk" is ambiguous, so hard to study formally, we might want to replace it with more precise terms like "extinction risk" that are down-stream of some visions of existential risk. I'm not sure how decision relevant this ends up being, I think disagreement about how the world will unfold explains more of the disagreement on x-risk probabilities than definitions of x-risk, but it does seem worth trying to pin down more precisely.
  3. "Direct" vs "indirect" x-risk is a crude categorization, as most hazards lead to risks via a variety of pathways. Taking AI: there are some very "direct" risks such as a singleton AI developing some superweapon, but also some more "indirect" risks such as an economy of automated systems gradually losing alignment with collective humanity.

My main critiques:

  1. I expect a fairly broad range of worldviews end up with similar conclusions to the "techno-utopian approach" (TUA). The key beliefs seem to be that: (a) substantially more value is present in the future than exists today; (b) we have a moral obligation to safeguard that. The TUA is a very strong version of this, where there is many orders of magnitude more value in the future (transhumanism, total utilitarianism) and moral obligation is equal in the future and present (strong longtermism). But a non-transhumanist who wants 8 billion non-modified, biological humans to continue happily living on Earth for the next 100,000 years and values future generations at 1% of current generations would for many practical purposes make the same decisions.
  2. I frequently found myself unsure if there was actually a concrete disagreement between your views and those in the x-risk community, including those you criticize, beyond a choice of framing and emphasis. I understand it can be hard to nail down a disagreement, but this did leave me a little unsatisfied. For example, I'm still unsure what it really means to "democratise research and decision-making in existential risk" (page 26). I think almost all x-risk researchers would welcome more researchers from complementary academic disciplines or philosophical bents, and conversely I expect you would not suggest that random citizen juries should start actively participating in research. One concrete question I had is what axes you'd be most excited for the x-risk research field to become more diverse on at the margin: academic discipline, age, country, ethnicity, gender, religion, philosophical views, ...?
  3. Related to the above, it frequently felt like the paper was arguing against uncharitable versions of someone else's views -- VWH is an example others have brought up. On reflection, I think there is value to this, as many people may be holding those versions of the person's views even if the individual themselves had a more nuanced perspective. But it did often make me react "but I subscribe to <view X> and don't believe <supposed consequence Y>"! One angle you could consider taking in future work is to start by explaining your most core disagreements with a particular view, and then go on to elaborate on problems with commonly held adjacent positions.

I'd also suggest that strong longtermism is a meaningfully different assumption to e.g. transhumanism and total utilitarianism. In particular, the case for existential or extinction risk research seems many orders of magnitude weaker under a near-termist than strong longtermist worldview. Provided you think strong longtermism is at least credible, it seems reasonable to assume it when doing x-risk research, even though you should discount the impact of such interventions based on your credence in longtermism when making a final decision on where to allocate resources. If there is a risk that seems very likely to occur (e.g. AI, bio) such that it is plausible on both near-termist and longtermist grounds then perhaps it makes sense to drop this assumption, but even then I suspect it is often easier to just run two different analyses, given the different outcome metrics of concern (e.g. % x-risk averted vs QALYs saved).

Replies from: anonymousEA
comment by anonymousEA · 2021-12-30T02:20:33.887Z · EA(p) · GW(p)

"Direct" vs "indirect" x-risk is a crude categorization, as most risks will cause hazards via a variety of pathways.

I think you switched the two by accident

Otherwise an excellent comment even if I disagree with most of it, have an updoot

Replies from: AdamGleave
comment by AdamGleave · 2021-12-30T20:20:10.699Z · EA(p) · GW(p)

Thanks, fixed.

comment by Jaime Sevilla (Jsevillamol) · 2021-12-28T16:49:46.147Z · EA(p) · GW(p)

Thank you for writing and sharing. I think I agree with most of the core claims in the paper, even if I disagree with the framing and some minor details.

One thing must be said: I am sorry you seem to have had a bad experience while writing criticism of the field. I agree that this is worrisome, and makes me more skeptical of the apparent matters of consensus in the community. I do think in this community we can and must do better to vet our own views.

Some highlights:

I am big on the proposal to have more scientific inquiry. Most of the work today on existential risk is married to a particular school of ethics, and I agree it need not be.

On the science side I would be enthusiastic about seeing more work on e.g. infection models for catastrophic biorisks, macroeconomic analysis of ways artificial intelligence might affect society, and expansions of IPCC models that include permafrost methane release feedback loops.

On the humanities side I would want to see for example more work on historical, psychological and anthropological evidence for long term effects and successes and failures in managing global problems like the hole in the ozone layer. I would also like to see surveys of how existential risks are perceived, and what the citizens of the world believe we ought to do about them.

 

An easier initial step here may be to specify dystopias that most value theories would say we should avoid, rather 

I like this practical approach. I think this is probably enough to pin down bad outcomes we can use to guide policy, and I would be enthusiastic about seeing more perspectives in the field.

 

Furthermore, EV and decision theories more widely are affected by Pascal’s Mugging as well as what has been called fanaticism. We know of no pragmatic and consistent response to those challenges yet.

I do not either. One key point the paper brings forward, and one of my takeaway lessons, is that the study of existential risks and longtermism ought to focus on issues which are "ethically robust" - that is, that are plausible priorities under many worldviews.

Now, I do believe that most of the current focus areas of these fields (including AI risk, biorisk, nuclear war, climate change and others) pass this litmus test. But it is something to be mindful of when engaging with work in the area. For example, I believe that arguments in favour of destroying the Earth to prevent S-risks would currently fail this test.

 

Do those who study the future of humanity have good grounds to ignore the visions, desires, and values of the very people whose future they are trying to protect? Choosing which risks to take must be a democratic endeavour.

I do broadly agree. We need to ensure that longtermist policy making, with its consequences, is properly explained to the electorate, and that they are empowered to collectively decide which risks to take and which ones to ignore.

EA needs to diversify funding sources by breaking up big funding bodies and by reducing each orgs’ reliance on EA funding and tech billionaire funding, it needs to produce academically credible work, set up whistle-blower protection, actively fund critical work, allow for bottom-up control over how funding is distributed, diversify academic fields represented in EA, make the leaders forum and funding decisions transparent, stop glorifying individual thought-leaders, stop classifying everything as info hazards…amongst other structural changes. 

These suggestions seem reasonable to me. I've bolded the ones that currently seem most actionable.


There are some issues with this paper which made me a bit uneasy.

I am going to highlight some examples.

 

First, the focus on the "techno-utopian approach", space expansionism, total utilitarianism, etc. seems undue. I do agree that most authors nowadays seem to endorse versions of this cluster of beliefs. This is no doubt a symptom of a lack of diversity. Yet for the most part I think that modern work on longtermism does not rely on these assumptions.

You have identified, as a key example of a consequence of the TUA, that proposals to stop AI development are scarce, while e.g. there are some proposals to stop biorisks. For what it's worth, I have regularly seen proposals in the community to stop and regulate AI development.

I think making a case against this framing is something that would take a lot of energy to ensure I was correctly representing the authors, so I am afraid I will drop this thread. I think Avital's response to Torres [EA · GW] captures a big part of what I would bring up.

 

Second, it hurts my soul that you make a (good) case against unduly giving importance to info hazards, yet criticize Bostrom for talking about pro tanto reasons in favour of totalitarianism (which, to be clear, I am, all things considered, against. But that should not prevent us from discussing pro tanto reasons).

 

Third, I think the paper correctly argues that some foundational terms in the field ("existential risk", "existential hazard", "extinction risk", etc.) are insufficiently defined. Indeed, it was in fact hard to follow this very article because of the confused vocabulary the field has come to use. However, I am unconvinced that pinning down the terminology is critical to solving the key problems in the field. I expect others will disagree with this assessment.


Again, thank you for writing the paper and for engaging. Making criticisms of EA easier and more visible is vital for the health of the community. I was pleased to see that your previous criticism [EA · GW] was well received, at least in terms of forum upvotes, and I hope this piece will be similarly impactful and change trends in Effective Altruism for the better.

My biggest takeaways from this paper:

  1. We need to work towards an "ethically robust" field of existential risk and longtermism. That is, one that focuses on avoiding dystopias according to most commonly held worldviews.
  2. The current state of affairs is that we are nowhere near having enough cognitive diversity in the field to cover all mainstream perspectives. This is exacerbated by the lack of feedback loops between the world at large and scholars working on existential risk.
Replies from: jchen1
comment by jchen1 · 2022-02-12T03:50:22.212Z · EA(p) · GW(p)

"I have regularly seen proposals in the community to stop and regulate AI development" - Are there any public ones you can signpost to or are these all private proposals?

comment by John G. Halstead (Halstead) · 2021-12-29T16:40:12.578Z · EA(p) · GW(p)

The section on expected value theory seemed unfairly unsympathetic to TUA proponents 

  • The question of what we should do with Pascal's mugging-type situations just seems like a really hard, under-researched problem where there are not yet any satisfying solutions.
  • EA research institutes like GPI have put a hugely disproportionate amount of research into this question, relative to the field of decision theorists. Proponents of the TUA, like Bostrom, were the first to highlight these problems in the academic literature.
  •  Alternatives to expected value have received far less attention in the literature and also have many problems
  • eg The solution you propose of having some probability threshold below which we can ignore more speculative risks also has many issues. For instance, this would seem to invalidate many arguments for the rationality of voting or for political advocacy, such as canvassing for Corbyn or Sanders: the expected value of such activities is high even though the probability of making a difference is often very low (eg <1 in 10 million in most US states). Advocating for degrowth also seems extremely unlikely to succeed given the aims of governments across the world and the preferences of ordinary voters.

So, I think framing it as "here is this gaping hole in this worldview" is a bit unfair. Proponents of TUA pointed out the hole and are the main people trying to resolve the problem, and any alternatives also seem to have dire problems.
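To make the disagreement concrete, here is a toy sketch (all numbers invented purely for illustration) contrasting plain expected value with a probability-threshold rule of the kind the paper proposes. Under expected value, a tiny chance of a huge payoff can still dominate; under a 1-in-a-million cutoff, the same action gets dismissed outright:

```python
# Toy comparison (invented numbers): expected value vs. a
# probability-threshold rule for a low-probability, high-payoff action.

def expected_value(p_success, payoff, cost):
    """Plain expected-value calculation: probability-weighted payoff minus cost."""
    return p_success * payoff - cost

def thresholded_value(p_success, payoff, cost, threshold):
    """Ignore any action whose success probability falls below the threshold."""
    if p_success < threshold:
        return -cost  # the action is dismissed as too speculative; you only pay the cost
    return expected_value(p_success, payoff, cost)

# Voting in a US state: roughly a 1-in-10-million chance of being decisive,
# with a (hypothetical) very large social payoff if the vote is decisive.
p, payoff, cost = 1e-7, 1e9, 1.0

print(expected_value(p, payoff, cost))           # 99.0 -> worth doing under EV
print(thresholded_value(p, payoff, cost, 1e-6))  # -1.0 -> dismissed under the cutoff
```

The point of contention is visible in the two outputs: the threshold rule avoids Pascal's-mugging-style conclusions, but at the price of also discarding actions (like voting) that expected-value reasoning endorses.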

Replies from: jackmalde, anonymousEA
comment by Jack Malde (jackmalde) · 2022-01-01T08:44:26.165Z · EA(p) · GW(p)

e.g. The solution you propose of having some probability threshold below which we can ignore more speculative risks also has many issues. For instance, this would seem to invalidate many arguments for the rationality of voting or for political advocacy, such as canvassing for Corbyn or Sanders: the expected value of such activities is high even though the probability of success is often very low (e.g. <1 in 10 million in most US states). Advocating for degrowth also seems extremely unlikely to succeed given the aims of governments across the world and the preferences of ordinary voters.

You seem to assume that voting / engaging in political advocacy are obviously important things to do, and that any argument that says not to bother doing them falls prey to a reductio ad absurdum, but it's not clear to me why you think that.

If all of these actions do in fact have incredibly low probability of positive payoff such that one feels they are in a Pascal's Mugging when doing them, then one might rationally decide not to do them.

Or perhaps you are imagining a world in which loads of people stop voting such that democracy falls apart. At some point in this world though I'd imagine voting would stop being a Pascal's Mugging action and would be associated with a reasonably high probability of having a positive payoff.

Replies from: Patrick
comment by Patrick · 2022-01-02T01:27:04.364Z · EA(p) · GW(p)

One reason it might be a reductio ad absurdum is that it suggests that in an election in which supporters of one side were rational (and thus would not vote, since each of their votes would have a minuscule chance of mattering) and the others irrational (and would vote, undeterred by the small chance of their vote mattering), the irrational side would prevail.

If this is the claim that John G. Halstead is referring to, I regard it as a throwaway remark (it's only one sentence plus a citation):

For instance, a simple threshold or plausibility assessment could protect the field’s resources and attention from being directed towards highly improbable or fictional events.

comment by anonymousEA · 2021-12-29T19:06:13.971Z · EA(p) · GW(p)

Which alternatives to EV have what problems, for what uses, in what contexts?

Why do those problems make them worse than EV, a tool that requires the use of numerical probabilities for poorly-defined events often with no precedent or useful data?

What makes all alternatives to EV less preferable to the way in which EV is usually used in existential risk scholarship today, where subjectively-generated probabilities are asserted by "thought leaders" with no methodology and no justification, about events that are not rigorously defined nor separable, which are then fed into idealized economic models, policy documents, and press packs?

Replies from: willbradshaw
comment by Will Bradshaw (willbradshaw) · 2021-12-29T20:22:03.048Z · EA(p) · GW(p)

Why is writing a sequence of snarky rhetorical questions preferable to just making counter-arguments?

Replies from: anonymousEA
comment by anonymousEA · 2021-12-29T20:35:55.055Z · EA(p) · GW(p)

The argument is too vague to counter: how do you disprove claims about unspecified problems with unspecified tools in unspecified contexts?

There is no snark in this comment, I am simply stating my views as clearly and unambiguously as possible.

I'd like to add that as someone whose social circle includes both EAs and non-EAs, I have never witnessed reactions as defensive and fragile as those made by some EAs in response to criticism of orthodox EA views. This kind of behaviour simply isn't normal.

Replies from: willbradshaw
comment by Will Bradshaw (willbradshaw) · 2021-12-29T21:37:23.512Z · EA(p) · GW(p)

The argument is too vague to counter: how do you disprove claims about unspecified problems with unspecified tools in unspecified contexts?

Halstead gives one alternative (thresholding) and names some specific problems with it. A productive response that considered this inadequate might have named some others.

I'd like to add that as someone whose social circle includes both EAs and non-EAs, I have never witnessed reactions as defensive and fragile as those made by some EAs in response to criticism of orthodox EA views. This kind of behaviour simply isn't normal.

The original post here is substantially upvoted, as are most posts criticising EA in these general terms. There are comments both supportive and critical of the piece that have received substantial upvotes. The fact that your comments here are being downvoted says more about your approach to commenting than about EAs' receptiveness to criticism.

comment by weeatquince · 2021-12-29T20:48:20.521Z · EA(p) · GW(p)

Everything written in the post above strongly resonates with my own experiences, in particular the following lines:

the creation of this paper has not signalled epistemic health. It has been the most emotionally draining paper we have ever written.

the burden of proof placed on our claims was unbelievably high in comparison to papers which were considered less “political” or simply closer to orthodox views.

The EA community prides itself on being able to invite and process criticism. However, warm welcome of criticism was certainly not our experience in writing this paper.

I think criticism of EA orthodoxy is routinely dismissed. I would like to share a few more stories of being publicly critical of EA in the hope that doing so adds some useful evidence to the discussion:

  • Consider systemic change. "Some critics of effective altruism allege that its proponents have failed to engage with systemic change" (source [? · GW]). I have always found the responses (e.g. here and here [EA · GW]) to this critique to be dismissive and to miss the point [EA · GW]. Why can we not just say: yes, we are a new community, this area feels difficult, and we are not there yet? Why do we have to pretend EA is perfect and does systemic change stuff well?
  • My own experience (risk planning). I have some relevant expertise from engaging with professional risk managers, military personnel, counterterrorism staff and so on. I have really, really struggled to communicate any of this to EA folk, especially where it suggests that EAs are not thinking about risks well. I tend to find I get downvoted [EA(p) · GW(p)] or told I am strawmanning EA [EA · GW]. If I want to avoid this, it is possible [EA · GW], but only if I put in huge amounts of time and mental energy.
  • Mental health. Consider that Michael Plant [EA · GW] has, for 6 years now, been making the case that GiveWell and other neartermist EAs don't put enough weight on mental health. I believe his experience has mostly been one of people dismissing him rather than engaging with him.
  • Other. A few years back I remember being unimpressed [EA · GW] that EAs' response to Iason Gabriel's critique was largely to argue back and ignore it. There was no effort to see whether any of the criticisms contained useful grains of truth that could help us improve EA.

Other evidence to note is that the top things EAs think other EAs get wrong (source [EA · GW]) are "Reinventing the wheel" and "Overconfidence and misplaced deference", and many EAs worry that EA is intellectually stale / stagnant (in answers to this question [EA · GW]). On the other hand, many EA orgs are very good at recognising the mistakes they have made (e.g. with 'Our mistakes' pages), which is a great cultural practice that we as a community should be proud of.

 

I think we should also recognise that both Carla and Luke have full-time EA research jobs and still found this time-consuming. For someone without a full-time position, it can become almost impossibly time-consuming and draining to do a half-decent job. This essentially closes off a lot of people from critiquing EA.

 

If there was one change I would make, it would be a cultural shift: if someone posts something critical, we try to steelman it rather than dismiss it. (Here is an example [EA(p) · GW(p)] of steelmanning some of Phil Torres' arguments [edit: although we should of course not knowingly steelman/endorse arguments made in bad faith].) We could also on occasion say "yes, we get this wrong and we still have much to learn" and not treat every critique as an attack.

 

Hope some extra views help.

Replies from: Halstead, freedomandutility, anonymousEA
comment by John G. Halstead (Halstead) · 2021-12-29T21:10:15.569Z · EA(p) · GW(p)

I do think there is a difference between this article and stuff from people like Torres, in terms of good faith.

Replies from: Pablo_Stafforini, weeatquince
comment by Pablo (Pablo_Stafforini) · 2021-12-29T21:41:27.312Z · EA(p) · GW(p)

I agree with this, and would add that the appropriate response to arguments made in bad faith is not to "steelman" them (or to add them to a syllabus, or to keep disseminating [EA(p) · GW(p)] a cherry-picked quote from a doctoral dissertation), but to expose them for what they are or ignore them altogether. Intellectual dishonesty is the epistemic equivalent of defection in the cooperative enterprise of truth-seeking; to cooperate with defectors is not a sign of virtue, but quite the opposite.

Replies from: aarongertler, weeatquince
comment by Aaron Gertler (aarongertler) · 2021-12-30T21:06:58.937Z · EA(p) · GW(p)

I've seen "in bad faith" used in two ways:

  1. This person's argument is based on a lie.
  2. This person doesn't believe their own argument, but they aren't lying within the argument itself.

While it's obvious that we should point out lies where we see them, I think we should distinguish between (1) and (2). An argument's original promoter not believing it isn't a reason for no one to believe it, and shouldn't stop us from engaging with arguments that aren't obviously false.

(See this comment [EA(p) · GW(p)] for more.)

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2021-12-31T00:52:48.273Z · EA(p) · GW(p)

I agree that there is a relevant difference, and I appreciate your pointing it out. However, I also think that knowledge of the origins of a claim or an argument is sometimes relevant for deciding whether one should engage seriously with it, or engage with it at all, even if the person presenting it is not himself/herself acting in bad faith. For example, if I know that the oil or the tobacco industries funded studies seeking to show that global warming is not anthropogenic or that smoking doesn't cause cancer, I think it's reasonable to be skeptical  even if the claims or arguments contained in those studies are presented by a person unaffiliated with those industries. One reason is that the studies may consist of filtered evidence [? · GW]—that is, evidence selected to demonstrate a particular conclusion, rather than to find the truth. Another reason is that by treating arguments skeptically when they originate in a non-truth-seeking process, one disincentivizes that kind of intellectually dishonest and socially harmful behavior.

In the case at hand, I think what's going on is pretty clear. A person who became deeply hostile to longtermism (for reasons that look prima facie mostly unrelated to the intellectual merits of those views) diligently went through most of the longtermist literature fishing for claims that would, if presented in isolation to a popular audience using technically true but highly tendentious or misleading language and/or stripped of the relevant context, cause serious damage to the longtermist movement. In light of this, I think it is not only naive but epistemically unjustified to insist that this person's findings be assessed on their merits alone. (Again, consider what your attitude would be if the claims originated e.g. in an industry lobbyist.)

In addition, I think that it's inappropriate to publicize this person's writings, by including them in a syllabus or by reproducing their cherry-picked quotes. In the case of Nick Beckstead's quote, in particular, its reproduction seems especially egregious, because it helps promote an image of someone diametrically opposed to the truth: an early Giving What We Can Member who pledged to donate 50% of his income to global poverty charities for the rest of his life is presented—from a single paragraph excerpted from a 180-page doctoral dissertation intended to be read primarily by an audience of professional analytic philosophers—as "support[ing] white supremacist ideology". Furthermore, even if Nick was just an ordinary guy rather than having impeccable cosmopolitan credentials, I think it would be perfectly appropriate to write what he did in the context of a thesis advancing the argument that our moral judgments are less reliable than is generally assumed. More generally, and more importantly, I believe that as EAs we should be willing to question established beliefs related to the cost-effectiveness of any cause, even if this risks reaching very uncomfortable conclusions, as long as the questioning is done as part of a good-faith effort in cause-prioritization and subject to the usual caveats related to possible reputational damage or the spreading of information hazards. It scares me to think what our movement might become if it became an accepted norm that explorations of the sort exemplified by the quote can only be carried out "through a postcolonial lens".

Note: Although I generally oppose disclaimers, I will add one here. I've known Nick Beckstead for a decade or so. We interacted a bit back when he was working at FHI, though after he moved to Open Phil in 2014 we had no further communication, other than exchanging greetings when he visited the CEA office around 2016 and corresponding briefly in a professional capacity. I am also an FTX Fellow, and as I learned recently, Nick has been appointed CEO of the FTX Foundation. However, I made this same criticism ten months ago [EA(p) · GW(p)], way before I developed any ties to FTX (or had any expectations that I would develop such ties or that Nick was being considered for a senior position). Here's what I wrote back then:

I personally do not think it is appropriate to include an essay in a syllabus or engage with it in a forum post when (1) this essay characterizes the views it argues against using terms like 'white supremacy' and in a way that suggests (without explicitly asserting it, to retain plausible deniability) that their proponents—including eminently sensible and reasonable people such as Nick Beckstead and others— are white supremacists, and when (2) its author has shown repeatedly in previous publications, social media posts and other behavior that he is not writing in good faith and that he is unwilling to engage in honest discussion.

Replies from: aarongertler
comment by Aaron Gertler (aarongertler) · 2022-01-01T11:31:55.464Z · EA(p) · GW(p)

 One reason is that the studies may consist of filtered evidence [? · GW]—that is, evidence selected to demonstrate a particular conclusion, rather than to find the truth. Another reason is that by treating arguments skeptically when they originate in a non-truth-seeking process, one disincentivizes that kind of intellectually dishonest and socially harmful behavior.

The "incentives" point is reasonable, and it's part of the reason I'd want to deprioritize [EA(p) · GW(p)] checking into claims with dishonest origins. 

However, I'll note that establishing a rule like "we won't look at claims seriously if the person making them has a personal vendetta against us" could lead to people trying to argue against examining someone's claims by arguing that they have a personal vendetta, which gets weird and messy. ("This person told me they were sad after org X rejected their job application, so I'm not going to take their argument against org X's work very seriously.")

Of course, there are many levels to what a "personal vendetta" might entail, and there are real trade-offs to whatever policy you establish. But I'm wary of taking the most extreme approach in any direction ("let's just ignore Phil entirely").

As for filtered evidence — definitely a concern if you're trying to weigh the totality of evidence for or against something. But not necessarily relevant if there's one specific piece of evidence that would be damning if true. For example, if Phil had produced a verifiable email exchange showing an EA leader threatening to fire a subordinate for writing something critical of longtermism, it wouldn't matter much to me how much that leader had done to encourage criticism in public.

I think it is not only naive but epistemically unjustified to insist that this person's findings be assessed on their merits alone.

I agree with this to the extent that those findings allow for degrees of freedom — so I'll be very skeptical of conversations reported third-hand or cherry-picked quotes from papers, but still interested in leaked emails that seem like the genuine article.

In addition...

No major disagreements with anything past this point. I certainly wouldn't put Phil's white-supremacy work on a syllabus, though I could imagine excerpts of his criticism on other topics making it in — of the type "this point of view implies this objection" rather than "this point of view implies that the person holding it is a dangerous lunatic".

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2022-01-01T13:08:35.238Z · EA(p) · GW(p)

Thanks for the comments. They have helped me clarify my thoughts, though I feel I'm still somewhat confused.

However, I'll note that establishing a rule like "we won't look at claims seriously if the person making them has a personal vendetta against us" could lead to people trying to argue against examining someone's claims by arguing that they have a personal vendetta, which gets weird and messy. ("This person told me they were sad after org X rejected their job application, so I'm not going to take their argument against org X's work very seriously.")

Yes, I agree that this is a concern. I am reminded of an observation by Nick Bostrom:

consider the convention against the use of ad hominem arguments in science and many other arenas of disciplined discussion. The nominal justification for this rule is that the validity of a scientific claim is independent of the personal attributes of the person or the group who puts it forward. Construed as a narrow point about logic, this comment about ad hominem arguments is obviously correct. But it overlooks the epistemic significance of heuristics that rely on information about how something was said and by whom in order to evaluate the credibility of a statement. In reality, no scientist adopts or rejects scientific assertions solely on the basis of an independent examination of the primary evidence. Cumulative scientific progress is possible only because scientists take on trust statements made by other scientists—statements encountered in textbooks, journal articles, and informal conversations around the coffee machine. In deciding whether to trust such statements, an assessment has to be made of the reliability of the source. Clues about source reliability come in many forms—including information about factors, such as funding sources, peer esteem, academic affiliation, career incentives, and personal attributes, such as honesty, expertise, cognitive ability, and possible ideological biases. Taking that kind of information into account when evaluating the plausibility of a scientific hypothesis need involve no error of logic.

Why is it, then, that restrictions on the use of the ad hominem command such wide support? Why should arguments that highlight potentially relevant information be singled out for suspicion? I would suggest that this is because experience has demonstrated the potential for abuse. For reasons that may have to do with human psychology, discourses that tolerate the unrestricted use of ad hominem arguments manifest an enhanced tendency to degenerate into personal feuds in which the spirit of collaborative, reasoned inquiry is quickly extinguished. Ad hominem arguments bring out our inner Neanderthal.

So I recognize both that it is sometimes legitimate (and even required) to refuse to engage with arguments based on how they originated, and that a norm that licenses this behavior has significant abuse potential. I haven't thought about ways in which the norm could be refined, or about heuristics one could adopt to decide when to apply it. I'd like to see someone (Greg Lewis?) investigate this issue more.

As for filtered evidence — definitely a concern if you're trying to weigh the totality of evidence for or against something. But not necessarily relevant if there's one specific piece of evidence that would be damning if true.

I mostly agree. My sense is that we often misclassify as "specific piece[s] of evidence that would be damning if true" things that should be assessed as part of a much larger whole. E.g. it is sometimes relevant to consider the sheer number of things someone has said when deciding how outraged to be that this person said something seemingly outrageous.

comment by weeatquince · 2021-12-29T22:08:23.494Z · EA(p) · GW(p)

Agree with this.

comment by weeatquince · 2021-12-29T22:10:55.905Z · EA(p) · GW(p)

Yes I think that is fair.

At the time (before he wrote his public critique) I had not yet realised that Phil Torres was acting in bad faith.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2021-12-29T22:28:44.622Z · EA(p) · GW(p)

Just to clarify (since I now realize my comment was written in a way that may have suggested otherwise): I wasn't alluding to your attempt to steelman his criticism. I agree that at the time the evidence was much less clear, and that steelmanning probably made sense back then (though I don't recall the details well).

comment by freedomandutility · 2021-12-30T00:40:45.235Z · EA(p) · GW(p)

Strong upvote from me - you’ve articulated my main criticisms of EA.

I think it’s particularly surprising that EA still doesn’t pay much attention to mental health and happiness as a cause area, especially when we discuss pleasure and suffering all the time, Yew Kwang Ng focused so much on happiness, and Michael Plant has collaborated with Peter Singer.

Replies from: aarongertler
comment by Aaron Gertler (aarongertler) · 2021-12-30T09:37:23.548Z · EA(p) · GW(p)

In your view, what would it look like for EA to pay sufficient attention to mental health?

To me, it looks like there's a fair amount of engagement on this:

  • Peter Singer obviously cares about the issue, and he's a major force in EA by himself.
  • Michael Plant's last post [EA · GW] got a positive writeup in Future Perfect and serious engagement from a lot of people on the Forum and on Twitter (including Alexander Berger, who probably has more influence over neartermist EA funding than any other person); Alex was somewhat negative on the post, but at least he read it.
  • Forum posts with the "mental health" tag [? · GW] generally seem to be well-received.
  • Will MacAskill invited three very prominent figures to run an EA Forum AMA [EA · GW] on psychedelics as a promising mental health intervention.
  • Founders Pledge released a detailed cause area report on mental health, which makes me think that a lot of their members are trying to fund this area.
  • EA Global has featured several talks on mental health.

I can't easily find engagement with mental health from Open Phil or GiveWell, but this doesn't seem like an obvious sign of neglect, given the variety of other health interventions they haven't closely engaged with.

I'm limited here by my lack of knowledge w/r/t funding constraints for orgs like StrongMinds and the Happier Lives Institute. If either org were really funding-constrained, I'd consider them to be promising donation targets for people concerned about global health, but I also think that those people — if they look anywhere outside of GiveWell — have a good chance of finding these orgs, thanks to their strong presence on the Forum and in other EA spaces.

Replies from: MichaelPlant, weeatquince
comment by MichaelPlant · 2022-01-06T18:23:13.822Z · EA(p) · GW(p)

I've only just seen this and thought I should chime in. Before I describe my experience, I should note that I will respond to Luke’s specific concerns about subjective wellbeing separately in a reply to his comment.

TL;DR Although GiveWell (and Open Phil) have started to take an interest in subjective wellbeing and mental health in the last 12 months, I have felt considerable disappointment and frustration with their level of engagement over the previous six years.

I raised the "SWB and mental health might really matter" concerns in meetings with GiveWell staff about once a year since 2015. Before 2021, my experience was that they more or less dismissed my concerns, even though they didn't seem familiar with the relevant literature. When I asked what their specific doubts were, these were vague and seemed to change each time ("we're not sure you can measure feelings", "we're worried about experimenter demand effect", etc.). I'd typically point out their concerns had already been addressed in the literature, but that still didn't seem to make them more interested. (I don't recall anyone ever mentioning 'item response theory', which Luke raises as his objection.) In the end, I got the impression that GiveWell staff thought I was a crank and were hoping I would just go away.

GiveWell’s public engagement has been almost non-existent. When HLI published, in August 2020, a document explaining how GiveWell could (re)estimate their own ‘moral weights’ using SWB [EA · GW], GiveWell didn’t comment on this (a Founders Pledge researcher did, however, provide detailed comments [EA · GW]). The first and only time GiveWell has responded publicly about this was in December 2020, where they set out their concerns [EA · GW] in relation to our cash transfer vs therapy meta-analyses [EA · GW]; I’ve replied to those comments (many of which expressed quite non-specific doubts) but not yet received a follow-up.

The response I was hoping for - indeed, am still hoping for - was the one Will et al. gave above, namely, "We're really interested in serious critiques. What do you think we're getting wrong, why, and what difference would it make if you were right? Would you like us to fund you to work on this?" Obviously, you wouldn't expect an organisation to engage with critiques that are practically unimportant and from non-credible sources. In this case, however, I was raising fundamental concerns that, if true, could substantially alter the priorities, both for GiveWell and EA more broadly. And, for context, at the time I initially highlighted these points I was doing a philosophy PhD supervised by Hilary Greaves and Peter Singer and the measurement of wellbeing was a big part of my thesis.

There has been quite good engagement from other EAs and EAs orgs, as Aaron Gertler notes above. I can add to those that, for instance, Founders Pledge have taken SWB on board in their internal decision-making and have since made recommendations in mental health. However, GiveWell’s lack of engagement has really made things difficult because EAs defer so much to GiveWell: a common question I get is “ah, but what does GiveWell think?" People assume that, because GiveWell didn't take something seriously, that was strong evidence they shouldn't either. This frustration was compounded by the fact that because there isn’t a clear, public statement of what GiveWell’s concerns were, I could neither try to address their concerns nor placate the worries of others by saying something like “GiveWell’s objection is X. We don’t share that because of Y”.

This is pure speculation on my part, but I wonder if GiveWell (and perhaps Open Phil too) developed an 'ugh field' [LW · GW] around subjective wellbeing and mental health. They didn't look into it initially because they were just too damn busy. But then, after a while, it became awkward to start engaging with because that would require admitting they should have done so years ago, so they just ignored it. I also suspect there's been something of an information cascade where someone originally looked at all this (see my reply to Luke above), decided it wasn't interesting, and then other staff members just took that on trust and didn't revisit it - everyone knew an idea could be safely ignored even if they weren't sure why. 

Since 2021, however, things have been much better. In late 2020, as mentioned, HLI published a blog post [EA · GW] showing how SWB could be used to (re)estimate GiveWell's 'moral weights'. I understand that some of GiveWell's donors asked them for an opinion on this and that pushed them to engage with it. HLI had a productive conversation with GiveWell in February 2021 (see GiveWell's notes) where, curiously, no specific objections to SWB were raised. GiveWell are currently working on a blog post responding to our moral weights piece and they kindly shared a draft with us in July asking for our feedback. They’ve told us they plan to publish reports on SWB and psychotherapy in the next 3-6 months.

Regarding Open Phil, it seemed pointless to engage unless GiveWell came on board, because Open Phil also defer strongly to GiveWell's judgements, as Alex Berger has recently stated [EA · GW]. However, we recently had some positive engagement from Alex on Twitter, and a member of his team contacted HLI for advice after reading our report and recommendations on global mental health. Hence, we are now starting to see some serious engagement, but it’s rather overdue and still less fulsome than I’d want.

Replies from: MaxRa
comment by MaxRa · 2022-01-06T21:02:37.738Z · EA(p) · GW(p)

Really sad to hear about this, thanks for sharing. And thank you for keeping at it despite the frustrations. I think you and the team at HLI are doing good and important work.

comment by weeatquince · 2021-12-30T13:56:52.663Z · EA(p) · GW(p)

To me (as someone who has funded the Happier Lives Institute), I just think it should not have taken founding an institute and six years of repeating this message (while feeling largely ignored and dismissed by existing EA orgs) to reach the point we are at now.

I think expecting orgs and donors to change direction is certainly a very high bar. But then we should not pride ourselves on being a community that pivots and changes direction when new data (e.g. on subjective wellbeing) is made available to us.

Replies from: lukeprog, aarongertler
comment by lukeprog · 2022-01-01T22:27:49.162Z · EA(p) · GW(p)

FWIW, one of my first projects at Open Phil, starting in 2015, was to investigate subjective well-being interventions as a potential focus area. We never published a page on it, but we did publish some conversation notes. We didn't pursue it further because my initial findings were that there were major problems with the empirical literature, including weakly validated measures, unconvincing intervention studies, one entire literature using the wrong statistical test for decades, etc. I concluded that there might be cost-effective interventions in this space, perhaps especially after better measure validation studies and intervention studies are conducted, but my initial investigation suggested it would take a lot of work for us to get there, so I moved on to other topics.

At least for me, I don't think this is a case of an EA funder repeatedly ignoring work by e.g. Michael Plant — I think it's a case of me following the debate over the years and disagreeing on the substance after having familiarized myself with the literature.

That said, I still think some happiness interventions might be cost-effective upon further investigation, and I think our Global Health & Well-Being team has been looking into the topic again as that team has gained more research capacity in the past year or two.

Replies from: MichaelPlant, weeatquince
comment by MichaelPlant · 2022-01-06T23:14:06.936Z · EA(p) · GW(p)

Hello Luke, thanks for this, which was illuminating. I'll make an initial clarifying comment and then go on to the substantive issues of disagreement.

At least for me, I don't think this is a case of an EA funder repeatedly ignoring work by e.g. Michael Plant — I think it's a case of me following the debate over the years and disagreeing on the substance after having familiarized myself with the literature.

I'm not sure what you mean here. Are you saying GiveWell didn't repeatedly ignore the work? That Open Phil didn't? Something else? As I set out in another comment [EA(p) · GW(p)], my experience with GiveWell staff was of being ignored by people who weren't all that familiar with the relevant literature - FWIW, I don't recall the concerns you raise in your notes being raised with me. I've not had interactions with Open Phil staff prior to 2021 - for those reading, Luke and I have never spoken - so I'm not able to comment regarding that.

Onto the substantive issues. Would you be prepared to state more precisely what your concerns are, and what sort of evidence would change your mind? Reading your comments and your notes, I'm not sure exactly what your objections are and, insofar as I do, they don't seem like strong objections.

You mention "weakly validated measures" as an issue but in the text you say "for some scales, reliability and validity have been firmly established", which implies to me you think (some) scales are validated. So which scales are you worried about, to what extent, and why? Are they so non-validated we should think they contain no information? If some scales are validated, why not just use those ones? By analogy, we wouldn't give up on measuring temperature if we thought only some of our thermometers were broken. I'm not sure if we're even on the same page about what it is to 'validate' a measure of something (I can elaborate, if helpful).

On "unconvincing intervention studies", I take it you're referring to your conversation notes with Sonja Lyubomirsky. The 'happiness interventions' you talk about are really just those from the field of 'positive psychology' where, basically, you take mentally healthy people and try to get them to change their thoughts and behaviours to be happier, such as by writing down what they're grateful for. This implies a very narrow interpretation of 'happiness interventions'. Reducing poverty or curing diseases are 'happiness interventions' in my book because they increase happiness, but they are certainly not positive psychology interventions. One can coherently think that subjective wellbeing measures, eg self-reported happiness, are valid and capture something morally important but deny gratitude journalling etc. are particularly promising ways, in practice, of increasing it. Also, there's a big difference between the lab-style experiments psychologists run and the work economists tend to do looking at large panel and cohort data sets.

Regarding  "one entire literature using the wrong statistical test for decades", again, I'm not sure exactly what you mean. Is the point about 'item response theory'? I confess that's not something that gets discussed in the academic world of subjective wellbeing measurement - I don't think I've ever heard it mentioned. After a quick look, it seems to be a method to relate scores of psychometric tests to real-world performance. That seems to be a separate methodological ballgame from concerns about the relationship between how people feel and how they report those feelings on a numerical scale, e.g. when we ask "how happy are you, 0-10?". Subjective wellbeing researchers do talk about the issue of 'scale cardinality', ie, roughly, does your "7/10" feel the same to you as my "7/10" does to me? This issue has been starting to get quite a bit of attention in just the last couple of years but has, I concede, been rather neglected by the field. I've got a working paper on this under review which is (I think) the first comprehensive review of the problem.

To me, it looks like in your initial investigation you had the bad luck to run into a couple of dead ends and, quite understandably given those, didn't go further. But I hope you'll let me try to explain further to you why I think happiness research (like happiness itself) is worth taking seriously!

Replies from: lukeprog
comment by lukeprog · 2022-01-07T15:24:15.333Z · EA(p) · GW(p)

Hi Michael,

I don't have much time to engage on this, but here are some quick replies:

  • I don't know anything about your interactions with GiveWell. My comment about ignoring vs. not-ignoring arguments about happiness interventions was about me / Open Phil, since I looked into the literature in 2015 and have read various things by you since then. I wouldn't say I ignored those posts and arguments, I just had different views than you about likely cost-effectiveness etc.
  • On "weakly validated measures," I'm talking in part about lack of IRT validation studies for SWB measures used in adults (NIH funded such studies for SWB measures in kids but not adults, IIRC), but also about other things. The published conversation notes only discuss a small fraction of my findings/thoughts on the topic.
  • On "unconvincing intervention studies" I mean interventions from the SWB literature, e.g. gratitude journals and the like. Personally, I'm more optimistic about health and anti-poverty interventions for the purpose of improving happiness.
  • On "wrong statistical test," I'm referring to the section called "Older studies used inappropriate statistical methods" in the linked conversation notes with Joel Hektner.

TBC, I think happiness research is worth engaging with and has things to teach us, and I think there may be some cost-effective happiness interventions out there. As I said in my original comment, I moved on to other topics not because I think the field is hopeless, but because it was in a bad enough state that it didn't make sense for me to prioritize it at the time.

Replies from: MichaelPlant
comment by MichaelPlant · 2022-01-10T19:16:07.173Z · EA(p) · GW(p)

Hello Luke,

Thanks for this too. I appreciate you've since moved on to other things, so this isn't really your topic to engage on anymore. However, I'll make two comments.

First, you said you read various things in the area, including by me, since 2015. It would have been really helpful (to me) if, given you had different views, you had engaged at the time and set out where you disagreed and what sort of evidence would have changed your mind.

Second, and similarly, I would really appreciate it if the current team at Open Philanthropy could more precisely set out their perspective on all this.  I did have a few interactions with various Open Phil staff in 2021,  but I wouldn't say I've got anything like canonical answers on what their reservations are about 1. measuring outcomes in terms of SWB  - Alex Berger's recent technical update [EA · GW] didn't comment on this - and 2.  doing more research or grantmaking into the things that, from the SWB perspective, seem overlooked.

Replies from: david_reinstein
comment by david_reinstein · 2022-01-23T04:35:43.544Z · EA(p) · GW(p)

This is an interesting conversation. It's veering off into a separate topic. I wish there were a way to "rebase" these spin-off discussions into a different place, for better organisation.

comment by weeatquince · 2022-01-02T00:47:20.119Z · EA(p) · GW(p)

Thank you Luke – super helpful to hear!!

comment by Aaron Gertler (aarongertler) · 2022-01-01T12:24:39.072Z · EA(p) · GW(p)

Do you feel that existing data on subjective wellbeing is so compelling that it's an indictment on EA for GiveWell/OpenPhil not to have funded more work in that area? (Founders Pledge released their report in early 2019 and was presumably working on it much earlier, so they wouldn't seem to be blameworthy.)

I can't say much more here without knowing the details of how Michael/others' work was received when they presented it to funders. The situation I've outlined seems to be compatible both with "this work wasn't taken seriously enough" and "this work was taken seriously, but seen as a weaker thing to fund than the things that were actually funded" (which is, in turn, compatible with "funders were correct in their assessment" and "funders were incorrect in their assessment"). 

That Michael felt dismissed is moderate evidence for "not taken seriously enough". That his work (and other work like it) got a bunch of engagement on the Forum is weak evidence for "taken seriously" (what the Forum cares about =/= what funders care about, but the correlation isn't 0). I'm left feeling uncertain about this example, but it's certainly reasonable to argue that mental health and/or SWB hasn't gotten enough attention.

(Personally, I find the case for additional work on SWB more compelling than the case for additional work on mental health specifically, and I don't know the extent to which HLI was trying to get funding for one vs. the other.)

Replies from: weeatquince
comment by weeatquince · 2022-01-02T00:13:36.686Z · EA(p) · GW(p)

Do you feel that existing data on subjective wellbeing is so compelling that it's an indictment on EA for GiveWell/OpenPhil not to have funded more work in that area?

Tl;dr. Hard to judge. Maybe: Yes for GW. No for Open Phil. Mixed for EA community as a whole.

 

I think I will slightly dodge the question and answer a separate one: are these orgs doing enough exploratory research? (I think this is a more pertinent question; although subjective wellbeing is worth looking into as an example, it is not clear it is at the very top of the list of things to look into that might change how we think about doing good.)

Firstly, a massive caveat: I do not know for sure. It is hard to judge, and knowing exactly how seriously various orgs have looked into topics is very hard to do from the outside. So take the below with a pinch of salt. That said:

  • OpenPhil – AOK.
    • OpenPhil (neartermists) generally seem good at exploring new areas and experimenting (and as Luke highlights, did look into this).
  • GiveWell – hmmm could do better.
    • GiveWell seem to have a pattern of saying they will do more exploratory research (e.g. into policy) and then not doing it (mentioned here [EA · GW], I think 2020 has seen some but minimal progress).
    • I am genuinely surprised GiveWell have not found things better than anti-malaria and deworming (sure, there are limits on how effective scalable charities can be, but it seems odd that our first guesses are still the top recommendations).
    • There is limited catering to anyone who is not a classical utilitarian – for example if you care about wellbeing (e.g. years lived with disability) but not lives saved it is unclear where to give.
  • EA in general – so-so.
    • There has been interest from EAs (individuals, Charity Entrepreneurship, Founders Pledge, EAG) on the value of happiness and addressing mental health issues, etc.
    • It is not just Michael. I get the sense the folk working on Improving Institutional Decision Making (IIDM) have struggled to get traction and funding and support too. (Although maybe promoters of new causes areas within EA always feel their ideas are not taken seriously.)
    • The EA community (not just GiveWell) seems very bad at catering to folk who are not roughly classical (or negative leaning) utilitarians (a thing I struggled with when working as a community builder).
    • I do believe there is a lack of exploratory research happening given the potential benefits (see here [EA · GW] and here). Maybe Rethink are changing this.

Not sure I really answered the question. And anyway none of those points are very strong evidence as much as me trying to explain my current intuitions. But maybe I said something of interest.

comment by anonymousEA · 2021-12-29T21:00:26.512Z · EA(p) · GW(p)

We could also on occasion say "yes we get this wrong and we still have much to learn" and not treat every critique as an attack.

Strong upvote for this if nothing else.

(the rest is also brilliant though, thank you so much for speaking up!)

comment by John G. Halstead (Halstead) · 2021-12-28T17:13:26.266Z · EA(p) · GW(p)

A few thoughts on the democracy criticism. Don't a lot of the criticisms here apply to the IPCC? "A homogenous group of experts attempting to directly influence powerful decision-makers is not a fair or safe way of traversing the precipice." IPCC contributors are disproportionately white, very well-educated males in the West who are much more environmentalist than the global median voter, i.e. "unrepresentative of humanity at large and variably homogenous in respect to income, class, ideology, age, ethnicity, gender, nationality, religion, and professional background." So, would you propose replacing the IPCC with something like a citizen's assembly of people with no expertise in climate science or climate economics, that is representative wrt some of the demographic features you mention?

You say that decisions about which risks to take should be made democratically. The implication of this seems to be that everyone, and not just EAs, who is aiming to do good with their resources should donate only to their own government. Their govt could then decide how to spend the money democratically. Is that implication embraced? This would eg include all climate philanthropy, which is now at $5-9bn per year.

Replies from: Halstead, JamesOz, CarlaZoeC, Davidmanheim, JamesOz, anonea2021, MatthewDahlhausen, Guy Raveh
comment by John G. Halstead (Halstead) · 2021-12-28T18:13:58.819Z · EA(p) · GW(p)

You seem to assume that we should be especially suspicious of a view if it is not held by a majority of the global population. Over history, the views of the global majority seem to me to have been an extremely poor guide to accurate moral beliefs. For example, a few hundred years ago, most people had abhorrent views about  animals, women and people of other races. By the arguments here, do you think that people like Benjamin Lay, Bentham and Mill should not have advocated for change in these areas, including advocating for changes in policy? 

Replies from: Davidmanheim, anonea2021, JamesOz, Dr. David Mathers, jackva
comment by Davidmanheim · 2021-12-28T21:41:39.376Z · EA(p) · GW(p)

As I said in a different but related context earlier this week, "If a small, non-representative group disagrees with the majority of humans, we should wonder why, and given base rates and the outside view, worry about failure modes that have affected similar small groups in the past."

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-28T22:09:22.554Z · EA(p) · GW(p)

I do think we should worry about failure modes and being wrong. But I think the main reason to do that is that people are often wrong, they are bad at reasoning, and subject to a host of biases. The fact that we are in a minority of the global population is an extremely weak indicator of being wrong. The majority has been gravely wrong on many moral and empirical questions in the past and today. It's not at all clear whether the base rate of being wrong is higher for minority views than for majority views, and that question is extremely difficult to answer because there are lots of ways of slicing up the minority you are referring to.

Replies from: Owen_Cotton-Barratt, Davidmanheim
comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) · 2021-12-28T23:24:50.755Z · EA(p) · GW(p)

I feel like there's just a crazy number of minority views (in the limit a bunch of psychoses held by just one individual), most of which must be wrong. We're more likely to hear about minority views which later turn out to be correct, but it seems very implausible that the base rate of correctness is higher for minority views than majority views.

On the other hand I think there's some distinction to be drawn between "minority view disagrees with strongly held majority view" and "minority view concerns something that majority mostly ignores / doesn't have a view on".

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-29T14:19:10.111Z · EA(p) · GW(p)

that is a fair point. departures from global majority opinion still seems like a pretty weak 'fire alarm' for being wrong. Taking a position that is eg contrary to most experts on a topic would be a much greater warning sign. 

comment by Davidmanheim · 2021-12-29T07:42:09.386Z · EA(p) · GW(p)

I see how this could be misread. I'll reformulate the statement; 
"If our small, non-representative group comes to a conclusion, we should wonder, given base rates about correctness in general and the outside view, about which failure modes have affected similar small groups in the past, and consider if they apply, and how we might be wrong or misguided."

So yes, errors are common to all groups, and being a minority isn't an indicator of truth, which I mistakenly implied. But the way in which groups are wrong is influenced by group-level reasoning fallacies and biases, which are a product of both individual fallacies and characteristics of the group. That's why I think that investigating how previous similar groups failed seems like a particularly useful way to identify relevant failure modes.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-29T09:32:19.556Z · EA(p) · GW(p)

yes I agree with that. 

comment by anonea2021 · 2021-12-28T19:42:02.584Z · EA(p) · GW(p)

I think it's simplistic to reduce the critique to "minority opinion bad". At the very least, you need to reduce it to "minority opinion which happens to reinforce existing power relations and is mainly advocated by billionaires and those funded by them bad". Bentham argued for diminishing his own privilege over others, to give other people MORE choice, irrespective of their power and wealth and with no benefit to him. There is a difference imo

Replies from: Halstead, ESRogs
comment by John G. Halstead (Halstead) · 2021-12-28T22:05:49.901Z · EA(p) · GW(p)

My argument here is about whether we should be more suspicious of a view if it is held by the majority or the minority. Whether a view is true seems to me to depend mainly on its object-level quality, not on whether it is held by the majority - that is a very weak indicator, as the examples of slavery, women, racism, homosexuality etc illustrate.

I don't think your piece argues that TUA reinforces existing power relations. The main things that proponents of TUA have diverted resources to are: engineered pandemics, AI alignment, nuclear war and to a lesser extent climate change. How does any of this entrench existing power relations?

nitpick, but it is also not true that the view you criticise is mainly advocated for by billionaires. Obviously, a tiny minority of billionaires are longtermists and a tiny minority of longtermists are billionaires. 

Replies from: Guy Raveh
comment by Guy Raveh · 2021-12-29T08:43:35.058Z · EA(p) · GW(p)

The main things that proponents of TUA have diverted resources to are: engineered pandemics, AI alignment, nuclear war and to a lesser extent climate change. How does any of this entrench existing power relations?

This is moving money to mostly wealthy, Western organisations and researchers, that would've otherwise gone to the global poor. So the counterfactual impact is of entrenching wealth disparity.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-29T09:38:04.204Z · EA(p) · GW(p)

I think it is very unclear whether it is true that diverting money to these organisations would entrench wealth disparity. Examining the demographics of the organisations funded is  a faulty way to assess the overall effect on global wealth inequality - the main effect these organisations will have is via the actions they take rather than the take home pay of their staff.  

Consider pandemic risk. Open Phil has been the main funder in this space for several years and if they had their way, the world would have been much better prepared for covid. Covid has been a complete disaster for low and middle-income countries, and has driven millions into extreme poverty. I don't think the net effect of pandemic preparedness funding is bad for the global poor. Similarly, with AI safety, if you actually believe that transformative AI will arrive in 20 years, then ensuring the development of transformative AI goes well is extremely consequential for people in low and middle-income countries. 

Replies from: Guy Raveh
comment by Guy Raveh · 2021-12-29T10:15:54.922Z · EA(p) · GW(p)

I did not mean the demographic composition of organisations to be the main contributor to their impact. Rather, what I'm saying is that that is the only impact we can be completely sure of. Any further impact depends on your beliefs regarding the value of the kind of work done.

I personally will probably go to the EA Long Term Future Fund for funding in the not so distant future. My preferred career is in beneficial AI. So obviously I believe the work in the area has value that makes it worth putting money into.

But looking at it as an outsider, it's obvious that I (Guy) have an incentive to evaluate that work as important, seeing as I may personally profit from that view. Rather, if you think AI risk - or even existential risk as a whole - is some orders of magnitude less important than it's laid out to be in EA - then the only straightforward impact of supporting X-risk research is in who gets the money and who does not. If you think any AI research is actually harmful, then the expected value of funding this is even worse.

comment by ESRogs · 2021-12-31T00:18:06.485Z · EA(p) · GW(p)

opinion which ... is mainly advocated by billionaires

Do you mean that most people advocating for techno-positive longtermist concern for x-risk are billionaires, or that most billionaires so advocate?

I don't think either claim is true (or even close to true).

Replies from: anonymousEA
comment by anonymousEA · 2021-12-31T03:51:52.850Z · EA(p) · GW(p)

It's also not the claim being made:

...minority opinion which happens to reinforce existing power relations and is mainly advocated by billionaires and those funded by [them]...

Replies from: ESRogs
comment by ESRogs · 2022-01-05T21:30:35.896Z · EA(p) · GW(p)

You're right, my mistake.

comment by James Ozden (JamesOz) · 2021-12-28T19:25:06.770Z · EA(p) · GW(p)

I had the same reaction, in that the dominant worldview today views extreme levels of animal suffering as acceptable, but most of us would agree it's not and believe we should do our utmost to change it.

I think the difference between the examples you've mentioned and the parallel to existential risk is with the qualifier Luke and Carla provided in the text (emphasis mine):

Tying the study of a topic that fundamentally affects the whole of humanity to a niche belief system championed mainly by an unrepresentative, powerful minority of the world is undemocratic and philosophically tenuous

Where the key difference is that the study of existential risk is tied to the fate of humanity in ways that animal welfare, misogyny and racism aren't (arguably the latter two examples might influence the direction of humanity significantly but probably not whether humanity ceases to exist). 

I'm not necessarily convinced that existential risk studies is so different to the examples you've mentioned that we need to approach it in a much more democratic way but I do think the qualifiers given by the authors mean the analogies you've drawn aren't that water-tight.

comment by Dr. David Mathers · 2021-12-28T23:55:33.453Z · EA(p) · GW(p)

Most whites had abhorrent views on race at certain points in the past (probably not before 1500 though, unless Medieval antisemitism counts) but that is weak evidence that most people did, since whites were always a minority. I'm not sure many of us know what, if any, racial views people held in Nigeria, Iran, China or India in 1780.

Replies from: Guy Raveh, Halstead
comment by Guy Raveh · 2021-12-29T08:52:24.059Z · EA(p) · GW(p)

I seem to remember learning about rampant racism in China helping to cause the Taiping rebellion? And there are enormous amounts of racism and sectarianism today outside Western countries - look at the Rohingya genocide, the Rwanda genocide, the Nigerian civil war, the current Ethiopian civil war, and the Lebanese political crisis for a few examples.

Every one of these examples should be taken with skepticism as this is far outside my area of expertise. But while I agree with the sentiment that we often conflate the history of the world with the history of white people, I'm not sure it's true in this specific case.

Replies from: Dr. David Mathers
comment by Dr. David Mathers · 2021-12-29T13:22:47.217Z · EA(p) · GW(p)

Yeah, you're probably right. It's just I got a strong "history=Western history" vibe from the comment I was responding to, but maybe that was unfair!

comment by John G. Halstead (Halstead) · 2021-12-29T14:22:10.715Z · EA(p) · GW(p)

I'd be pretty surprised if almost everyone didn't have strongly racist views in 1780. Anti-black views are very prevalent in India and China today, as I understand it. E.g. Gandhi had pretty racist attitudes.

comment by jackva · 2021-12-28T18:58:07.954Z · EA(p) · GW(p)

I think there is a "not" missing: "view if it is held by a majority of the global population."

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-28T19:33:36.467Z · EA(p) · GW(p)

sorry, yeah corrected

comment by James Ozden (JamesOz) · 2021-12-28T19:35:52.616Z · EA(p) · GW(p)

Minor point, but I don't think you've described citizen's assemblies in the most charitable way. Yes, an assembly is a representative sortition of the public, so its members don't necessarily have expertise in any particular field, but there is generally a lot of input from experts in various fields who inform the assembly. So in reality, a citizen's assembly on climate would be a random selection of representative citizens who would be informed/educated by IPCC (or similar) scientists, and who would then deliberate amongst themselves to reach their conclusions. These conclusions, one would hope, would be similar to what the scientists would recommend themselves, as they are based on information largely provided by them.

For people that might be interested, here is the report of the Climate Assembly (a citizen's assembly on climate commissioned by the UK government) that in my opinion, had some fairly reasonable policy suggestions. You can also watch a documentary about it by the BBC here.

comment by CarlaZoeC · 2021-12-28T22:37:27.438Z · EA(p) · GW(p)

The paper never spoke about getting rid of experts or replacing experts with citizens. So no.

Many countries now run citizen assemblies on climate change, which I'm sure you're aware of. They do not aim to replace the role of IPCC. 

EA or the field of existential risk cannot be equated with the IPCC. 

To your second point, no, this does not follow at all. Democracy as a procedure is not to be equated with (and thus limited to) governments that grant you a vote every so often. You will find references to the relevant literature on democratic experimentation in the paper's final section, which focusses on democracy.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-28T22:46:39.622Z · EA(p) · GW(p)

It would help for clarity if I understood your stance on central bank independence. This seems to produce better outcomes but also seems undemocratic. Do you think this would be legitimate?

It still seems like, if I were Gates, donating my money to the US govt would be more democratic than eg spending it on climate advocacy? Is the vision for Open phil that they set up a citizen's assembly that is representative of the global population and have that decide how to spend the money, by majority vote?

Replies from: Davidmanheim, JamesOz
comment by Davidmanheim · 2021-12-29T07:53:59.417Z · EA(p) · GW(p)

As in the discussion above, I think you're being disingenuous by claiming government is "more democratic." 

And if you were Gates, I'd argue that it would be even more democratic to allow the IPCC, which is more globally representative and less dominated by special interests than the US government, to guide where you spend your money than it would to allow the US government to do so. And given how much the Gates Foundation engages with international orgs and allows them to guide its giving, I think that "hand it to the US government" would plausibly be a less democratic alternative than the current approach, which seems to be to allow GAVI, the WHO, and the IPCC to suggest where the money can best be spent.

And having Open Phil convene a consensus driven international body on longtermism actually seems somewhat similar to what the CTLR futureproof report co-written by Toby Ord suggests when it says the UK should lead by, "creating and then leading a global extreme risks network," and push for "a Treaty on the Risks to the Future of Humanity." Perhaps you don't think that's a good idea, but I'm unclear why you would treat it as a reductio, except in the most straw-man form.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-29T09:56:06.724Z · EA(p) · GW(p)

Hi David, I wasn't being disingenuous. Here, you say "I think you're being disingenuous by claiming government is "more democratic."  In your comment above you say "One way to make things more democratic is to have government handle it, but it's clearly not the only way." Doesn't this grant that having the government decide is more democratic? These statements seem inconsistent. 

So, to clarify before we discuss the idea, is your view that all global climate philanthropy should be donated to the IPCC?

I think there is a difference between having a citizen's assembly decide what to do with all global philanthropic money (which, as I understand it, is the implication of the article), and having a citizen's assembly whose express goal is protecting the long term (which is not the implication of the article). If all longtermist funding were allocated via the first mechanism, then I think it highly likely that funding for AI safety, engineered pandemics and nuclear war would fall dramatically.

The treaty in the CTLR report seems like a good idea but seems quite different to the idea of democratic control proposed in the article.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-12-29T12:59:48.114Z · EA(p) · GW(p)

Comparing how democratic government is to different things yields different results, because democratic isn't binary. Yes, unitary action by a single actor is less democratic than having government handle things, and no, having the US government handle things is not clearly more democratic than deferring to the IPCC. But, as I'm sure you know, the IPCC isn't a funding body, nor does it itself fight climate change. So no, obviously climate philanthropy shouldn't all go to them.
 

I think there is a difference between having a citizen's assembly decide what to do with all global philanthropic money (which as I understand it, is the implication of the article),

No, and clearly you need to go re-read the paper. You also might want to look into the citations Zoe suggested that you read above, about what "democratic" means, since you keep interpreting in the same simplistic and usually incorrect way, as equivalent to having everyone vote about what to do.

The treaty in the CTLR report seems like a good idea but seems quite different to the idea of democratic control proposed in the article.

This goes back to the weird misunderstanding that democratic is binary, and that it always refers to control. First, global engagement and the treaty are two different things they advise the UK government on. Second, I'm sure the authors can say for themselves whether they see international deliberations and treaties as a route to more democratic input, but I'd assume they would say it's absolutely a step in the direction they are pointing towards.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-29T14:47:05.121Z · EA(p) · GW(p)

Hi David. We were initially discussing whether giving the money to govts would be more democratic. You suggested this was a patently mad idea but then seemed to agree with it. 

Here is how the authors define democracy: "We understand democracy here in accordance with Landemore as the rule of the cognitively diverse many who are entitled to equal decision-making power and partake in a democratic procedure that includes both a deliberative element and one of preference aggregation (such as majority voting)"

You say: "You also might want to look into the citations Zoe suggested that you read above, about what "democratic" means, since you keep interpreting in the same simplistic and usually incorrect way, as equivalent to having everyone vote about what to do."

Equal political power and preference aggregation entail majority rule, lottery voting or sortition. Your own view that equal votes aren't a necessary condition of democracy seems to be in tension with the authors of the article.

A lot of the results showing the wisdom of democratic procedures depend on certain assumptions, especially that voters are not systematically biased. In the real world, this isn't true, so undemocratic procedures can sometimes do better. Independent central banks are one example, as is effective philanthropy.

For context, I have read a lot of this literature on democracy and did my doctoral thesis on the topic. I argued here [EA · GW] that few democratic theorists actually endorse these criticisms of philanthropy. 

Replies from: Davidmanheim
comment by Davidmanheim · 2021-12-29T18:33:10.700Z · EA(p) · GW(p)

You're using a word differently from how they explicitly say they are using the same word. I agree that it's confusing, but will again note that consensus decision-making is democratic in the sense they use, and yet is none of the options you mention. (And again, the IPCC is a great example of a democratic deliberative body which seems to fulfil the criteria you've laid out, and it's the one they cite explicitly.)

On the validity and usefulness of democracy as a method of state governance, you've made a very reasonable case that it would be ineffective for charity. But in the more general sense that Landemore uses it, which includes how institutions other than governments can account for democratic preferences, I'm not sure the same argument applies.

That said, I strongly disagree with Cremer and Kemp about the usefulness of this approach on very different grounds. I think that both consensus and other democratic methods, if used for funding, rather than for governance, would make hits based giving and policy entrepreneurship impossible, not to mention being fundamentally incompatible with finding neglected causes.

comment by James Ozden (JamesOz) · 2021-12-29T00:18:49.560Z · EA(p) · GW(p)

I think your Open Phil example could be an interesting experiment. Do you think that if Open Phil commissioned a citizen's assembly to allocate their existential risk spending, with input given by their researchers / program officers, it would be wildly different to what they would do themselves?

In any scenario, I think it would be quite interesting: surely if our worldviews and reasoning are strong enough to support big unusual claims (e.g. strong longtermism), we should be able to convince a random group of people of them? And if not, is that a problem with the people selected, our communication skills, or the thinking itself? I personally don't think it would be a problem with the people (see past successes of citizen's assemblies)*, so shouldn't we be testing our theories to see if they make sense under different worldviews and demographic backgrounds? And if they don't seem robust to other people, we should probably try to integrate the reasons why (within reason, of course).

*There's probably an argument to be made here that we wouldn't necessarily expect the allocation from this representative group, even when informed perfectly by experts, to be the optimal allocation of resources, so we wouldn't be maximising utility / doing the most good. This is probably true, but I guess balancing this against moral uncertainty is the trade-off we have to live with? Quite unsure on this though; it seems fuzzy.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-29T10:08:43.097Z · EA(p) · GW(p)

Hi James, I do think it would be interesting to see what a true global citizen's assembly with complete free rein would decide. I would prefer that the experiment were not done with Open Phil's money as the opportunity cost would be very high. A citizen's assembly with longtermist aims would also be interesting, but would be different to what is proposed in the article. Pre-setting the aims of such an assembly seems undemocratic.

I would be pretty pessimistic about convincing lots of people of something like longtermism in a citizen's assembly - at least, I think funding for things like AI, engineered viruses and nuclear war would fall a fair amount. The median global citizen is strongly religious, probably holds strong nationalist and socialist beliefs (per the literature on voter preferences in rich countries, which probably holds in poorer countries too), is unwilling to pay high carbon taxes, is homophobic, etc.

Replies from: JamesOz
comment by James Ozden (JamesOz) · 2021-12-29T17:32:34.220Z · EA(p) · GW(p)

For what it's worth, I wasn't genuinely saying we should hold a citizen's assembly to decide what we do with all of Open Phil's money, I just thought it was an interesting thought experiment. I'm not sure I agree that the pre-setting of the aims of an assembly is undemocratic, however, as surely all citizen's assemblies need an initial question to start from? That seems to have been the case for previous assemblies (climate, abortion, etc.).

To play devil's advocate, I'm not sure your points about the average global citizen being homophobic, religious, socialist, etc. actually matter that much when it comes to people deciding where to allocate funding for existential risk. I can't see any relationship between beliefs about which existential risks are the most severe and attitudes towards queer people, religion, or willingness to pay carbon taxes (assuming the pot of funding they allocate is fixed and doesn't affect their taxes).

Also, I don't think you've given much convincing evidence, beyond your intuition, that a citizen's assembly would lead to funding for key issues falling a fair amount vs decisions by OP program officers. I can't say I have much evidence myself, except that the studies provided in the report (1, 2, and 3 to a degree) suggest the exact opposite: a diverse group of actors performs better than a higher-ability solo actor. In addition, if we judge the success of a citizen's assembly by how well it matches our current decisions (e.g. the same amount of biorisk, nuclear and AI funding), I think we're missing the point a bit. That assumes we've currently got the allocation perfectly right, which is a central challenge of the paper above: the allocation may look perfect to a select few people, but that by no means makes it actually true.

Replies from: GMcGowan
comment by GMcGowan · 2021-12-29T17:52:11.157Z · EA(p) · GW(p)

I'm not sure your points about the average global citizen being homophobic, religious, socialist, etc., actually matter that much when it comes to people deciding where they should allocate funding for existential risk

I vaguely remember reading something about religious people worrying less about extinction, but I don't remember whether that was just intuition or an actual study. They may also be predisposed to care less about certain kinds of risk, e.g. not worrying about AI as they perceive it to be impossible.

(these are pretty minor points though)

comment by Davidmanheim · 2021-12-28T21:45:07.871Z · EA(p) · GW(p)

I think you're unaware of the diversity and approach of the IPCC. It is incredibly interdisciplinary, consensus driven, and represents stakeholders around the world faithfully. You should look into what they do and their process more carefully before citing them as an example.

Then, you conflated "democratically" with "via governments, through those government's processes" which is either a bizarre misunderstanding, or a strange rhetorical game you're playing with terminology.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-28T22:15:14.701Z · EA(p) · GW(p)

As mentioned, the vast majority of the authors are from a similar demographic background to EAs. The IPCC also produces lots of policy-relevant material on eg the social costs of climate change and the best paths to mitigation, and this material is mainly produced by white males.

Here is a description of climate philanthropy as practiced today in the United States. Lots of  unelected rich people who disproportionately care about climate change spend hundreds of millions of pounds advocating for actions and policies that they prefer. It would be a democratic improvement to have that decision made by the US government, because at least politicians are subject to competitive elections. So, having the decision made by the US government would be more democratic. Which part of this do you disagree with?

It seems a bit weird to class this as a 'bizarre misunderstanding' since many of the people who make the democracy criticism of philanthropy, such as Rob Reich, do in fact argue that the money should be spent by the government.   

Replies from: Davidmanheim
comment by Davidmanheim · 2021-12-29T07:35:44.848Z · EA(p) · GW(p)

"the vast majority of the authors are from a similar demographic background to EAs... mainly determined by white males"

A key difference is having both representation of those with other perspectives and interests, and a process which is consensus driven and inclusive.

"It would be a democratic improvement to have that decision made by the US government, because at least politicians are subject to competitive elections. So, having the decision made by the US government would be more democratic. Which part of this do you disagree with?"

One way to make things more democratic is to have government handle it, but it's clearly not the only way. Another way would, again, be to make things more broadly representative and consensus-driven. (And the switch from "IPCC" to "climate philanthropy as practiced today in the United States" was definitely a good rhetorical trick, but it wasn't germane to either the paper's discussion of the IPCC or your original point, so I'm not going to engage in discussing it.)

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-29T09:41:25.893Z · EA(p) · GW(p)

In the second bit, I wasn't talking about the IPCC; I was talking about your second point: "you conflated "democratically" with "via governments, through those government's processes"". The reason I mentioned climate philanthropy is that it was what I mentioned in my original comment you responded to: if you think philanthropy is undemocratic, then that also applies to climate philanthropy, which Luke Kemp is strongly in favour of, so this is an interesting test case for their argument.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-12-29T13:27:03.246Z · EA(p) · GW(p)

First, are you backing away from your initial claims about the IPCC, since it is in fact consensus-based with stakeholders, rather than being either a direct democracy or a unilateralist decision?

Second, I'm not interested in debating what you say Luke Kemp thinks about climate philanthropy; I don't know anything about his opinions, nor is it germane to this discussion.
But in the claims you say are about his views, you keep assuming that the only democratic alternatives to whatever we're discussing are a direct democracy, control by a citizens' assembly (without expertise), or handing things to governments. Regardless of Luke's views elsewhere, that's certainly not what they meant in this paper. Perhaps this quote will be helpful:

We understand democracy here in accordance with Landemore as the rule of the cognitively diverse many who are entitled to equal decision-making power and partake in a democratic procedure that includes both a deliberative element and one of preference aggregation (such as majority voting)

As Landemore, whom the paper cites several times, explains, institutions work better when technocratic advice sits within an inclusive decision procedure, rather than either having technocrats in charge or having a direct democracy.

Replies from: Halstead, Pablo_Stafforini
comment by John G. Halstead (Halstead) · 2021-12-29T14:33:24.313Z · EA(p) · GW(p)

Hello. Yes, I think it would be fair to back away a bit from the claims about the IPCC. It remains true that most climate scientists and economists are white men and that they have a disproportionate influence on the content of the IPCC reports. Nonetheless, the case was not as clear-cut as I initially suggested.

I find the second point a bit strange. Isn't it highly relevant to understand whether the views of the author of the piece we are discussing are consistent or not? 

It's also useful to know what the implications of the ideas expressed actually are. They explicitly give a citizen's assembly as an example of a democratic procedure. Even if it is some other deliberative mechanism followed by a majority vote, I would still like to know what they think about stopping all climate philanthropy and handing decisions over all money to such a body. It's pretty hard to square a central role for expertise with a normative idea of political equality.

Replies from: Davidmanheim
comment by Davidmanheim · 2022-01-02T12:57:48.075Z · EA(p) · GW(p)

Isn't it highly relevant to understand whether the views of the author of the piece we are discussing are consistent or not? 

No, it really, really isn't useful to discuss whether people are wrong generally in order to evaluate the piece.

They explicitly give a citizen's assembly as an example of a democratic procedure. Even if it is some other deliberative mechanism followed by a majority vote...

They don't suggest that the citizen's assemblies use majority voting, and in fact say that they would make recommendations and suggestions, not vote on what to do. So again, stop conflating democratic with first-past-the-post voting.

It's pretty hard to square a central role for expertise with a normative idea of political equality. 

You keep trying to push this reductio ad absurdum as their actual position. First, Zoe explicitly said, responding to you, "The paper never spoke about getting rid of experts or replacing experts with citizens."

Also, are you actually saying that political equality is fundamentally incompatible with expertise? Because that's a bold and disturbing claim coming from someone who did a doctoral thesis on democracy - maybe you can cite some sources or explain?

comment by Pablo (Pablo_Stafforini) · 2021-12-29T13:39:08.070Z · EA(p) · GW(p)

nor it is germane to this discussion

I do think it is germane to the discussion, because it helps to clarify what the authors are claiming and whether they are applying their claims consistently. 

Replies from: Davidmanheim
comment by Davidmanheim · 2021-12-29T18:22:39.230Z · EA(p) · GW(p)

I was discussing this paper, which doesn't discuss climate philanthropy, not everything they have ever stated. I don't know what else they've claimed, and I'm not interested in a discussion of it. 

comment by James Ozden (JamesOz) · 2021-12-28T19:58:08.708Z · EA(p) · GW(p)

You say that decisions about which risks to take should be made democratically. The implication of this seems to be that everyone, and not just EAs, who is aiming to do good with their resources should donate only to their own government. Their govt could then decide how to spend the money democratically.

I'm not fully sure that deciding which risks to take seriously in a democratic fashion logically leads to donating all of your money to the government. Some reasons I think this:

  • That implies we all think our governments are well-functioning democracies, but I (amongst many others) don't believe that to be true. It's a fairly common sentiment that political myopia, vested interests and other influences mean governments don't implement the policies that are best for their populations.
  • As I mentioned in another comment, I think the authors are saying that because existential risks affect the entirety of humanity in a unique way, this is one particular area where we should be deciding things more democratically. This isn't necessarily the case for spending on education, healthcare, animal welfare, etc., so there it would make sense to donate to institutions you believe are more effective, and the bar for democratic input is lower. The quote from the paper that makes me think this is:

Tying the study of a topic that fundamentally affects the whole of humanity to a niche belief system championed mainly by an unrepresentative, powerful minority of the world is undemocratic and philosophically tenuous.

  • Thirdly, I think this point is weaker, but most political parties aren't elected by a majority of the population. One cherry-picked example: only 45% of UK voters voted for the Conservative party, on a 67% election turnout, meaning that most of the country didn't actually vote for the winning party. It then seems odd that, if you think the outcome would have been different given a higher voter turnout (closer to "true democracy"), you would give all your donations to the winning party.

Note - I don't necessarily agree with the premise we should prioritise risks democratically but I also don't think what you've said re donating all of our money to the government is the logical conclusion from that statement.

comment by anonea2021 · 2021-12-28T19:37:56.829Z · EA(p) · GW(p)

Not really. The IPCC  

(...) provides regular assessments of the scientific basis of climate change, its impacts and future risks, and options for adaptation and mitigation.

It is not a political thinktank (even though climate risk deniers and minimisers might like to claim it is); it is funded at least 65% by nation states and the UN (44% from the USA in 2018, 25% from the next largest), inheriting their democratic legitimacy w.r.t. funding; and it fundamentally deals with something much more narrowly defined, empirically verifiable and graspable than the main TUA causes. It suffers from a lot of the same problems w.r.t. representation and democracy as all of science and society, but it's not nearly as donor-alignment-driven as the targets of the article.

comment by MatthewDahlhausen · 2021-12-28T21:27:49.790Z · EA(p) · GW(p)

The IPCC reports have hundreds of authors from all over the world: https://www.ipcc.ch/authors/.  It is misleading to say the IPCC is homogeneous and that the authors are "disproportionately white very well-educated males in the West".  Every country and a large variety of civil institutions are represented at the conference of parties, and they use a consensus process.

comment by Guy Raveh · 2021-12-29T08:38:08.310Z · EA(p) · GW(p)

The implication of this seems to be that everyone, and not just EAs, who is aiming to do good with their resources should donate only to their own government. Their govt could then decide how to spend the money democratically. Is that implication embraced? This would eg include all climate philanthropy, which is now at $5-9bn per year.

As always, I don't think this implication is necessarily bad. Individual philanthropy is unsustainable and undemocratic, and indeed it might do more harm than good: in public perception it takes the weight off the government's shoulders, while in practice contributing only a fraction of the effort.

I don't think this is the only side of the story, of course. I donate myself to organisations who I believe do important work but aren't in the consensus. I think governments are good for making some democratic decisions, but are very undemocratic on others, e.g. by only representing the country's population and not the entire world's, or by neglecting to represent animals or future generations. And I think organisations that operate independently of the government are good for putting checks and balances on it and preventing consolidation of power.

comment by John G. Halstead (Halstead) · 2021-12-28T17:36:45.092Z · EA(p) · GW(p)

Regarding the risk that longtermism could lead people to violate rights, it seems to me that you could make exactly the same argument about any view that prioritises between different things. For instance, as Peter Singer has pointed out, billions of animals are tortured and killed every year. By exactly analogous reasoning, one could say that other problems 'dwindle into irrelevance' as other values are sacrificed at the altar of the astronomical expected value of preventing factory farming. So this would justify animal rights terrorism and other abhorrent actions.

Replies from: Lukas_Gloor, lukasberglund, MichaelStJules
comment by Lukas_Gloor · 2021-12-28T21:42:54.562Z · EA(p) · GW(p)

"Don't be fanatical about utilitarian or longtermist concerns and don't take actions that violate common sense morality"  is a message that longtermists have been emphasizing from the very beginnings of this social movement, and quite a lot.

Some examples: 

More generally, there's often at least a full paragraph devoted to this topic when someone writes a longer overview article on longtermism or writes about particularly dicey implications with outsized moral stakes. I also remember this being a presentation or conversation topic at many EA conferences.

I haven't read the corresponding section in the paper that the OP refers to, yet, but I skimmed the literature section and found none of the sources I linked to above. If the paper criticizes longtermism on grounds of this sort of implication and fails to mention that longtermists have been aware of this and are putting in a lot of work to make sure people don't come away with such takes, then that seems like a major omission. 

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-28T22:40:52.239Z · EA(p) · GW(p)

I also agree with this. There are many reasons for consequentialists to respect common sense morality. 

I was just making the point that the rhetorical argument about rights can pretty much be made about any moral view. eg the authors seem to believe that degrowth would be a good idea, and it is a built-in feature of degrowth that it would have enormous humanitarian costs.

Replies from: anonymousEA
comment by anonymousEA · 2021-12-28T22:58:49.669Z · EA(p) · GW(p)

I don't want to dip into discussions that don't directly concern the issues I created this account to discuss, but your characterisation of degrowth as having "enormous humanitarian costs" "built in" is flatly untrue in a way that is obvious to anyone who has read any degrowth literature, e.g. Kallis or Hickel.

This is not the only time you have mischaracterised democratic and ecological positions on this post, please stop.

Replies from: Halstead, Guy Raveh
comment by John G. Halstead (Halstead) · 2021-12-28T23:03:14.137Z · EA(p) · GW(p)

OK, see my comment below on covid and degrowth. It is difficult to see how we could reach a sustainable state via degrowth without shrinking the population by several billion and reducing everyone's living standards to pre-industrial levels, i.e. most people living on <$2 per day.

Replies from: anonymousEA
comment by anonymousEA · 2021-12-28T23:18:51.092Z · EA(p) · GW(p)

It seems that you fundamentally misunderstand degrowth. For an introduction I suggest this:
https://www.annualreviews.org/doi/abs/10.1146/annurev-environ-102017-025941

Replies from: willbradshaw
comment by Will Bradshaw (willbradshaw) · 2021-12-29T08:01:46.894Z · EA(p) · GW(p)

If you (or someone else) wants to defend degrowth on the Forum, it would probably be more useful to actually make degrowth arguments, rather than linking to a polemic that isn't even trying to make an objective assessment.

Replies from: anonymousEA
comment by anonymousEA · 2021-12-29T12:06:16.648Z · EA(p) · GW(p)

I'm not sure that there are any attempted-objective assessments of degrowth (at least, not that I've found) and the post I linked provides an overview of the topic as understood by most of its key proponents. If I wanted to introduce people to EA, would it be inappropriate to offer them a copy of Doing Good Better?

I didn't make specific arguments because frankly I shouldn't need to. Someone who has written about climate change should not be making unequivocally untrue statements about basic aspects of a core strand of environmental economics. My assumption was that, given Halstead's experience, his mischaracterizations could not have been due to a lack of knowledge.

This will probably be dogpiled due to "tone" but to be honest I have rewritten this comment twice to move away from clear statements of my views towards more EA-friendly language to make it as charitable as possible. There just aren't many nice ways of saying that, well...

you see the problem? 

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-29T14:14:18.465Z · EA(p) · GW(p)

I agree that I don't see anything wrong with linking to that paper. 

I do think my view is quite defensible. eg in the discussion of degrowth below, the author says "We could very plausibly stop or at least delay climate change by drastically reducing the use of technology right now (COVID bought us a few months just by shutting down planes although that has "recovered" now)". The experience of the massive global humanitarian and economic disaster of covid seems like a very poor advert for the position 'we can make degrowth work if only we try': it has killed 15 million people, and hundreds of millions of people have been locked indoors for months.

Replies from: anonymousEA
comment by anonymousEA · 2021-12-29T15:05:10.708Z · EA(p) · GW(p)

I really don't see the link between reducing air travel and the fact that COVID killed millions of people and necessitated lockdown measures.

I'm going to disengage now. Repeatedly mischaracterizing opposing views and deploying non-sequiturs for rhetorical reasons do not indicate to me that this will be a productive conversation.

comment by Guy Raveh · 2021-12-29T09:16:55.048Z · EA(p) · GW(p)

obvious to anyone who has read any degrowth literature, e.g. Kallis or Hickel.

...is a non-argument that's both condescending and helps only a tiny fraction of people reading your comment.

Replies from: anonymousEA
comment by anonymousEA · 2021-12-29T12:15:06.334Z · EA(p) · GW(p)

See my reply to Will above. It's a fair point that it's not very helpful to spectators (besides indicating that the claim referred to should perhaps not be taken at face value) but my intention was to reply to Halstead rather than the audience.

In my view, it would be condescending if I were referring to most people, but not in this case. My point is that someone who has written about climate issues more than once in the past, and who is considered something of an authority on them within EA, can be expected to have basic background knowledge of the topic.

If we are going to have a hierarchical culture led by "thought leaders", I think we should at least hold them to a certain standard.

Replies from: aarongertler, aarongertler
comment by Aaron Gertler (aarongertler) · 2021-12-30T08:52:29.789Z · EA(p) · GW(p)

I think Halstead knows what degrowth advocates claim about degrowth (that it won't have built-in humanitarian costs). And I think he disagrees with them, which isn't the same as not understanding their arguments.

Imagine people arguing whether to invade Iraq in the year following the 9/11 attacks. One of them points out that invading the country will involve enormous built-in humanitarian costs. Their interlocutor replies:

"Your characterization of an Iraq invasion as having "enormous humanitarian costs" "built in" is flatly untrue in a way that is obvious to anyone who has read any Iraq invasion literature, e.g. Rumsfeld and Powell."

The second person may genuinely see Rumsfeld and Powell as experts worth listening to. The first person may see their arguments as clearly wrong, and not even worth addressing (if they think it's common sense that war will incur humanitarian costs).

The first person isn't necessarily right — in 2002, there was lots of disagreement between experts on the outcome of an Iraq invasion! — but I wouldn't conclude that their words are "flatly untrue" or that they lack "basic background knowledge".

comment by Aaron Gertler (aarongertler) · 2021-12-30T09:17:03.411Z · EA(p) · GW(p)

As a moderator: the "basic background knowledge" point is skirting the boundaries of the Forum's norms; even if you didn't intend to condescend, I found it condescending, for the reasons I note in my other reply. 

The initial comment — which claims that Halstead is misrepresenting a position, when "he understands and disagrees" is also possible — also seems uncharitable. 

I do see this charitable reading as an understandable thing to miss, given that everyone is leaving brief comments about a complex question and there isn't much context. But I also think there are ways to say "I don't think you're taking position X seriously enough" without saying "you are lying about the existence of position X, please stop lying".

Replies from: anonymousEA
comment by anonymousEA · 2021-12-30T11:14:45.189Z · EA(p) · GW(p)

But it is basic background knowledge, and that point needs to be made clear to those less familiar with the topic! This isn't an issue of understanding and disagreeing, as demonstrated by his non-sequitur about COVID, if nothing else.

If, for instance, someone who has written about AI more than once argues that the Chinese government funds AI research for solely humanitarian reasons, you have two choices: they are being honest but ignorant (which is unlikely, embarrassing for them, and worrying for any community that treats them as an authority), or they are being dishonest (which is bad for everyone). There is no "charitable" position here.

I understand and agree with the discourse norms here, but if someone is demonstrably, repeatedly, unequivocally acting in bad faith then others must be able to call that out.

Replies from: jackva, aarongertler
comment by jackva · 2021-12-30T16:03:51.433Z · EA(p) · GW(p)

It is basic background knowledge that degrowth literature exists (which John knows); it is not basic background knowledge that we "know" we could implement degrowth without major humanitarian consequences, as degrowth has never been demonstrated at global scale. The opposite is not true either (so you might characterise Halstead as over-confident).

Degrowth is not a strategy we could clearly implement to tackle the climate challenge (we do not know whether it is politically or techno-economically feasible, and one can plausibly be quite sceptical), and we do not know whether it could be implemented without significant humanitarian consequences; a couple of green thinkers finding it feasible and desirable is not sufficient evidence to speak of "knowing".

comment by Aaron Gertler (aarongertler) · 2022-01-01T12:16:37.282Z · EA(p) · GW(p)

If, for instance, someone who has written about AI more than once argues that the Chinese government funds AI research for solely humanitarian reasons...

I think there are a bunch of examples we could use here, which fall along a spectrum of "believability" or something like that.

Where the unbelievable end of the spectrum is e.g. "China has never imprisoned a Uyghur who wasn't an active terrorist", and the believable end of the spectrum is e.g. "gravity is what makes objects fall".

If someone argues that objects fall because of something something the luminiferous aether,  it seems really unlikely that "they have a background in physics but just disagree about gravity" is the right explanation.

If someone argues that China actually imprisons many non-terrorist Uyghurs, it seems really likely that "they have a background in the Chinese government's claims but just disagree with the Chinese government" is the right explanation.

So what about someone who argues that degrowth is very likely to lead to "enormous humanitarian costs"? How likely is it that "they have a background in the claims of Hickel et al. but disagree" is the right explanation, vs. something like "they've never read Hickel" or "they believe Hickel is right but are lying"?

Moreover, is it "basic background knowledge" that degrowth would not be very likely to lead to "enormous humanitarian costs"? 

What you think of those questions seems to depend on how you feel about the degrowth question generally. To some people, it seems perfectly believable that we could realistically achieve degrowth without enormous humanitarian costs. To other people, this seems unbelievable.

I see Halstead as being on the "unbelievable" side and you as being on the "believable" side. Given that there are two sides to the question, with some number of reasonable scholars on each side, Halstead would ideally hedge his language ("degrowth would likely have enormous humanitarian costs" rather than "built-in feature"). And you'd ideally hedge your language ("fails to address reasonable arguments from people like Hickel" rather than "flatly untrue in a way that is obvious").

*****

I cared more about your reply than Halstead's comment because, while neither person is doing the ideal hedge thing, your comment was more rude/aggressive than Halstead's.

(I could imagine someone reading his comment as insulting to the authors, but I personally read it as "he thinks the authors are deliberately making a tradeoff of one value for another" rather than "he thinks the authors support something that is clearly monstrous".)

To me, the situation reads as one person making contentious claim X, and the other saying "X is flatly wrong in a way that is obvious to anyone who reads contentious author Y, stop mischaracterizing the positions of people like author Y" — when the first person never mentioned author Y. 

Perhaps the first person should have mentioned author Y somewhere, if only to say "I disagree with them" — in this case, author Y is pretty famous for their views — but even so, a better response is "I think X is wrong because of the points made by author Y".

*****

I'd feel the same way even if someone were making some contentious statement about EA. And I hope that I'd respond to e.g. "effective altruism neglects systemic change" with something like "I think article X shows this isn't true, why are you saying this?"

I'd feel differently if that person were posting the same kinds of comments frequently, and never responding to anyone's follow-up questions or counterarguments. Given your initial comment, maybe that's how you feel about Halstead + degrowth? (Though if that's the case, I still think the burden of proof is on the person accusing another of bad faith, and they should link to other cases of the person failing to engage.)

comment by berglund (lukasberglund) · 2021-12-28T20:20:50.662Z · EA(p) · GW(p)

I agree that there is an analogy to animal suffering here, but there's a difference in degree I think. To longtermists, the importance of future generations is many orders of magnitude higher than the importance of animal suffering is to animal welfare advocates. Therefore, I would claim, longtermists are more likely to ignore other non-longtermist considerations than animal welfare advocates would be.

comment by MichaelStJules · 2021-12-29T22:35:07.921Z · EA(p) · GW(p)

Depending on the view, legitimate self-defence and "other-defence" don't violate rights at all, and this seems close to common sense when applied to protect humans. Even deontological views could in principle endorse - but I think in practice today should condemn - coercively preventing individuals from harming nonhuman animals, including farmed animals, as argued in this paper, published in the Journal of Controversial Ideas, a journal led and edited by McMahan, Minerva and Singer. Of course, this conflicts with the views of most humans today, who don't extend similarly weighty rights/claims to nonhuman animals.

EDIT: I realize now I interpreted "rights" in moral terms (e.g. deontological terms), when you may have intended it to be interpreted legally.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2021-12-29T22:57:57.188Z · EA(p) · GW(p)

The longtermist could then argue that an analogous argument applies to "other-defence" of future generations. (In case there was any need to clarify: I am not making this argument, but I am also not making the argument that violence should be used to prevent nonhuman animals from being tortured.)

Separately, note that a similar objection also applies to many forms of non-totalist longtermism. On broad person-affecting views, for instance, the future likely contains an enormous number of future moral patients who will suffer greatly unless we do something about it. So these views could also be objected to on the grounds that they might lead people to cause serious harm in an attempt to prevent that suffering.

In general, I think it would be very helpful if critics of totalist longtermism made it clear what rival view in population ethics they themselves endorse (or what distribution of credences over rival views, if they are morally uncertain). The impression one gets from reading many of these critics is that they assume the problems they raise are unique to totalist longtermism, and that alternative views don't have different but comparably serious problems. But this assumption can't be taken for granted, given the known impossibility theorems and other results in population ethics. An argument is needed.

Replies from: MichaelStJules, MichaelStJules
comment by MichaelStJules · 2021-12-30T00:49:42.730Z · EA(p) · GW(p)

I realize now I interpreted "rights" in moral terms (e.g. deontological terms), when Halstead may have intended it to be interpreted legally. On some rights-based (or contractualist) views, some acts that violate humans' legal rights to protect nonhuman animals or future people could be morally permissible.

The longtermist could then argue that an analogous argument applies to "other-defence" of future generations.

I agree. I think rights-based (and contractualist) views are usually person-affecting, so while they could in principle endorse coercive action to prevent the violation of rights of future people, preventing someone's birth would not violate that then non-existent person's rights, and this is an important distinction to make. Involuntary extinction would plausibly violate many people's rights, but rights-based (and contractualist) views tend to be anti-aggregative (or at least limit aggregation), so while preventing extinction could be good on such views, it's not clear it would deserve the kind of priority it gets in EA. See this paper, for example, which I got from one of Torres' articles and takes a contractualist approach. I think a rights-based approach could treat it similarly.

It could also be the case that procreation violates the rights of future people pretty generally in practice, and then causing involuntary extinction might not violate rights at all in principle, but I don't get the impression that this view is common among deontologists and contractualists or people who adopt some deontological or contractualist elements in their views. I don't know how they would normally respond to this.

Considering "innocent threats" complicates things further, too, and it looks like there's disagreement over the permissibility of harming innocent threats to prevent harm caused by them.

Separately, note that a similar objection also applies to many forms of non-totalist longtermism. On broad person-affecting views, for instance, the future likely contains an enormous number of future moral patients who will suffer greatly unless we do something about it. So these views could also be objected to on the grounds that they might lead people to cause serious harm in an attempt to prevent that suffering.

I agree. However, again, on some non-consequentialist views, some coercive acts could be prohibited in some contexts, and when they are not, they would not necessarily violate rights at all. The original objection raised by Halstead concerns rights violations, not merely causing serious harm to prevent another (possibly greater) harm. Maybe this is a sneaky way to dodge the objection, and doesn't really dodge it at all, since there's a similar objection. Also, it depends on what's meant by "rights".

comment by MichaelStJules · 2021-12-30T01:42:33.874Z · EA(p) · GW(p)

Also, I think we should be clear about what kinds of serious harms would in principle be justified on a rights-based (or contractualist) view. Harming people who are innocent or not threats seems likely to violate rights and be impermissible on rights-based (and contractualist) views. This seems likely to apply to massive global surveillance and bombing civilian-populated regions, unless you can argue on such views that each person being surveilled or bombed is sufficiently a threat and harming innocent threats is permissible, or that collateral damage to innocent non-threats is permissible. I would guess statistical arguments about the probability of a random person being a threat are based on interpretations of these views that the people holding them would reject, or that the probability for each person being a threat would be too low to justify the harm to that person.

So, what kinds of objectionable harms could be justified on such views? I don't think most people would qualify as serious enough threats to justify harm to them to protect others, especially people in the far future.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2021-12-30T01:48:16.574Z · EA(p) · GW(p)

This seems like a fruitful area of research—I would like to see more exploration of this topic. I don't think I have anything interesting to say off the top of my head.

comment by slg (Simon_Grimm) · 2021-12-29T09:26:30.827Z · EA(p) · GW(p)

@CarlaZoeC [EA · GW] or Luke Kemp, could you create another forum post solely focused on your article? This might lead to more focused discussions, separating debate on community norms vs discussing arguments within your piece.

I also wanted to express that I'm sorry this experience has been so stressful. It's crucial to facilitate internal critique of EA, especially as the movement is becoming more powerful, and I feel pieces like yours are very useful to launch constructive discussions.

comment by ab (Avital Balwit) · 2021-12-28T21:40:34.824Z · EA(p) · GW(p)

Hey Zoe and Luke, thank you for posting this and for writing the paper! I just finished reading it and found it thoughtful and detailed, and it gave me a lot to think about. It is the best piece of criticism I have read, and I will recommend it to others looking for such criticism going forward. I can see the care, time, and revisions that went into the piece. I am very sorry to hear about your experience of writing it. I think you contributed something important, and wish you had been met with more support. I hope the community can read this post and learn from it so we can get a little closer to that ideal of how to handle, incorporate, and respond to criticism.

Replies from: aarongertler
comment by Aaron Gertler (aarongertler) · 2021-12-29T09:50:40.141Z · EA(p) · GW(p)

Note: I discuss Open Phil to some degree in this comment. I also start work there on January 3rd. These are my personal views, and do not represent my employer.

Epistemic status: Written late at night, in a rush, I'll probably regret some of this in the morning but (a) if I don't publish now, it won't happen, and (b) I did promise extra spice [EA · GW] after I retired.

I think you contributed something important, and wish you had been met with more support. 

It seems valuable to separate "support for the action of writing the paper" from "support for the arguments in the paper". My read is that the authors had a lot of the former, but less of the latter.

From the original post:

We were told by some that our critique is invalid because the community is already very cognitively diverse and in fact welcomes criticism. They also told us that there is no TUA, and if the approach does exist then it certainly isn’t dominant. 

While "invalid" seems like too strong a word for a critic to use (and I'd be disappointed in any critic who did use it), this sounds like people were asked to review/critique the paper and then offered reviews and critiques of the paper. 

Still, to the degree that there was any opposition for the action of writing the paper, that's a problem. To address something more concerning:

It was these same people that then tried to prevent this paper from being published. They did so largely out of fear that publishing might offend key funders who are aligned with the TUA. 

These individuals—often senior scholars within the field—told us in private that they were concerned that any critique of central figures in EA would result in an inability to secure funding from EA sources, such as Open Philanthropy. We don't know if these concerns are warranted. Nonetheless, any field that operates under such a chilling effect is neither free nor fair. 

I'm not sure what "prevent this paper from being published" means, but in the absence of other points, I assume it refers to the next point of discussion (the concern around access to funding).

I'm glad the authors point out that the concerns may not be warranted. But I've seen many people (not necessarily the authors) make arguments like "these concerns could be real, therefore they are real". There's a pervasive belief that Open Philanthropy must have a specific agenda they try to fund where X-risk is concerned, and that entire orgs might be blacklisted because individual authors within those orgs criticize that agenda.

The Future of Humanity Institute (one author's org) has dozens of researchers and has received a consistent flow of new grants from Open Phil. Based on everything I've ever seen Open Phil publish, and my knowledge of FHI's place in the X-risk world, it seems inconceivable that they'd have funding cut because of a single paper that presents a particular point of view. 

The same point applies beyond FHI, to other Open Phil grants. They've funded dozens of organizations in the AI field, with (I assume) hundreds of total scholars/thinkers in their employ; could it really be the case that at the time those grants were made, none of the people so funded had written things that ran counter to Open Phil's agenda (for example, calls for greater academic diversity within X-risk)?

Meanwhile, CSER (the other author's org) doesn't appear in Open Phil's grants database at all, and I can't find anything that looks like funding to CSER online at any point after 2015. If you assume this is related to ideological differences between Open Phil and CSER (I have no idea), this particular paper seems like it wouldn't change much. Open Phil can't cut funding it doesn't provide.

That is to say, if senior scholars expressed these concerns, I think they were unwarranted.

*****

Of course, I'm not a senior scholar myself. But I am someone who worked at CEA for three years, attended two Leaders Forums, and heard many internal/"backroom" conversations between senior leaders and/or big funders.

I'm also someone who doesn't rely on the EA world for funding (I have marketable skills and ample savings), is willing to criticize popular people even when it costs time and energy [EA(p) · GW(p)], and cares a lot about getting incentives and funding dynamics right. I created several of the Forum's criticism tags and helped to populate them.  I put Zvi's recent critical post in the EA Forum Digest.

I think there are things we don't do well. I've seen important people present weak counterarguments to good criticism without giving the questions as much thought as seemed warranted. I've seen interesting opportunities get lost because people were (in my view) too worried about the criticism that might follow. I've seen the kinds of things Ozzie Gooen talks about here (humans making human mistakes in prioritization, communication, etc.) I think that Ben Hoffman and Zvi have made a number of good points about problems with centralized funding and bad incentives.

But despite all that, I just can't wrap my head around the idea that the major EA figures I've known would see a solid, well-thought-through critique and decide, as a result, to stop funding the people or organizations involved. It seems counter to who they are as people, and counter to the vast effort they expend on reading criticism, asking for criticism, re-evaluating their own work and each other's work with a critical eye, etc.

I do think that I'm more trusting of people than the average person. It's possible that things are happening in backrooms that would appall me, and I just haven't seen them. But whenever one of these conversations comes up, it always seems to end in vague accusations without names attached or supporting documentation, even in cases where someone straight-up left the community. If things were anywhere near as bad as they've been represented, I would expect at least one smoking gun, beyond complaints about biased syllabi or "A was concerned that B would be mad".

For example: Phil Torres claims to have spent months gathering reports of censorship from people all over EA, but the resulting article was remarkably insubstantial. The single actual incident he mentions in the "canceled" section is a Facebook post being deleted by an unknown moderator in 2013. I know more detail about this case than Phil shares, and he left out some critical points:

  • The post being from 2013, when EA as a whole was much younger/less professional
  • The CEA employee who called the poster being a personal friend of theirs who wanted to talk about the post's ideas
  • The person who took down the post seeing this as a mistake, and something they wouldn't do today (did Phil try to find them, so he could ask them about the incident?)

If this was Phil's best example, where's the rest?

I'd be sad to see a smoking gun because of what it would mean for my relationship with a community I value. But I've spent a lot of time trying to find one anyway, because if my work is built on sand I want to know sooner rather than later. I've yet to find what I seek.

*****

There was one line that really concerned me:

By others we were accused of lacking academic rigour and harbouring bad intentions. 

"Lacking in rigor" sounds like a critique of the type the authors solicited (albeit one that I can imagine being presented unhelpfully).

"Harboring bad intentions" is a serious accusation to throw around, and one I'm actively angry to hear reviewers using in a case like this, where people are trying to present (somewhat) reasonable criticism and doing so with no clear incentive (rather than e.g. writing critical articles in outside publications to build a portfolio, as others have). 

I'd rather have meta-discussion of the paper's support be centered on this point, rather than the "hypothetical loss of funding" point, at least until we have evidence that the concerns of the senior scholars are based on actual decisions or conversations.

Replies from: CarlaZoeC
comment by CarlaZoeC · 2021-12-29T10:54:25.671Z · EA(p) · GW(p)

This is a great comment, thank you for writing it. I agree - I too have not seen sufficient evidence that could warrant the reaction of these senior scholars. We tried to get evidence from them and tried to understand why they explicitly feared that OpenPhil would not fund them because of some critical papers. Any arguments they shared with us were unconvincing. My own experience with people at OpenPhil (sorry to focus the conversation only on OpenPhil, obviously the broader conversation about funding should not only focus on them) in fact suggests the opposite. 

I want to make sure that the discussion does not unduly focus on FHI or CSER in particular. I think this has little to do with the organisations as a whole and more to do with individuals who sit somewhere in the EA hierarchy. We made the choice to protect the privacy of the people whose comments we speak of here. This is out of respect but also because I think the more interesting area of focus (and that which can be changed) is what role EA as a community plays in something like this happening. 

I would caution against centering the discussion only on the question of whether or not OpenPhil would reduce funding in response to criticism. Importantly, what happened here is that some EAs, who had influence over funding and research positions of more junior researchers in Xrisk, thought this was the case and acted on that assumption. I think it may well be the case that OpenPhil acts perfectly fine, while researchers lower in the funding hierarchy harmfully act as if OpenPhil would not act well. How can that be prevented?

To clarify: all the reviewers we approached gave critical feedback, and we incorporated it and responded to it as we saw fit without feeling offended. But the only people who said the paper had a low academic standard, that it was 'lacking in rigor' or that it wasn't 'loving enough', were EAs who were emotionally affected by reading the paper. My point here is that in an idealised, objective review process it would be totally fine to hear that we do not meet some academic standard. But my sense was that these comments were not actually about academic rigor, but about community membership and friendship. This is understandable, but it's not surprising that a mixture of power, community and research can produce biased scholarship.

Very happy to have a private chat Aaron!

Replies from: Charles He, willbradshaw
comment by Charles He · 2021-12-29T19:19:03.050Z · EA(p) · GW(p)

It's hard to see what is going on, and this is producing a lot of heat and speculation. I want to present an account, or framing, below, and see whether it matches your beliefs and experiences.

I should point out up front that the account below isn't favorable to you, but I don't have any further deliberate goal, and I have little knowledge in this space.

Firstly, to try to reduce the heat, I will change the situation to the cause area of global health:

For background, note that Daron Acemoglu, who is really formidable, has criticized EA global health and development.

Basically, Acemoglu believes the randomista approaches used in EA could be net negative for human welfare because they supplant health institutions, reduce the functionality of the state, and slow economic development. The effects are also hard to measure. I don't agree, and most EAs don't agree.

The account begins: imagine that, with dramatically increased funding, GiveWell expands and hires a bunch of researchers. GiveWell is more confident and hires less orthodox researchers who seem passionate and talented.

One year later, the very first paper of one of the newly hired researchers takes a strongly negative view of the randomista approach and directly criticizes GiveWell's work.

The paper says EA global health and development is misguided and gives plausible reasons, but these closely follow Acemoglu and randomista critics. The paper makes statements that many aligned EAs find disagreeable, such as saying AMF’s work is unmeasurable. There are also direct criticisms of senior EAs that seem uncharitable. 

However, there isn’t a lot of original research or claims in the paper. Also, while not stated, the paper implies the need to restructure and change the fundamental work of GiveWell, including deleting several major programs.

Accompanying the paper, the new researcher also states that they had very negative experiences when pushing out the paper, including being heavily pressured to self-censor. They state that people suggested they had bad intentions and weak scholarship, and that future funding might be pulled.

They state this too on the EA Forum.

Publicly, we never hear any more substantive details about the above. This is because people don’t want to commit to writing when it’s easy to misrepresent the facts on either side, and certain claims make the benign appeal to authority and norms unworkable.

However, the truth of what happened is prosaic

When the researcher was getting reviews, peer and senior EAs in the space pointed out that the researcher joined GiveWell knowing full well its mission and approaches, and that their paper seemed mainly political, simply drawing in and recasting existing outside arguments. Given this, some explicitly questioned the intent and motivation of the researcher.

The director of GiveWell research hired the researcher because the director herself wanted to push the frontier of EA global health and development into new policy approaches, maybe even make inroads with people doing the kind of work advanced by Acemoglu. Now, her newly hired researcher seems to be a wild activist. It is a nightmare communicating with them. Frustrated, the director loses sleep and doubts herself: was this her fault and incompetence?

The director knows that saying things to the researcher that seem true to her, like that they seem unable to do original research, have no value alignment with EA approaches, or have no future at GiveWell, could make her a lifelong enemy.

The director is also unwilling or unable to be a domineering boss over an underling. 

So the director punts by saying that GiveWell's funding is dependent on executing its current mission, and that papers directly undermining that mission will undermine funding too.

This all happens over many meetings and days, where both sides are heated and highly emotional, and many things are said.

The researcher is a true believer against randomista approaches, thinks that millions of lives are at stake, and definitely doesn't think they are unaligned (it is GiveWell that is). The researcher views all of the above as hostile, a reaction of the establishment.

Question: Do you find my account above plausible, or unfair and wildly distorted? Can you give any details or characterizations of how it differs?

Replies from: None
comment by [deleted] · 2022-04-30T00:16:57.865Z · EA(p) · GW(p)

What on Earth is this thinly-veiled attempt at character assassination? Do you actually have any substantive evidence that your “account” is accurate (your disclaimer at the start suggests not), or are you just fishing for a reaction?

Honestly, what did you hope to gain from this? You think this researcher is just gonna respond and say “Yep, you’re right, I’m ill-fit for my job and incapable of good academic work!” “Not favorable to you” is the understatement of the century. Not to mention your change of the area concerned in no way lowers the temperature. It just functions as CYA to avoid making forthright accusations that this researcher’s actual boss might then be called upon to publicly refute. This is one of the slimiest posts I’ve ever seen, to be perfectly honest.

Edit: I would love to see anyone who has downvoted this post explain why they think the above is defensible and how they’d react if someone did it to them.

Replies from: Charles He
comment by Charles He · 2022-04-30T00:40:04.169Z · EA(p) · GW(p)

Nah

Replies from: None
comment by [deleted] · 2022-04-30T00:58:06.165Z · EA(p) · GW(p)

Nah what? Nah you don’t have any evidence? That would confirm my prior.

Now why don’t you explain what you hoped to get out of that comment besides being grossly insulting to someone you don’t know on no evidential basis.

Replies from: Charles He
comment by Charles He · 2022-04-30T01:08:09.571Z · EA(p) · GW(p)

I don't agree with your comment on its merits, or with the practice of confronting people this way with an anonymous throwaway.

(It's unclear; it may be worth thinking about the downstream effects of this attitude.) But it seems justifiable that throwaways which open this sort of debate (its quality being a matter of perspective we won't agree on) can be treated really dismissively.

If you want, you can write with your real name (or PM me) and I will respond, if that's what you really want. 

Also, the downvote(s) on your comment(s) are mine.

Replies from: None, Linch
comment by [deleted] · 2022-04-30T01:22:16.543Z · EA(p) · GW(p)

I would be more worried about making comments of the kind that you produced above under my real name. Your comment was full of highly negative accusations about a named poster’s professional life and academic capabilities, veiled under utterly transparent hypotheticals, made in public. You offered no evidence whatsoever for any of these accusations nor did you even attempt to justify why that sort of engagement was warranted. Airing such negative judgments publicly and about a named person is an extremely serious matter whether you have evidence or not. I don’t think that you treated it with a fraction of the seriousness it deserves.

I honestly have negative interest in telling you my real name after seeing how you treat others in public, much less making an account here with my real name attached to it. I would prefer to limit your ability to do reputational damage (to me or others) on spurious or non-existent grounds as far as possible. I am honestly extremely curious as to why you thought what you did above was remotely acceptable, but I am not willing to put myself in the line of fire to find out.

Replies from: Charles He
comment by Charles He · 2022-04-30T01:24:08.965Z · EA(p) · GW(p)

I think that you think I don't like your comments, but this isn't close to true.

I really hope you will put your real name so I can give a real response.

(I wouldn't share your name and generally wouldn't use PII if you PMed me.)

Replies from: None
comment by [deleted] · 2022-04-30T01:38:21.476Z · EA(p) · GW(p)

Well, thanks for that. Admittedly, the downvotes seemed like good evidence to the contrary.

Unfortunately, I also couldn’t really give you my real name even if I wanted to, because the name of this account shares the name of my online persona elsewhere and I place a very high premium on anonymity. If I had thought to give it a different name, then I’d probably just PM you my real name. But I didn’t think that far ahead.

Anyway, whatever else may be, I’m sorry that I came in so hot. Sometimes I just see something that really sets me off and I consequently approach things too aggressively for my own (and others’) good.

Replies from: Charles He
comment by Charles He · 2022-05-03T15:37:36.466Z · EA(p) · GW(p)

Many of the comments in this comment chain, including the original narrative I wrote [EA(p) · GW(p)] (which I view as closer to reality than the implicit narrative in the OP, which seems highly motivated and which I find contradicted by the OP's subsequent comments), have been visited by what is likely a single person, who has made strong downvotes and strong upvotes of magnitude 9.

 

So probably a single person has come in and used a strong upvote or downvote of magnitude 9.

While I am totally petty and vain, I don't usually comment on upvotes or downvotes, because it seems sort of unseemly (unless it is hilarious to do so).

In this case, because of the way strong upvotes are designed, there appear to be literally only four accounts that could have this ability, and their judgement is well respected.


So I address you directly: If you have information about this, especially object level information about the underlying situation relative to my original narrative,  it would be great to discuss.

The underlying motivation is that truth is a thing, and in some sense having the recent commenter come in and stir this up was useful.

Replies from: Charles He
comment by Charles He · 2022-05-03T15:37:59.594Z · EA(p) · GW(p)

In an even deeper sense, as we all agree, EA isn't a social club for people who got here first. EA doesn't belong to you or me, or even a large subset of the original founders (to be clear, for all intents and purposes, all reasonable paths will include their leadership and guidance for a long time).

Importantly, I think some good operationalizations of the above perspective, combined with awareness of all the possible instantiations of EA and of the composition of people and attitudes, would rationalize a different tone and culture than the ones that exist.

So, RE: "I would be more worried about making comments of the kind that you produced above under my real name." I think this could be exactly, perfectly, the opposite of what is true, yet it is one of the comments you strong-upvoted.


To be even more direct: I suspect, but am unsure, that the culture of discussion in EA has accumulated defects that are costly to effectiveness and truth (under the direct tenure, by the way, of one of the four people who could have voted +/-9).

So the most important topic here might not be about the OP at all, which I view as just one instance of an ongoing issue—in a deep sense, it was really about the very person who came in and strong voted!

I'm not sure you see this (or that I see this fully either). 

From the very beginning, I specifically constructed this account and persona to interrogate whether this is true, or something.

Replies from: Charles He
comment by Charles He · 2022-05-03T16:27:29.966Z · EA(p) · GW(p)

Circling back to the original topic: the above perspective, the related hysteresis, and the consequent effects imply that the existence of my narrative in this thread, or of me, should be challenged or removed if it's wrong.

But I can't really elaborate on my narrative [EA(p) · GW(p)]. I can't defend myself, because doing so slags the OP, which isn't appropriate and opens wounds, and that is unfair and harmful to everyone involved (but I sort of hoped the new commenter was the OP or a friend, which would have waived this concern, and that's why I wanted their identity).

But you, the strong downvoter/upvoter, +9 dude, this is a really promising line of discussion. So come and reply? 

comment by Linch · 2022-04-30T04:18:22.222Z · EA(p) · GW(p)

I think it's reasonable to not want to respond to an anonymous throwaway, but not reasonable to ask them to PM you their real name.

Replies from: Charles He
comment by Charles He · 2022-04-30T08:07:43.453Z · EA(p) · GW(p)

So, there is some normal sense in which I might have a reason to want them to "legitimize" their criticism by identifying themselves (this reason is debatable; it could be weak or very strong).


But the first comments from this person aren't just vitriolic and a personal attack; they are adamant demands for a significant amount of writing. They disagree greatly with me, so the explanation needed to bridge that gap in opinion could be very long.

The content of this writing has consequences, which are hidden from people without the explanation.

Here, I have special additional reasons to want to know their identity, because the best way to communicate the underlying events and what my comment meant depends on who they are.

Some explanations or accounts will be inflammatory, and others useless. For example, the person could be entirely new to EA, or be the OP themselves. Certain explanations, justifications, or "evidence" could be hurtful and stir up wounds. Others won't make sense at all.


In this situation, it's reasonable to see that the commenter's demands impose an additional burden on me, hidden from them: just to defend my comment, I have to weigh this potential harm. Separately, I view this as particularly unfair because, from my perspective, I think the very reason/issue why I commented and why things are so problematic/sensitive, was because the original environment around the post was inflammatory and hard to approach by design.

Replies from: Linch
comment by Linch · 2022-04-30T13:13:37.188Z · EA(p) · GW(p)

Hmm I think I have some different ideas about discussion norms but not sure if I understand them coherently myself/think it's worth going into. I agree it's often worthwhile to not engage.

I think the very reason/issue why I commented and why things are so problematic/sensitive, was because the original environment around the post was inflammatory and hard to approach by design.

I agree with this. 

comment by Will Bradshaw (willbradshaw) · 2021-12-29T11:44:01.345Z · EA(p) · GW(p)

Thanks for writing this reply, I think this is an important clarification.

comment by Vanessa · 2021-12-30T10:58:43.698Z · EA(p) · GW(p)

Points where I agree with the paper:

  • Utilitarianism is not any sort of objective truth, in many cases it is not even a good idea in practice (but in other cases it is).
  • The long-term future, while important, should not completely dominate decision making.
  • Slowing down progress is a valid approach to mitigating X-risk, at least in theory.

Points where I disagree with the paper:

  • The paper argues that "for others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent". I think it is completely clear, given that in pre-industrial times most people lived in societies that were rather unfree and unequal (harder to say about "virtue" since different people would argue for very different conceptions of what virtue is). Moreover, although intellectuals argued for all sorts of positions (words are cheap, after all), few people are trying to return to pre-industrial life in practice. Finally, techno-utopian visions of the future are usually very pro-freedom and are entirely consistent with groups of people voluntarily choosing to live in primitivist communes or whatever.
  • If ideas are promoted by an "elitist" minority, that doesn't automatically imply anything bad. Other commenters have justly pointed out that many ideas that are widely accepted today (e.g. gender equality, religious freedom, expanding suffrage) were initially promoted by elitist minorities. In practice, X-risk is dominated by a minority since they are the people who care most about X-risk. Nobody is silencing the voices of other people (maybe the authors would disagree, given their diatribe in this post, but I am skeptical).
  • "Democratization" is not always a good approach. Democratic decision processes are often dominated by tribal virtue-signaling (simulacrum levels 3/4), because from the perspective of every individual participant using their voice for signaling is much more impactful than using it for affecting the outcome (a sort of tragedy of the commons). I find that democracy is good for situations that are zero-sum-ish (dividing a pie), where abuse of power is a major concern, whereas for situations that are cooperative-ish (i.e. everyone's interests are aligned), it is much better to use meritocracy. That is, set up institutions that give more stage to good thinkers rather than giving an equal voice to everyone. X-risk seems much closer to the latter than to the former.
  • If some risk is more speculative, that doesn't mean we should necessarily allocate fewer resources to it. "Speculativeness" is a property of the map, not the territory. A speculative risk can kill you just as well as a non-speculative risk. The allocation of resources should be driven by object-level discussion, not by a meta-level appeal to "speculativeness" or "consensus".
  • Because, unfortunately, we do not have consensus among experts about AI risk, talking about moratoria on AI seems impractical.  With time, we might be able to build such a consensus and then go for a moratorium, although it is also possible we don't have enough time for this.
  • This is a relatively minor point, but there is some tension between the authors' call to stop the development of dangerous technology and their strong rejection of the idea of government surveillance. Clearly, imposing a moratorium on research requires some infringement on personal freedoms. I understand the authors' argument as something like: early moratoria are better since they require less drastic measures. This is probably true but the tension should be acknowledged more.
comment by Larks · 2021-12-28T19:35:22.055Z · EA(p) · GW(p)

I enjoyed some of the discussion of emergency powers. It could be good to mention the response to COVID. Leaving to one side whether such policies were justified (they do seem to have saved many lives), country-wide lockdowns were surely one of the most illiberal policies enacted in history, and explicitly motivated by trying to address a global disaster. Outside of genocide and slavery, I struggle to think of many greater restrictions on individuals' freedom than confining essentially the entire population to semi house arrest. In many cases these rules were brought in under special emergency powers, and sometimes later determined to be illegal after judicial review. However, these policies were often extremely popular with the general population, so I'm not sure they fit the democracy-vs-illiberalism dichotomy the article is sort of going for.

comment by RAB · 2021-12-30T05:30:17.331Z · EA(p) · GW(p)

EDIT: See Ben's comment in the thread below on his experience as Zoe's advisor and confidence in her good intentions.

(Opening disclaimer: this was written to express my honest thoughts, not to be maximally diplomatic. My response is to the post, not the paper itself.)

I'd like to raise a point I haven't seen mentioned (though I'm sure it's somewhere in the comments). EA is a very high-trust environment, and has recently become a high-funding environment. That makes it a tempting target for less intellectually honest or pro-social actors.

If you just read through the post, every paragraph except the last two (and the first sentence) is mostly bravery claims (from SSC's "Against Bravery Debates"). This is a major red flag for me reading something on the internet about a community I know well. It's much easier to start an online discussion about how you're being silenced than to defend your key claims on the merits.  Smaller red flags were: explicit warnings of impending harms if the critique is not heeded, and anonymous accounts posting mostly low-quality comments in support of the critique (shoutout to "AnonymousEA"). 

A lot of EAs have a natural tendency to defend someone who claims they're being silenced, and give their claims some deference to avoid being uncharitable. And it's pretty easy to exploit that tendency.

I don't know Zoe, and I don't want to throw accusations of exaggeration or malfeasance into the ring without cause. If these incidents occurred as described, the community should be extremely concerned. But on priors, I expect a lot of claims along these lines, i.e. "Please fund my research if you don't want to silence criticism" to come from a mix of unaligned academics hoping to do their own thing with LTist funding, and less scrupulous Phil Torres-style actors. 

Yes, I'm leaving myself more vulnerable to a world where LTist orgs do in fact silence criticism and nobody hears about it except from brave lone researchers. But I'd like to see more evidence in support of that case before everyone gets too worried. 

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2021-12-30T06:37:40.164Z · EA(p) · GW(p)

I believe these are authors already working at EA orgs, not "brave lone researchers" per se.

Replies from: RAB
comment by RAB · 2021-12-31T20:08:45.079Z · EA(p) · GW(p)

Thanks - I meant "lone" as in one or two researchers raising these concerns in isolation, not to say they were unaffiliated with an institution. 

I'm not familiar with Zoe's work, and would love to hear from anyone who has worked with them in the past. After seeing the red flags mentioned above,  and being stuck with only Zoe's word for their claims, anything from a named community member along the lines of "this person has done good research/has been intellectually honest" would be a big update for me.

And since I've stated my suspicions, I apologize to Zoe if their claims turn out to be substantiated. This is an extremely important post if true, although I remain skeptical.

In particular, a post of the form: 

I have written a paper (link). 

(12 paragraphs of bravery claims)

(1 paragraph on why EA is failing)

(1 paragraph call to action)

Strikes me as being motivated not by a desire to increase community understanding of an important issue, but rather to generate sympathy for the authors and support for their position by appealing to justice and fairness norms. The other explanation is that this was a very stressful experience, and the author was simply venting their frustrations. 

But I'd hope that authors publishing an important paper wouldn't use its announcement solely as an opportunity for venting, rather than a discussion of the paper and its claims. Whereas that choice makes sense if the goal is to create sympathy and marshal support without needing to defend your object-level argument.

Replies from: bmg
comment by Ben Garfinkel (bmg) · 2022-01-01T05:30:31.300Z · EA(p) · GW(p)

I'm not familiar with Zoe's work, and would love to hear from anyone who has worked with them in the past. After seeing the red flags mentioned above,  and being stuck with only Zoe's word for their claims, anything from a named community member along the lines of "this person has done good research/has been intellectually honest" would be a big update for me…. [The post] strikes me as being motivated not by a desire to increase community understanding of an important issue, but rather to generate sympathy for the authors and support for their position by appealing to justice and fairness norms. The other explanation is that this was a very stressful experience, and the author was simply venting their frustrations.

(Hopefully I'm not overstepping; I’m just reading this thread now and thought someone ought to reply.)

I’ve worked with Zoe and am happy to vouch for her intentions here; I’m sure others would be as well. I served as her advisor at FHI for a bit more than a year, and have now known her for a few years. Although I didn’t review this paper, and don’t have any detailed or first-hand knowledge of the reviewer discussions, I have also talked to her about this paper a few different times while she’s been working on it with Luke.

I’m very confident that this post reflects genuine concern/frustration; it would be a mistake to dismiss it as (e.g.) a strategy to attract funding or bias readers toward accepting the paper’s arguments. In general, I’m confident that Zoe genuinely cares about the health of the EA and existential risk communities and that her critiques have come from this perspective.

Replies from: RAB
comment by RAB · 2022-01-04T20:49:26.500Z · EA(p) · GW(p)

Thanks Ben! That's very helpful info. I'll edit the initial comment to reflect my lowered credence in exaggeration or malfeasance.

comment by Mauricio · 2021-12-29T12:27:02.988Z · EA(p) · GW(p)

Thanks for sharing this! Responding to just some parts of the object-level issues raised by the paper (I only read parts closely, so I might not have the full picture)--I find several parts of this pretty confusing or unintuitive:

  • Your first recommendation in your concluding paragraph is: "EA needs to diversify funding sources by breaking up big funding bodies." But of course "EA" per se can't do this; the only actors with the legal authority to break up these bodies (other than governments, which I'd guess would be uninterested) are these funding bodies themselves, i.e. mainly OpenPhil. Given the emphasis on democratization and moral uncertainty, it sounds like your first recommendation is a firm assertion that two people with lots of money should give away most of their money to other philanthropists who don't share their values, i.e. it's a recommendation that obviously won't be implemented (after all, who'd want to give influence to others who want to use it for different ends?). So unless I've misunderstood, this looks like there might be more interest in emphasizing bold recommendations than in emphasizing recommendations that stand a chance of getting implemented. And that seems at odds with your earlier recognition, which I really appreciate--that this is not a game. Have I missed something?
  • Much of the paper seems to assume that, for moral uncertainty reasons, it's bad for the existential risk research community to be unrepresentative of the wider world, especially in its ethical views. I'm not sure this is a great response to moral uncertainty. My intuition would be that, under moral uncertainty, each worldview will do best (by its own lights) if it can disproportionately guide the aspects of the world it considers most important. This suggests that all worldviews will do best (by their own lights) if [total utilitarianism + strong longtermism + transhumanism]* retains over-representation in existential risk research (since this view cares about this niche field to an extremely unusual extent), while other ethical views retain their over-representation in the many, many other areas of the world that entirely lack these longtermists. These disproportionate influences just seem like different ethical communities specializing differently, to mutual benefit. (There's room to debate just how much these ethical views should concentrate their investments, but if the answer is not zero, then it's not the case that e.g. the field having "non-representative moral visions of the future" is a "daunting problem" for anyone.)

*I don't use your term "techno-utopian approach" because "utopian" has derogatory connotations, not to mention misleading/inaccurate connotations re: these researchers' typical levels of optimism regarding technology and the future.

Replies from: Mauricio, jchen1
comment by Mauricio · 2021-12-30T20:05:12.403Z · EA(p) · GW(p)

Other thoughts:

  • Some other comment hinted at this: another frame that I'm not sure this paper considers is that non-strong-longtermist views are in one sense very undemocratic--they drastically prioritize the interests of very privileged current generations while leaving future generations disenfranchised, or at least greatly under-represented (if we assume there'll be many future people). So characterizing a field as being undemocratic due to having longtermism over-represented sounds a little like calling the military reconstruction that followed the US civil war (when the Union installed military governments in defeated Southern states to protect the rights of African Americans) undemocratic--yes, it's undemocratic in a sense, but there's also an important sense in which the alternative is painfully undemocratic.
    • How much we buy my argument here seems fairly dependent on how much we buy (strong) longtermism. It's intuitive to me that (here and elsewhere) we won't be able to fully answer "to what extent should certain views be represented in this field?" without dealing with the object-level question "to what extent are these views right"? The paper seems to try to side-step this, which seems reasonably pragmatic but also limited in some ways.
    • I think there's a similarly plausible case for non-total-utilitarian views being in a sense undemocratic; they tend to not give everyone equal decision-making weight. So there's also a sense in which seemingly fair representation of these other views is non-democratic.
      • As a tangent, this seems closely related to how a classic criticism of utilitarianism--that it might trample on the few for the well-being of a majority--is also an old criticism of democracy (which is a little funny, since the paper both raises these worries with utilitarianism and gladly takes democracy on board, although that might be defensible.)
  • One thing I appreciate about the paper is how it points out that the ethically loaded definitions of "existential risk" make the scope of the field dependent on ethical assumptions--that helped clarify my thinking on this.
comment by jchen1 · 2022-02-05T02:28:59.975Z · EA(p) · GW(p)

Re your second point, a counter would be that the implementation of recommendations arising from ERS will often have impacts on the population around at the time of implementation, and the larger those impacts are the less possible specialization seems. E.g. if total utilitarians/longtermists were considering seriously pursuing the implementation of global governance/ubiquitous surveillance, this might risk such a significant loss of value to non-utilitarian non-longtermists that it's not clear total utilitarians/longtermists should be left to dominate the debate.

Replies from: Mauricio
comment by Mauricio · 2022-02-05T06:06:11.904Z · EA(p) · GW(p)

I mostly agree. I'm not sure I see how that's a counter to my second point though. My second point was just that (contrary to what the paper seems to assume) some amount of ethical non-representativeness is not in itself bad:

There's room to debate just how much these ethical views should concentrate their investments, but if the answer is not zero, then it's not the case that e.g. the field having "non-representative moral visions of the future" is a "daunting problem" for anyone.

Also, if we're worried about implementation of large policy shifts (at least, if we're worried about this under "business as usual" politics), I think utilitarians/longtermists can't and won't actually dominate the debate, because policymaking processes in modern democracies by default engage a large and diverse set of stakeholders. (In other words, dominance in the internal debates of a niche research field won't translate into dominance of policymaking debates--especially when the policy in question would significantly affect many people.)

comment by PeterSlattery (Peterslattery) · 2021-12-29T00:43:47.577Z · EA(p) · GW(p)

Quick thoughts from my phone:

Thanks for writing this post, Carla and Luke. I am sorry to hear about your experiences, that sounds very challenging.

I also understand why people would object to your work, as many may have had high confidence in it having negative expected value.

It was surely a very difficult situation for all parties.

I am glad you are voicing concerns, I like posts like this.

At the same time, what occurred mostly sounded reasonable to me, even if it was unpleasant. Strong opinions were expressed, concerns were made salient, people may have been defensive or acted with some self-interest, but no one was forced to do anything. Now the paper and your comments are out, and we can read and react to them. I have heard much worse in other academic and professional settings.

I think that it's unavoidable that there will be a lot of strong disagreement in the EA community. It seems unavoidable in any group of diverse individuals who are passionately working together towards important goals. Of course, we should try to handle conflict well, but we shouldn't expect that it can ever be avoided or be completely pleasant.

I also understand why people don't express criticism publicly, both in EA and outside. It's probably not ideal, but it's a pretty reasonable failing for a community to have some sacred values/views and for people in that community to be hesitant about challenging them. It's something I'd like to see improve, but not something that I see as a major issue in the EA community. I seek out criticism of EA all the time and then read it when I find it and update. I have had much worse experience in other communities.

Finally, I read the paper. Thank you for caring enough about us and our future to write and publish it. I look forward to seeing the response and having further updates.

[Edit: I want to communicate that I am uncertain in the views I expressed above. I would welcome push back - maybe I am missing something.]

Replies from: freedomandutility, jsteinhardt
comment by freedomandutility · 2021-12-29T03:17:34.638Z · EA(p) · GW(p)

I agree with most of what you say other than it being reasonable for some people to have acted in self-interest. 

While I do think it is unavoidable that there will be attempts to shut down certain ideas and arguments out of the self-interest of some EAs, I think it's important that we have a very low tolerance of this.

Replies from: Peterslattery
comment by PeterSlattery (Peterslattery) · 2021-12-29T03:46:27.857Z · EA(p) · GW(p)

Thanks for commenting :)

I agree with most of what you say other than it being reasonable for some people to have acted in self-interest. 

I intended to present the self-interest part as bad, sorry.

While I do think it is unavoidable that there will be attempts to shut down certain ideas and arguments out of the self-interest of some EAs, I think it's important that we have a very low tolerance of this.


I agree, but I don't see this as 'shutting down' arguments. Can I just check that I am not misreading what happened?

We were told by some that our critique is invalid because the community is already very cognitively diverse and in fact welcomes criticism. They also told us that there is no TUA, and if the approach does exist then it certainly isn’t dominant.  It was these same people that then tried to prevent this paper from being published. They did so largely out of fear that publishing might offend key funders who are aligned with the TUA. 

These individuals—often senior scholars within the field—told us in private that they were concerned that any critique of central figures in EA would result in an inability to secure funding from EA sources, such as OpenPhilanthropy. We don't know if these concerns are warranted. Nonetheless, any field that operates under such a chilling effect is neither free nor fair. Having a handful of wealthy donors and their advisors dictate the evolution of an entire field is bad epistemics at best and corruption at worst. 

The greatest predictor of how negatively a reviewer would react to the paper was their personal identification with EA. Writing a critical piece should not incur negative consequences on one’s career options, personal life, and social connections in a community that is supposedly great at inviting and accepting criticism.

My interpretation of this was that they were strongly discouraged from publishing the paper by people who disagreed with what the paper was claiming (who may or may not also have been self-interested in maintaining their funding) and/or predicted negative outcomes from the work.

They were still able to publish the paper and participate in the community. No-one was 'shut down' in the sense that someone forced them not to publish it (though they may have strongly advised against it). Is this correct? Maybe I misunderstand what "prevent this paper from being published" actually entailed.

Replies from: freedomandutility
comment by freedomandutility · 2021-12-29T03:56:15.545Z · EA(p) · GW(p)

Ah okay.

I think I interpreted this as ‘pressure’ to not publish, and my definition of ‘shutting down ideas’ includes pressure / strong advice against publishing them, while yours is restricted to forcing people not to publish them.

comment by jsteinhardt · 2021-12-29T22:27:12.043Z · EA(p) · GW(p)

At the same time, what occurred mostly sounded reasonable to me, even if it was unpleasant. Strong opinions were expressed, concerns were made salient, people may have been defensive or acted with some self-interest, but no one was forced to do anything. Now the paper and your comments are out, and we can read and react to them. I have heard much worse in other academic and professional settings.


I don't think "the work got published, so the censorship couldn't have been that bad" really makes sense as a reaction to claims of censorship. You won't see work that doesn't get published, so this is basically a catch-22 (either it gets published, in which cases there isn't censorship, or it doesn't get published, in which case no one ever hears about it).

Also, most censorship is soft rather than hard, and comes via chilling effects.

(I'm not intending this response to make any further object-level claims about the current situation, just that the quoted argument is not a good argument.)

comment by AlasdairGives · 2021-12-29T01:21:24.855Z · EA(p) · GW(p)

I think it is disappointing that so many comments are focusing on arguing with the paper rather than discussing the challenges outlined in the post. From a very quick reading I don't find any of the comments here unreasonable, but I do find them to be talking about a different topic. It would be better if we could separate the discussion of "red teaming" EA from the discussion of this particular paper.

Replies from: Charles He, Khorton
comment by Charles He · 2021-12-29T01:55:26.550Z · EA(p) · GW(p)

The paper is very well written, crisp and communicates its points very well.

The paper includes characterizations of longtermists that seem schematic and that many would find unfair.

In the post itself, there are serious statements that add a lot of heat to the issue and are hard to approach.

I think that this is a difficult time in which many people are getting or staying away, or performing emotional labor, in response to what were genuinely difficult experiences for the OP.

This isn't ideal for truthseeking. 

If I were in a different cause area with a similar issue, I wouldn't want a lot of longtermists coming in and pulling on these threads; I don't think that is the ideal or right thing to do.

comment by Kirsten (Khorton) · 2021-12-29T01:29:15.945Z · EA(p) · GW(p)

Interesting, I was thinking the opposite! I was thinking, "There's so many interesting specific suggestions in this paper and people are just caught up on whether or not they like diversity initiatives generally and what they think of the tone on this paper, how annoying."

Replies from: AlasdairGives
comment by AlasdairGives · 2021-12-29T01:50:19.655Z · EA(p) · GW(p)

I just mean this could have been two posts - one about the paper and one about the experience of publishing the paper. Both would be very valuable.

Replies from: willbradshaw, Halstead, Khorton
comment by Will Bradshaw (willbradshaw) · 2021-12-29T08:05:12.185Z · EA(p) · GW(p)

I agree it would have been better to have this as two posts – I'm personally finding it difficult to respond to either the paper or the post, because when I focus on one I feel like I'm ignoring important points in the other.

That said, the fact that both are being discussed in a single post is down to the authors, not the commenters. I think it's reasonable for any given commenter to focus on one without justifying why they're neglecting the other.

comment by John G. Halstead (Halstead) · 2021-12-29T21:03:28.528Z · EA(p) · GW(p)

Yeah, I agree. I disagree with most of the paper, but I find the claims about pressures not to publish criticism troubling.

comment by Kirsten (Khorton) · 2021-12-29T02:39:15.849Z · EA(p) · GW(p)

Completely agree!

comment by Nathan Young (nathan) · 2021-12-28T23:17:48.882Z · EA(p) · GW(p)

How do we solve this?

These individuals—often senior scholars within the field—told us in private that they were concerned that any critique of central figures in EA would result in an inability to secure funding from EA sources, such as OpenPhilanthropy. We don't know if these concerns are warranted. Nonetheless, any field that operates under such a chilling effect is neither free nor fair. Having a handful of wealthy donors and their advisors dictate the evolution of an entire field is bad epistemics at best and corruption at worst. 

If I imagine myself dependent on the funding of someone, that would change my behaviour. Anyone have any ideas of how to get around this? 

- Tenure is the standard academic approach, but does that lead to better work overall?
- A wider set of funders who will fund work even if it attacks the other funders?
- OpenPhil making a statement to fund high quality work they disagree with
- Some kind of way to anonymously survey EA academics to get a sense of whether there is a point that everyone thinks but is too scared to say
- Some kind of prediction market on views that are likely to be found to be wrong in the future.

Replies from: John_Maxwell_IV, finm
comment by John_Maxwell (John_Maxwell_IV) · 2021-12-29T14:40:12.862Z · EA(p) · GW(p)

I think offering financial incentives specifically for red teaming makes sense. I tend to think red teaming is systematically undersupplied because people are concerned (often correctly in my experience with EA) that it will cost them social capital, and financial capital can offset that.

I'm a fan of the CEEALAR funding model -- giving small amounts to dedicated EAs, with less scrutiny and less prestige distribution. IMO it is less incentive-distorting than more popular EA funding models.

comment by finm · 2021-12-29T10:08:46.202Z · EA(p) · GW(p)

Most of these ideas sound interesting to me. However —

- OpenPhil making a statement to fund high quality work they disagree with

I'm not quite sure what this means? I'm reading it as "funding work which looks set to make good progress on a goal OP don't believe is especially important, or even net bad".  And that doesn't seem right to me.

Similar ideas that could be good —

  • OP/other grantmakers clarifying that they will consider funding you on equal terms even if you've publicly criticised OP/that grantmaker
  • More funding for thoughtful criticisms of effective altruism and longtermism (theory and practice)

I'm especially keen on the latter!

Replies from: Kerkko Pelttari
comment by Kerkko Pelttari · 2021-12-29T11:49:16.604Z · EA(p) · GW(p)

Perhaps a general willingness to commit X% of funding to criticism of areas which are heavily funded by an EA-aligned funding organization could work as a general heuristic for enabling the second idea.

 

(e.g. if "pro current X-risk" research in general gets N funding, then some % of N would be made available for "critical work" in the same area. But in science it can sometimes be hard to even say which is critical work and which is work that builds on top of existing work.)

Replies from: finm
comment by finm · 2021-12-29T13:21:01.618Z · EA(p) · GW(p)

Sounds good. At the more granular and practical end, this sounds like red-teaming, which is often just good practice.

comment by EdoArad (edoarad) · 2021-12-28T16:42:48.756Z · EA(p) · GW(p)

Strong upvote, especially to signal my support of

These individuals—often senior scholars within the field—told us in private that they were concerned that any critique of central figures in EA would result in an inability to secure funding from EA sources, such as OpenPhilanthropy. We don't know if these concerns are warranted. Nonetheless, any field that operates under such a chilling effect is neither free nor fair. Having a handful of wealthy donors and their advisors dictate the evolution of an entire field is bad epistemics at best and corruption at worst. 

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2021-12-28T22:23:15.581Z · EA(p) · GW(p)

Maybe my models are off but I find it hard to believe that anyone actually said that. Are we sure people said "Please don't criticize central figures in EA because it may lead to an inability to secure EA funding?" 

That sounds to me like a thing only cartoon villains would say. 

Replies from: oagr, Joey, CarlaZoeC, Davidmanheim, anonymousEA
comment by Ozzie Gooen (oagr) · 2021-12-28T23:41:56.348Z · EA(p) · GW(p)

I might be able to provide a bit of context:

I think the devil is really in the details here. I think there are some reasonable versions of this. 

The big question is why and how you're criticizing people, and what that reveals about your beliefs (and what those beliefs are).

As an extreme example, imagine if a trusted researcher came out publicly, saying,
"EA is a danger to humanity because it's stopping us from getting to AGI very quickly, and we need to raise as much public pressure against EA as possible, as quickly as possible. We need to shut EA down."

If I were a funder, and I were funding researchers, I'd be hesitant to fund researchers who both believed that and were taking intense action accordingly. Like, they might be directly fighting against my interests.

It's possible to use criticism to improve a field or try to destroy it.

I'm a big fan of positive criticism, but think that some kinds of criticism can be destructive (see a lot of politics, for example)

I know less about this certain circumstance, I'm just pointing out how the other side would see it.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2021-12-29T09:21:14.792Z · EA(p) · GW(p)

This is all reasonable but none of your comment addresses the part where I'm confused. I'm confused about someone saying something that's either literally the following sentence, or identical in meaning to: 

"Please don't criticize central figures in EA because it may lead to an inability to secure EA funding." 

If I were a funder, and I were funding researchers, I'd be hesitant to fund researchers who both believed that and were taking intense action accordingly. Like, they might be directly fighting against my interests.


That part of the example makes sense to me. What I don't understand is the following:

In your example, imagine you're a friend, colleague, or an acquaintance of that researcher who considers publishing their draft about how EA needs to be stopped because it's slowing down AGI. What do you tell them? It seems like telling them "The reason you shouldn't publish this piece is that you [or "we," in case you're affiliated with them] might no longer get any funding" is a strange non sequitur. If you think they're right about their claim, it's really important to publish the article anyway. If you think they're wrong, there are still arguments in favor of discussing criticism openly, but also arguments against confidently advocating drastic measures unilaterally and based on brittle arguments. If you thought the article was likely to do damage, the intrinsic damage is probably larger than no longer getting funding? 

I can totally see EAs advocating against the publication of certain articles that they think are needlessly incendiary and mostly wrong, too uncharitable, or unilateral and too strongly worded. I don't share those concerns personally (I think open discussion is almost always best), but I can see other people caring about those things more strongly. I was thrown off by the idea that people would mention funding as the decisive consideration against publication. I still feel confused about this, but now I'm curious.

 

comment by Joey · 2021-12-29T01:35:50.311Z · EA(p) · GW(p)

"Please don't criticize central figures in EA because it may lead to an inability to secure EA funding?" I have heard this multiple times from different sources in EA. 

Replies from: Halstead, Evan_Gaensbauer, Andre_C
comment by John G. Halstead (Halstead) · 2021-12-29T19:38:42.162Z · EA(p) · GW(p)

This is interesting if true. With respect to this paper in particular, I don't really get why anyone would advise the authors not to publish it. It doesn't seem like it would affect CSER's funding, since as I understand it (maybe I'm wrong) they don't get much EA money and it's hard to see how it would affect FHI's funding situation. The critiques don't seem to me to be overly personal, so it's difficult to see why publishing it would be overly risky. 

Replies from: Khorton
comment by Kirsten (Khorton) · 2021-12-30T00:57:05.498Z · EA(p) · GW(p)

Why "if true"? Why would Joey misrepresent his own experiences?

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-30T08:49:08.843Z · EA(p) · GW(p)

yeah fair i didn't mean it like that

comment by Evan_Gaensbauer · 2022-01-15T10:07:14.277Z · EA(p) · GW(p)

Strongly upvoted, and me too. Which sources do you have in mind? We can compare lists if you like. I'd be willing to have that conversation in private but for the record I expect it'd be better to have it in public, even if you'd only be vague about it.

comment by Andre_C · 2022-01-14T15:48:04.997Z · EA(p) · GW(p)

I think the rationale behind making such a statement is less about specific funding for the individuals making it, and more about funding for the EA movement as a whole. It goes roughly: most of the funding EA has comes from a small number of high-net-worth individuals, who think donating to EA is a good idea because of their relationship with and trust in central figures in EA. By criticising those figures, you decrease the chance of those figures pulling more high-net-worth individuals to donate to EA. Hence, criticising central figures in EA is bad.

(Not saying that I agree with this line of reasoning, but it seems plausible to me that people would make such a statement because of this reasoning.)

comment by CarlaZoeC · 2021-12-28T22:58:17.236Z · EA(p) · GW(p)

Very happy to have a private chat and tell you about our experience then. 

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2021-12-28T23:30:51.613Z · EA(p) · GW(p)

I'm curious about this and would be happy to hear more about it if you're comfortable sharing. I'll get in touch (and would make sure to read the full article before maybe chatting)! 

comment by Davidmanheim · 2021-12-29T08:14:29.476Z · EA(p) · GW(p)

I want to flag that "That sounds to me like a thing only cartoon villains would say." is absolutely contrary to discourse norms on the forum. I don't think it was said maliciously, but it's definitely not "kind," and it does not "approach disagreements with curiosity."

Edit: Clearly, I read this very differently than others, and given that, I'm happy to retract my claim that this was mean-spirited.

Replies from: Lukas_Gloor, aarongertler, MaxRa
comment by Lukas_Gloor · 2021-12-29T10:23:57.012Z · EA(p) · GW(p)

When I wrote my comment, I worried it would be unkind to Zoe because I'm also questioning her recollection of what people said.

Now that it looks like people did in fact say the thing exactly the way I quoted it (or identical to it in meaning and intent), my comment looks more unkind toward Zoe's critics.  

Edit: Knowing for sure that people actually said the comment, I obviously no longer think they must be cartoon villains. (But I remain confused.) 
 

Replies from: CarlaZoeC, Halstead
comment by CarlaZoeC · 2021-12-29T11:02:56.218Z · EA(p) · GW(p)

fwiw I was not offended at all. 

comment by John G. Halstead (Halstead) · 2021-12-29T14:06:49.232Z · EA(p) · GW(p)

I'm a bit lost, are you saying that the quotes you have seen were or were not as cartoon villainish as you thought?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2021-12-29T14:38:18.125Z · EA(p) · GW(p)

I haven't seen any quotes but Joey saying he had the same experience [EA(p) · GW(p)], Zoe confirming that she didn't misremember this part, and none of the reviewers speaking up saying "This isn't how things happened," made me update that maybe one or more people actually did say the thing I considered cartoonish.

And because people are never cartoon villains in real life, I'm now trying to understand what their real motivations were. 

For instance, one way I thought of how the comment could make sense is if someone brought it up because they are close to Zoe and care most about her future career and how she'll be doing, and they already happen to have a (for me very surprising) negative view of EA funders and are pessimistic about bringing about change. In that scenario, it makes sense to voice the concerns for Zoe's sake.

Initially, I simply assumed that the comment must be coming from the people who have strong objections to (parts of) Zoe's paper. And I was thinking "If you think the paper is really unfair, why not focus on that? Why express a concern about funding that only makes EA look even worse?"

So my new model is that the people who gave Zoe this sort of advice may not have been defending EA at all, but rather shared Zoe's criticisms or were, if anything, more pessimistic than Zoe. 

(I'm probably wrong about the above hypothesis, but then I'm back to being confused.) 

Replies from: Halstead, Telofy, Guy Raveh
comment by John G. Halstead (Halstead) · 2021-12-29T19:42:05.758Z · EA(p) · GW(p)

It might be useful to hear from the reviewers themselves as to the thought process here. As mentioned above, I don't really understand why anyone would advise the authors not to publish this. For comparison, I have published several critiques of the research of several Open Phil-funded EA orgs while working at an open phil-funded EA org. In my experience, I think if the  arguments are good, it doesn't really matter if you disagree with something Open Phil funds. Perhaps that is not true in this domain for some reason?

comment by Dawn Drescher (Telofy) · 2021-12-29T20:27:01.093Z · EA(p) · GW(p)

This is also how I interpreted the situation.

(In my words: Some reviewers like and support Zoe and Luke but are worried about the sustainability of their funding situation because of the model that these reviewers have of some big funders. So these reviewers are well-intentioned and supportive in their own way. I just hope that their worries are unwarranted.)

comment by Guy Raveh · 2021-12-29T15:57:42.754Z · EA(p) · GW(p)

I think a third hypothesis is that they really think funding whatever we are funding at the moment is more important than continuing to check whether we are right; and don't see the problems with this attitude (perhaps because the problem is more visible from a movement-wide, longterm perspective rather than an immediate local one?).

comment by Aaron Gertler (aarongertler) · 2021-12-29T10:02:10.869Z · EA(p) · GW(p)

As a moderator, I thought Lukas's comment was fine.

I read it as a humorous version of "this doesn't sound like something someone would say in those words", or "I cast doubt on this being the actual thing someone said, because people generally don't make threats that are this obvious/open".  

Reading between the lines, I saw the comment as "approaching a disagreement with curiosity" by implying a request for clarification or specification ("what did you actually hear someone say"?). Others seem to have read the same implication, though Lukas could have been clearer in the first place and I could be too charitable in my reading.

Compared to this comment [EA(p) · GW(p)], I thought Lukas's added something to the conversation (though the humor perhaps hurt more than helped).

*****

On a meta level, I upvoted David's comment because I appreciate people flagging things for potential moderation, though I wish more people would use the Report button attached to all comments and posts (which notifies all mods automatically, so we don't miss things).

comment by MaxRa · 2021-12-29T09:02:38.285Z · EA(p) · GW(p)

I appreciated Lukas' comment as I had the same reaction. The idea that somebody would utter this sentence and not cringe about having said something so obviously wrongheaded feels very off. I think adding something like "Hey, this specific claim would be almost shockingly surprising for my current models /gesturing at the reason why/" is a useful prompt/invitation for further discussion and not unkind or uncurious.

comment by anonymousEA · 2021-12-28T23:04:40.776Z · EA(p) · GW(p)

That sounds to me like a thing only cartoon villains would say. 

 

...oh dear

This community is entering a rough patch, I feel.

Replies from: aarongertler, Davidmanheim
comment by Aaron Gertler (aarongertler) · 2021-12-29T09:56:43.786Z · EA(p) · GW(p)

As a moderator, I agree with David that this comment doesn't abide by community norms. 

It's not a serious offense, because "oh dear" is a mild comment that isn't especially detrimental to a conversation on its own. But if a reply implies that a post or comment is representative of some bad trend, or that the author should feel bad/embarrassed about what they wrote, and doesn't actually say why, it adds a lot more heat than light.

comment by Davidmanheim · 2021-12-29T08:15:36.778Z · EA(p) · GW(p)

I commented that the above comment doesn't abide by community norms, but I don't think this comment does, either. 

Commenting guidelines:

  • Aim to explain, not persuade
  • Try to be clear, on-topic, and kind
  • Approach disagreements with curiosity
comment by Davidmanheim · 2021-12-28T21:37:28.437Z · EA(p) · GW(p)

Good for you!

I'm sad that this seemed necessary, and happy to see that despite some opposition, it was written and published. I sincerely hope that the cynics saying it could damage your credibility or careers are wrong, and that most of the criticisms are not as severe as they may seem - but if so, it's great that the issues are being pointed out, and if not, it's critical that they are.

comment by berglund (lukasberglund) · 2021-12-28T20:13:56.926Z · EA(p) · GW(p)

Thanks for writing this! It seems like you've gone through a lot in publishing this. I am glad you had the courage and grit to go through with it despite the backlash you faced. 

comment by Dawn Drescher (Telofy) · 2021-12-30T00:26:50.545Z · EA(p) · GW(p)

Sorry if this is a bit of a tangent but it seems possible to me to frame a lot of the ideas from the paper as wholly uncontroversial contributions to priorities research. In fact I remember a number of the ideas being raised in the spirit of contributions by various researchers over the years, for which they expected appreciation and kudos rather than penalty.

(By “un-/controversial” I mean socially un-/controversial, not intellectually. By socially controversial I mean the sort of thing that will lead some people to escalate from the level of a truth-seeking discussion to the level of interpersonal conflict.)

I think it’s more a matter of temperament than conviction that I prefer the contribution framing to a respectful critique. (By “respectful” I mean respecting the feelings, dignity, and individuality of the addressees, not authority/status. Such a respectful critique can be perfectly irreverent.) Both probably have various pros and cons in different contexts.

But one big advantage of the contribution framing seems to be that it makes the process of writing, review, and publishing a lot less stressful because it avoids antagonizing people – even though they ideally shouldn’t feel antagonized either way.

Another is evident in this comment section: The discussion is a wild mix of threads about community dynamics and actual object-level responses to the paper. Similarly I had trouble deciding whether to upvote or to strong-upvote the post: The paper touches on many topics, so naturally my thoughts about its object-level merits are all over the place. But I feel very strongly that, as community norms go, such respectful critiques are invaluable and ought to be strongly encouraged. Then again Simon Grimm’s suggestion to make a separate link-post for the object-level discussion would also address that.

But the critique framing seems to have the key advantage of a signal boost. I think Logan and Duncan observed that social posts generated a lot more engagement on Less Wrong than epistemic posts (which I haven’t tried to confirm), and Scott Alexander’s “Toxoplasma of Rage,” though a much more extreme case, seems to feed on similar dynamics. So maybe there are particular merits to the critique framing, at least when the content is so important that the community gains from the signal boost; it’s probably one of those powerful tools that ought to be handled with great care.

Are there heuristics for when a critique framing is warranted or even better than a contribution framing? Was it the correct choice to go for a critique framing in this case?

comment by Kirsten (Khorton) · 2021-12-29T10:49:30.299Z · EA(p) · GW(p)

A lot of these comments are at their heart debating technocracy vs populism in decision-making. A separate conversation on this topic has been started here: https://forum.effectivealtruism.org/posts/yrwTnMr8Dz86NW7L4/technocracy-vs-populism-including-thoughts-on-the [EA · GW]

comment by Raven · 2021-12-29T18:54:57.492Z · EA(p) · GW(p)

Thanks for sharing this, Zoe!

I think your piece is valuable as a summary of weaknesses in existing longtermist thinking, though I don't agree with all your points or the ways you frame them.

Things that would make me excited to read future work, and IMO would make that work stronger:

  • Providing more concrete suggestions for improvement. Criticism is valuable, but I'm aware of many of the weaknesses of our frameworks; what I'm really hungry for is further work on solving them. This probably requires focusing down to specific areas, rather than casting a wide net as you did for this summary paper. 
  • Engaging with the nuances of longtermist thinking on these subjects. For example, when you mention the importance of risk-factor assessment, I don't see much engagement with e.g. the risk factor / threat / vulnerability model, or with the paper on defense in depth against AI risk. Neither of these models are perfect, but I expect they both have useful things to offer.
    • I expect this links up with the above point. Starting from a viewpoint of what-can-I-build  encourages finding the strong points of prior work, rather than the weak points you focused on in this piece.
comment by berglund (lukasberglund) · 2021-12-28T20:03:21.432Z · EA(p) · GW(p)

What does TUA stand for?

Replies from: quinn, Patrick
comment by quinn · 2021-12-28T20:46:41.070Z · EA(p) · GW(p)

Techno-utopian approach (via paper abstract)

Replies from: lukasberglund
comment by Patrick · 2022-01-02T00:41:42.205Z · EA(p) · GW(p)

I would've found it helpful if the post included a definition of TUA (as well as saying what it stands for). Here's a relevant excerpt from the paper:

The TUA [techno-utopian approach] is a cluster of ideas which make up the original paradigm within which the field of ERS [existential-risk studies] was founded. We understand it to be primarily based on three main pillars of belief: transhumanism, total utilitarianism and strong longtermism. More precisely: (1) the belief that a maximally technologically developed future could contain (and is defined in terms of) enormous quantities of utilitarian intrinsic value, particularly due to more fulfilling posthuman modes of living; (2) the failure to fully realise or have capacity to realise this potential value would constitute an existential catastrophe; and, (3) we have an overwhelming moral obligation to ensure that such value is realised by avoiding an existential catastrophe, including through exceptional actions.

comment by Nathan Young (nathan) · 2021-12-28T23:13:34.583Z · EA(p) · GW(p)

How could we solve this?

Singer started the Journal of Controversial Ideas, which lets people publish under pseudonyms. 

https://journalofcontroversialideas.org/

Maybe more should try and publish criticisms there, or there could be funding for an EA specific journal with similar rules.

I guess there are problems with this suggestion, let me know what they are.

Replies from: finm
comment by finm · 2021-12-29T10:13:02.515Z · EA(p) · GW(p)

I like the idea of setting up a home for criticisms of EA/longtermism. Although I guess the EA Forum already exists as a natural place for anyone to post criticisms, even anonymously. So I guess the question is — what is the forum lacking? My tentative answer might be prestige / funding. Journals offer the first. The tricky question on the second is: who decides which criticisms get awarded? If it's just EAs, this would be disingenuous.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2021-12-29T15:06:16.728Z · EA(p) · GW(p)

I think people don't appreciate how much upvotes and especially downvotes can encourage conformity.

Suppose a forum user has drafted "Comment C", and they estimate a 90% chance that it will be upvoted to +4, and a 10% chance it will be downvoted to -1.

Do we want them to post the comment? I'd say we do -- if we take score as a proxy for utility, the expected utility is positive.

However, I submit that for most people, the 10% chance of being downvoted to -1 is much more salient in their mind -- the associated rejection/humiliation of -1 is a bigger social punishment than +4 is a social reward, and people take those silly "karma" numbers surprisingly seriously.

It seems to me that there are a lot of users on this forum who have almost no comments voted below 0, suggesting a revealed preference to leave things like "Comment C" unposted (or even worse, they don't think the thoughts that would lead to "Comment C" in the first place). People (including me) just don't seem very willing to be unpopular. And as a result, we aren't just losing stuff that would be voted to -1. We're losing stuff which people thought might be voted to -1.

(I also don't think karma is a great proxy for utility. People are more willing to find fault with & downvote comments that argue for unpopular views, but I'd say arguments for unpopular views have higher value-of-information and are therefore more valuable to post.)

In terms of solutions... downvoting less is an obvious one. I like how Hacker News hides comment scores. Another idea is to disable scores on a thread-specific basis, e.g. in shortform.
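
The asymmetry John describes can be made concrete with a quick expected-value sketch. The 90% / +4 / 10% / -1 numbers come from the comment above; the "felt cost" of a downvote is an illustrative assumption of mine, not something the comment specifies:

```python
# Expected karma of posting "Comment C", using the probabilities above.
p_up, p_down = 0.9, 0.1
up_score, down_score = 4, -1

# Taking score as a proxy for utility, the expected value is clearly positive.
expected_karma = p_up * up_score + p_down * down_score  # ~3.5

# But if a downvote *feels* much worse than its karma value (an assumed
# social cost of -40 here, purely for illustration), the perceived value
# of posting flips negative -- so "Comment C" never gets posted.
felt_down_cost = -40
perceived_value = p_up * up_score + p_down * felt_down_cost  # ~-0.4

print(f"expected karma:  {expected_karma:.1f}")
print(f"perceived value: {perceived_value:.1f}")
```

The point of the sketch is just that a modest probability of a small visible loss can dominate the decision even when the expected score is solidly positive, once loss aversion is factored in.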

comment by Mahdi Complex · 2021-12-28T16:42:29.329Z · EA(p) · GW(p)

We can’t afford to wait for a “Long Reflection”.

Alternatively, the "Long Reflection" has already begun, it's just not very evenly distributed. And humanity has a lot of things to hash out.

comment by BrownHairedEevee (evelynciara) · 2021-12-29T07:02:51.920Z · EA(p) · GW(p)

At an object level, I appreciate this statement on page 15:

If collective resources (such as research funding and public attention) are to be allocated to the highest priority risk, then ERS should attempt to find a more evidence-based, replicable prioritisation procedure.

At a meta level, thank you for your bravery and persistence in publishing this paper. I've added some tags to this post, including Criticism of the effective altruism community [? · GW].

comment by Matthew_Barnett · 2021-12-30T03:28:30.307Z · EA(p) · GW(p)

I'm happy with more critiques of total utilitarianism here. :) 

For what it's worth, I think there are a lot of people unsatisfied with total utilitarianism within the EA community. In my anecdotal experience, many longtermists (including myself) are suffering focused. This often takes the form of negative utilitarianism, but other variants of suffering focused ethics exist.

I may have missed it, but I didn't see any part of the paper that explicitly addressed suffering-focused longtermists. (One part mentioned, "Preventing existential risk is not primarily about preventing the suffering and termination of existing humans.").

I think you might be interested in the arguments made for caring about the long-term future from a suffering-focused perspective. The arguments for avoiding existential risk are translated into arguments for reducing s-risks

I also think that suffering-focused altruists are not especially vulnerable to your argument about moral pluralism. In particular, what matters to me is not the values of humans who exist now but the values of everyone who will ever exist. A natural generalization of this principle is the idea that we should try to step on as few people's preferences as possible (with the preferences of animals and sentient AI included), which leads to a sort of negative preference utilitarianism.

comment by John G. Halstead (Halstead) · 2021-12-28T19:36:54.926Z · EA(p) · GW(p)

Clarificatory question - are you arguing here that stagnation at the current level of technology would be a good thing?

If so, there seem to be several problems with this. This seems like a very severe bound on human potential. Even in the richest countries in the world, most people work for around a third of their lives in jobs they find incredibly boring.

It also seems like this would expose us to indefinite risk from engineered pandemics. What do you make of that risk?

It also seems unlikely that climate change will be fixed without very strong technological progress in things like zero carbon fuels, energy storage etc.

Replies from: anonea2021
comment by anonea2021 · 2021-12-28T19:53:38.495Z · EA(p) · GW(p)

Consider that this might be coming from a techno-utopian perspective itself? We could very plausibly stop or at least delay climate change by drastically reducing the use of technology right now (COVID bought us a few months just by shutting down planes, although that has "recovered" now) and by focusing on rolling out existing technology. And there are (granted, fringe) political positions that argue industrialisation and maybe even agriculture was a mistake and critique the "civilisation" narrative (and no, not all of them are arguing to abandon medicine and live like cavemen; it's more nuanced than that).

 

I'm not saying you are "wrong", I'm saying that the instinct to judge  coming up with a magic technology to allow economic growth and the current state of life while fixing climate change as more likely than global coordination to use existing  technology in more sustainable ways feels techno-utopian to me. Technology causes problems? Just add more technology!

Replies from: Halstead, Pablo_Stafforini
comment by John G. Halstead (Halstead) · 2021-12-28T22:33:53.328Z · EA(p) · GW(p)

In my view, covid is a very  dramatic counter-example to the benefits of technological stagnation/degrowth for the climate. Millions of people died, billions of people were locked in their homes for months on end and travel was massively reduced. In spite of that, emissions in 2020 merely fell to 2011 levels. The climate challenge is to get to net zero emissions. A truly enormous humanitarian cataclysm would be required to make that happen without improved technology. 

On your last paragraph, the instinct you characterise as techno-utopian here just seems to me to be clearly correct. It just seems true that we are more likely to solve climate change by making better low carbon tech than by getting all countries to agree to stop all technological progress. Consider emissions from cars. Suppose for the sake of argument that electric cars were as advanced as they were ten years ago and were not going to improve. What, then, would be involved in getting car emissions to zero? On your approach, the only option seems to be for billions of people to give up their cars, with cars only accessible to people who can afford a $100k Tesla. That approach is obviously less likely to succeed than the 'techno-optimist' one of making electric cars better (which is the path we have taken, with significant success).

Replies from: Davidmanheim
comment by Davidmanheim · 2021-12-29T08:29:03.589Z · EA(p) · GW(p)

It just seems true that we are more likely to solve climate change by making better low carbon tech than by getting all countries to agree to stop all technological progress. 

That's obviously a false dilemma. Investing in better use of technology and in new technology is great, but it doesn't help without reforms that internalize the externalities of climate change. If we don't subsidize CCS or tax net carbon, we won't reduce atmospheric CO2 with technology alone (unless capturing carbon somehow becomes cheaper than leaving it in the air), and we'll end up with tons of additional warming.

Replies from: Halstead, Larks
comment by John G. Halstead (Halstead) · 2021-12-29T14:05:39.335Z · EA(p) · GW(p)

Hi David, I was arguing against this point:

"I'm saying that the instinct to judge  coming up with a magic technology to allow economic growth and the current state of life while fixing climate change as more likely than global coordination to use existing  technology in more sustainable ways feels techno-utopian to me."

So the author was saying that s/he thinks we are more likely to solve climate change by global coordination with zero technological progress than through continued economic growth and technological progress. I argued that this wasn't true. This isn't a false dichotomy; I was discussing the dichotomy explicitly made by the author in the first place.

My claim is that without technological progress in electricity, industry, and transport we are extremely unlikely to solve climate change, which is the point that Luke Kemp seems to disagree with.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-12-29T18:25:02.656Z · EA(p) · GW(p)

Ah. Yes, that makes sense. And it seems pretty clear that I don't disagree with you on the factual question of what is likely to work, but I also don't know what Luke thinks other than what he wrote in this paper, and I was confused about why it was being brought up.

comment by Larks · 2021-12-29T09:37:55.631Z · EA(p) · GW(p)

How is this a false dilemma?

  • Stop all technological progress
  • Advance low carbon technology

Technically it omits a third option (technological progress in areas other than low carbon technology) but it certainly seems to cover all the relevant possibilities to me. Whether we have carbon taxes and so on is a somewhat separate issue: Halstead is arguing that without technological progress, sufficiently high carbon taxes would be ruinously expensive. 

Replies from: Davidmanheim
comment by Davidmanheim · 2021-12-29T13:09:32.996Z · EA(p) · GW(p)

The presented dilemma omits the possibility that we allow technological progress in general while limiting the deployment of some technologies - like coal power plants and fossil-fuel cars. That's what makes it a false dilemma: it presupposes that the only alternative to advancing low-carbon technology is stopping all technology, which isn't so.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-29T14:17:32.996Z · EA(p) · GW(p)

But this is differential technological development, which the authors strongly reject. The author and commenter explicitly ask us to consider how well we would fare if we stopped technological progress entirely.

Replies from: Davidmanheim
comment by Davidmanheim · 2021-12-29T18:42:06.138Z · EA(p) · GW(p)

The authors don't reject differential technological development as much as they claim that no real case has been made for it in the relevant domains. Specifically, "why this is more tractable or effective than bans, moratoriums and other measures has not been fully explained and defended."

But that statement by the authors, and others I have found, aren't claims that all technological progress should be stopped. So I think this is a false dilemma. For example, their suggested approach matches the way the world has managed previous dangerous technologies like nuclear weapons and bioweapons: we ban use and testing, largely successfully. The idea they reject would be, I guess, to differentially prefer funding defense-dominant technology on the assumption that the use of nuclear weapons and bioweapons is inevitable and that, due to the technological completion hypothesis, the technology can't be stopped.

comment by Pablo (Pablo_Stafforini) · 2021-12-28T19:59:32.614Z · EA(p) · GW(p)

Technology causes problems? Just add more technology!

"It's more nuanced than that."

comment by Matt Boyd · 2022-01-06T08:54:39.630Z · EA(p) · GW(p)

Thanks Carla and Luke for a great paper. This is exactly the sort of antagonism that those not so deeply immersed in the xrisk literature can benefit from, because it surveys so much and highlights the dangers of a single core framework. Alternatives to the often esoteric and quasi-religious far-future speculations that seem to drive a lot of xrisk work are not always obvious to decision makers and that gap means that the field can be ignored as 'far fetched'. Democratisation is a critical component (along with apoliticisation). 

I must say that it was a bit of a surprise to me that TUA is seen as the paradigm approach to ERS. I've worked in this space for about 5-6 years and never really felt that I was drawn to strong-longtermism or transhumanism, or technological progress. ERS seems like the limiting case of ordinary risk studies to me. I've worked in healthcare quality and safety (risk to one person at a time), public health (risk to members of populations), and extinction risk just seems like the important and interesting limit of this. I concur with the calls for grounding in the literature of risk analysis, democracy, and pluralism. In fact, in peer-reviewed work I've previously called for citizen juries, public deliberation, and experimental philosophy in this space (here), and for apolitical, aggregative processes (here), as well as calling for better publicly facing national risk (and xrisk) communication and prioritisation tools (under review with Risk Analysis).

Some key points I appreciated or reflected on in your paper were: 

  1. The fact that empirical and normative assumptions are often masked by tools and frameworks.
  2. The distinction between extinction risk and existential risk.
  3. The questioning of total utilitarianism (I often prefer a maximin approach, also with consideration of important [not necessarily maximising] value obtained from honouring treaties, equity, etc.).
  4. I've never found the 'astronomical waste' claims hold up particularly well under certain resolutions of Fermi's paradox (basically I doubt the moral and empirical claims of TUA and strong longtermism, and yet I am fully committed to ERS).
  5. The point about equivocating over near-term nuclear war and billion-year stagnation.
  6. Clarity around Ord's 1 in 6 (extinction/existential) - I'm guilty of conflating this.
  7. I note that failing to mitigate 'mere' GCRs could also derail certain xrisk mitigation efforts.

Again, great work. This is a useful and important broad survey/stimulus; not every paper needs to take a single point and dive to its bottom. Well done.

comment by Guy Raveh · 2021-12-29T08:04:46.393Z · EA(p) · GW(p)

I haven't opened the paper yet - this is a reply to the content of the forum post.

Thank you for writing it. I completely agree with you: EA has to not only tolerate critics, but also encourage critical debate among its members.

Disabling healthy thought processes for fear of losing funding is disastrous, and calls into question the effectiveness of funding obtained this way.

I furthermore agree with all the changes you suggested the movement should make.

comment by Denise_Melchin · 2021-12-30T17:46:14.328Z · EA(p) · GW(p)

Is there a non-PDF version of the paper available? (e.g. html)

From skimming, a couple of the arguments seem to be the same ones I brought up here [EA · GW], so I'd like to read the paper in full - but knowing myself, I won't have the patience to get through a 35-page pdf.

comment by Kerkko Pelttari · 2021-12-29T11:43:22.606Z · EA(p) · GW(p)

I'm not affiliated with EA research organizations at all (I help run a local group in Finland and am looking at industry and other EA-affiliated career options more than research specifically).

However, I have had multiple discussions with fellow local EAs where it was deemed problematic that some x-risk papers are subject to quite "weak" standards of criticism relative to how much they often imply. Heartfelt thanks to you both for publishing and discussing this topic, and for starting a conversation on the important meta-topic of how EA research topics, funding decisions, and standards are set.

comment by anonymousEA · 2021-12-28T16:41:33.540Z · EA(p) · GW(p)

Thank you both from the bottom of my heart for writing this. I share many (but not all) of your views, but I don’t express them publicly because if I do my career will be over.

What you call the Techno-Utopian Approach is, for all intents and purposes, hegemonic within this field.

Newcomers (who are typically undergraduates not yet in their twenties) have the TUA presented to them as fact, through reading lists that aim to be educational but are in fact extremely philosophically, scientifically, and politically biased; when I showed a non-EA friend of mine a couple of examples, the first word out of their mouth was "indoctrination", and I struggle to substantively disagree.

These newcomers are then presented with access to billions of dollars in EA funding, on the unspoken (and for many EAs, I suspect honestly unknown) condition that they don't ask too many awkward questions.

I do not know everything about, ahem, recent events in multiple existential risk organisations, but it does not seem healthy. All the information I have points toward widespread emotional blackmail and quasi-censorship, and an attitude toward "unaligned" work that approaches full-on corruption.

Existential risk is too important to depend on the whims of a small handful of incredibly wealthy techbros, and the people who make this cause their mission should not have to fear what will happen to their livelihoods or personal lives if they publicly disagree with the views of the powerful.

We can't go on like this.

Replies from: anonymousEA, quinn
comment by anonymousEA · 2021-12-28T18:00:12.419Z · EA(p) · GW(p)

I'm genuinely not sure why I'm being downvoted here. What did I say?

Replies from: Stephen Clare, Guy Raveh, quinn
comment by Stephen Clare · 2021-12-28T19:59:14.360Z · EA(p) · GW(p)

I think it's because you're making strong claims without presenting any supporting evidence. I don't know what reading lists you're referring to; I have doubts about not asking questions being an 'unspoken condition' of getting access to funding; and I have no idea what you're conspiratorially alluding to regarding 'quasi-censorship' and 'emotional blackmail'.

Replies from: Lukas_Gloor, anonymousEA
comment by Lukas_Gloor · 2021-12-28T21:25:38.388Z · EA(p) · GW(p)

I also feel like the comment doesn't seem to engage much with the perspective it criticizes (in terms of trying to see things from that point of view). (I didn't downvote the OP myself.) 

When you criticize a group/movement for giving money to those who seem aligned with their mission, it seems relevant to acknowledge that it wouldn't make sense to not focus on this sort of alignment at all. There's an inevitable, tricky tradeoff between movement/aim dilution and too much insularity. It would be fair if you wanted to claim that EA longtermism is too far on one end of that spectrum, but it seems unfair to play up the bad connotations of taking actions that contribute to insularity, implying that there's something sinister about having selection criteria at all, without acknowledging that taking at least some such actions is part of the only sensible strategy.

I feel similar about the remark about "techbros." If you're able to work with rich people, wouldn't it be wasteful not to do it? It would be fair if you wanted to claim that the rich people in EA use their influence in ways that... what is even the claim here? That their idiosyncrasies end up having an outsized effect? That's probably going to happen in every situation where a rich person is passionate (and hands-on involved) about a cause – that doesn't mean that the movement around that cause therefore becomes morally problematic. Alternatively, if your claim is that rich people in EA engage in practices that are bad, that could be a fair thing to point out, but I'd want to learn about the specifics of the claim and why you think it's the case.

I'm also not a fan of most EA reading lists, but I'd say that EA longtermism addresses topics that until recently haven't gotten a lot of coverage, so the direct critiques are usually by people who know very little about longtermism. And "indirect critiques" don't exist as a crisp category. If you wanted to write a reading list section to balance out the epistemic insularity effects in EA, you'd have to do a lot of pretty difficult work of unearthing what those biases are and then seeking out the exact alternative points of view that usefully counterbalance them. It's not as easy as adding a bunch of texts by other political movements – that would be too random. Texts written by proponents of other intellectual movements contain important insights, but they're usually not directly applicable to EA. Someone has to do the difficult work first of figuring out where exactly EA longtermism benefits from insights from other fields. This isn't an impossible task, but it's not easy, as any field's intellectual maturation takes time (it's an iterative process). Reading lists don't start out as perfectly balanced. To summarize, it seems relevant to mention (again) that there are inherent challenges to writing balanced reading lists for young fields. The downvoted comment skips over that and dishes out a blanket criticism that one could probably level against any reading list of a young field.

Replies from: Guy Raveh
comment by Guy Raveh · 2021-12-29T08:15:50.414Z · EA(p) · GW(p)

If you're able to work with rich people, wouldn't it be wasteful not to do it? ... [T]heir idiosyncrasies end up having an outsized effect? That's probably going to happen in every situation where a rich person is passionate (and hands-on involved) about a cause

If that will happen whenever a rich person is passionate about a cause, then opting to work with rich people can cause more harm than good. Opting out certainly doesn't have to be "wasteful".

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2021-12-29T09:32:24.479Z · EA(p) · GW(p)

My initial thinking was that "idiosyncrasies" can sometimes be neutral or even incidentally good. 

But I think you're right that this isn't the norm and it can quickly happen that it makes things worse when someone only has a lot of influence because they have money, rather than having influence because they are valued by their peers for being unusually thoughtful.

(FWIW, I think the richest individuals within EA often defer to the judgment of EA researchers, as opposed to setting priorities directly themselves?) 

Replies from: Guy Raveh
comment by Guy Raveh · 2021-12-29T10:21:41.150Z · EA(p) · GW(p)

FWIW, I think the richest individuals within EA often defer to the judgment of EA researchers, as opposed to setting priorities directly themselves

I'm not saying I know anything to the contrary - but I'd like to point out that we have no way of knowing. This is a major disadvantage of philanthropy: where governments are required to be transparent about their fund allocations, individual donors are given privacy and undisclosed control over who receives their donations and what the recipient organisations are allowed to use them for.

comment by anonymousEA · 2021-12-28T21:01:10.912Z · EA(p) · GW(p)

My apologies, specific evidence was not presented with respect to...

Replies from: anonymousEA
comment by anonymousEA · 2021-12-28T21:23:35.854Z · EA(p) · GW(p)

Again, I'm really not sure where these downvotes are coming from. I'm engaging with criticism and presenting what information I can present as clearly as possible.

Replies from: Charles He
comment by Charles He · 2021-12-28T21:55:43.565Z · EA(p) · GW(p)

<Comment deleted>

Replies from: willbradshaw, aarongertler, Pablo_Stafforini, anonymousEA
comment by Will Bradshaw (willbradshaw) · 2021-12-29T08:11:33.694Z · EA(p) · GW(p)

I disagree with much of the original comment, but I'm baffled that you think this is appropriate content for the EA Forum. I strong-downvoted and reported this comment.

comment by Aaron Gertler (aarongertler) · 2022-01-02T20:51:50.850Z · EA(p) · GW(p)

While this comment was deleted, the moderators discussed it in its original form (which included multiple serious insults to another user) and decided to issue a two-week ban to Charles, starting today. We don't tolerate personal insults on the Forum.

comment by Pablo (Pablo_Stafforini) · 2021-12-29T22:22:21.719Z · EA(p) · GW(p)

Hi Charles. Please consider revising or retracting this comment; unlike your other comments in this thread, it's unkind and not adding to the conversation.

Replies from: Charles He
comment by Charles He · 2021-12-29T22:25:12.317Z · EA(p) · GW(p)

Per your personal request, I have deleted my comment.

comment by anonymousEA · 2021-12-28T23:08:32.588Z · EA(p) · GW(p)

...um

comment by Guy Raveh · 2021-12-29T08:21:59.619Z · EA(p) · GW(p)

Personally I more or less agreed with you and I don't think you were as insensitive as people suggested. I work in machine learning yet I feel shining a light on the biases and the oversized control of people in the tech industry is warranted and important.

comment by quinn · 2021-12-28T20:48:12.433Z · EA(p) · GW(p)

the word "techbros" signals you have a kind of information diet and worldview that I think people have bad priors about.

Replies from: John_Maxwell_IV, anonymousEA
comment by John_Maxwell (John_Maxwell_IV) · 2021-12-29T15:20:45.220Z · EA(p) · GW(p)

IMO we should seek out and listen to the most persuasive advocates for a lot of different worldviews. It doesn't seem epistemically justified to penalize a worldview because it gets a lot of obtuse advocacy.

comment by anonymousEA · 2021-12-28T21:17:03.640Z · EA(p) · GW(p)

If people downvote comments on the basis of perceived ingroup affiliation rather than content then I think that might make OP's point for them...

Replies from: Davidmanheim, willbradshaw, Rubi
comment by Davidmanheim · 2021-12-28T21:49:36.103Z · EA(p) · GW(p)

I think that the dismissive and insulting language is at best unhelpful - and signaling your affiliations by being insulting to people you see as the outgroup seems like a bad strategy for engaging in conversation.

Replies from: anonymousEA
comment by anonymousEA · 2021-12-28T21:54:03.597Z · EA(p) · GW(p)

I apologise; I don't think of it that way, and was simply using it as shorthand.

comment by Will Bradshaw (willbradshaw) · 2021-12-29T08:09:39.914Z · EA(p) · GW(p)

The "content" here is that you refer to the funders you dislike with slurs like "techbro". It's reasonable to update negatively in response to that evidence.

Replies from: anonymousEA
comment by anonymousEA · 2021-12-29T11:44:12.881Z · EA(p) · GW(p)

I'm sorry but can you please explain how "techbro" is a slur?

Replies from: willbradshaw, Khorton
comment by Will Bradshaw (willbradshaw) · 2021-12-29T19:51:19.391Z · EA(p) · GW(p)

It's straightforwardly a slur – to quote Google's dictionary, it is "a derogatory or insulting term applied to a particular group of people".

It's not a term anyone would ever use to neutrally describe a group of people, or a term anyone would use to describe themselves (I have yet to see anyone "reclaim" "techbro"). Its primary conversational value is as an insult.

comment by Kirsten (Khorton) · 2021-12-29T12:29:51.108Z · EA(p) · GW(p)

I'm also surprised by how strongly people feel about this term! I've always thought "techbro" was a mildly insulting caricature of a certain type of Silicon Valley guy.

Replies from: aarongertler
comment by Aaron Gertler (aarongertler) · 2021-12-30T09:13:00.893Z · EA(p) · GW(p)

Even if it's only a "mildly insulting caricature", it's still a way to claim that certain people are unintelligent or unserious without actually presenting an argument.

Compare:

  • "A small handful of incredibly wealthy techbros"
  • "A small handful of incredibly wealthy people with similar backgrounds in technology, which could lead to biases X and Y"

The first of these feels like it's trying to do the same thing as the second, without actually backing up its claim. 

When I read the second, I feel like someone is trying to make me think. When I read the first, I feel like someone is trying to make me stop thinking.

comment by Rubi · 2021-12-29T02:54:26.483Z · EA(p) · GW(p)

Priors should matter! For example, early rationalists were (rightfully) criticized for being too open to arguments from white nationalists,  believing they should only look at the argument itself rather than the source. It isn't good epistemics to ignore the source of an argument and their potential biases (though it isn't good epistemics to dismiss them out of hand either based on that, of course).

Replies from: anonymousEA
comment by anonymousEA · 2021-12-29T11:48:47.324Z · EA(p) · GW(p)

I don't see a dichotomy between "ignoring the source of an argument and their potential biases" and downvoting a multi-paragraph comment on the grounds that it used less-than-charitable language about Silicon Valley billionaires.

Based on your final line I'm not sure we disagree?

comment by quinn · 2021-12-28T20:52:58.133Z · EA(p) · GW(p)

I think it's plausible that it's hard to notice this issue if your personal aesthetic preferences happen to align with the TUA. I tried to write a little here [EA(p) · GW(p)] questioning how important aesthetic preferences may be. I think it's plausible that people can unite around negative goals even when positive goals would divide them, for instance, but I'm not convinced.

comment by Michael_Wiebe · 2022-01-24T22:05:47.509Z · EA(p) · GW(p)

>the idea of [...] the NTI framework [has] been wholesale adopted despite almost no underpinning peer-review research.

I argue [EA · GW] that the importance-tractability-crowdedness framework is equivalent to maximizing utility subject to a budget constraint.
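
For readers who haven't clicked through: the equivalence alluded to is usually shown with the chain rule. A sketch of the standard decomposition (my gloss, not taken from either linked piece) is:

```latex
% U = utility, S = share of the problem solved, R = resources allocated so far.
% The marginal utility of an extra unit of resources factors as:
\[
\frac{dU}{dR}
= \underbrace{\frac{dU}{dS}}_{\text{importance}}
\cdot \underbrace{\frac{dS}{d\ln R}}_{\text{tractability}}
\cdot \underbrace{\frac{d\ln R}{dR}}_{\text{crowdedness} \,=\, 1/R}
\]
% Maximising U subject to a budget then amounts to allocating funds so that
% dU/dR is equalised across causes.
```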

comment by John G. Halstead (Halstead) · 2021-12-29T16:30:03.006Z · EA(p) · GW(p)

Re the undue influence of TUA on policy, you say 

"An obvious retort here would be that these are scholars, not decision-makers, that any claim of elitism is less relevant if it refers to simple intellectual exploration. This is not the case. Scholars of existential risk, especially those related to the TUA, are rapidly and intentionally growing in influence. To name only one example noted earlier, scholars in the field have already had “existential risks” referenced in a vision-setting report of the UN Secretary General. Toby Ord has been referenced, alongside existential risks, by UK Prime Minister Boris Johnson. Dedicated think-tanks such as the Centre for Long-Term Resilience have been channelling policy advice from prominent existential risk scholars into the UK government"

The impression I get from this passage is that it is illegitimate or wrong for TUA proponents to have influence, eg via the Centre for Long-Term Resilience. Luke Kemp has also provided extensive advice and recommendations to this think tank (as have I). Is the thought that it is legitimate for Luke Kemp to try to have such policy leverage, but not legitimate for proponents of TUA to do so? If so, why?

Replies from: Khorton, Davidmanheim
comment by Kirsten (Khorton) · 2021-12-29T16:42:30.655Z · EA(p) · GW(p)

"Toby Ord shouldn't seek to influence policy" is not the message I get from that paragraph, fwiw.

It comes across to me as "Toby Ord and other techno-optimists already have policy influence [and so it's especially important for people who care about the long-term future to fund researchers from other viewpoints as well]."

I'm obviously not the authors; maybe they did mean to say that you and Toby Ord should stop trying to influence policy. But that wasn't my first impression.

Replies from: Halstead
comment by John G. Halstead (Halstead) · 2021-12-29T19:47:57.600Z · EA(p) · GW(p)

That wasn't how I interpreted it but perhaps I am an outlier given the voting on this comment. 

comment by Davidmanheim · 2021-12-29T18:56:42.674Z · EA(p) · GW(p)

I thought it was clear, in context, that the point made was that a minority shouldn't be in charge, especially when ignoring other views. (You've ignored my discussion of this in the past [EA(p) · GW(p)], but I take it you disagree.)

That doesn't mean they shouldn't say anything, just that we should strive for more representative views to be presented alongside theirs - something that Toby and Luke seem to agree with, given what they have suggested in the CTLR report, in this paper, and elsewhere.

comment by MichaelA · 2021-12-29T13:47:42.076Z · EA(p) · GW(p)

In case anyone else was wondering: "TUA" stands for "techno-utopian approach", and is described in the abstract of the paper as "the most influential framework" for existential risk studies.

(Just adding that info since I didn't see it explained in the post or comments. I haven't read the paper or all the comments here and so won't comment in any detail, though fwiw I disagree with some parts of this post.)