Posts

How many lives has the U.S. President's Emergency Plan for AIDS Relief (PEPFAR) saved? 2022-01-18T12:57:32.237Z
Mogensen & MacAskill, 'The paralysis argument' 2021-07-19T14:04:15.801Z
[Future Perfect] How to be a good ancestor 2021-07-02T13:17:15.686Z
Anki deck for "Some key numbers that (almost) every EA should know" 2021-06-29T22:13:09.233Z
Christian Tarsney on future bias and a possible solution to moral fanaticism 2021-05-06T10:39:38.949Z
Spears & Budolfson, 'Repugnant conclusions' 2021-04-04T16:09:30.922Z
AGI Predictions 2020-11-21T12:02:35.158Z
Carl Shulman — Envisioning a world immune to global catastrophic biological risks 2020-10-15T13:19:29.806Z
UK to host human challenge trials for Covid-19 vaccines 2020-09-23T14:45:01.278Z
Yew-Kwang Ng, 'Effective Altruism Despite the Second-best Challenge' 2020-05-09T16:47:36.346Z
Phil Trammell: The case for ignoring the world’s current problems — or how becoming a ‘patient philanthropist’ could allow you to do far more good 2020-03-17T17:00:14.108Z
Good Done Right conference 2020-02-04T13:21:02.903Z
Cotton‐Barratt, Daniel & Sandberg, 'Defence in Depth Against Human Extinction' 2020-01-28T19:24:48.033Z
Announcing the Bentham Prize 2020-01-21T22:23:16.860Z
Pablo_Stafforini's Shortform 2020-01-09T15:10:48.053Z
Dylan Matthews: The case for caring about the year 3000 2019-12-18T01:07:49.958Z
Are comment "disclaimers" necessary? 2019-11-23T22:47:01.414Z
Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term' 2019-11-05T20:24:00.445Z
A wealth tax could have unpredictable effects on politics and philanthropy 2019-10-31T13:05:28.421Z
Schubert, Caviola & Faber, 'The Psychology of Existential Risk' 2019-10-22T12:41:53.542Z
How this year’s winners of the Nobel Prize in Economics influenced GiveWell’s work 2019-10-19T02:56:46.480Z
A bunch of new GPI papers 2019-09-25T13:32:37.768Z
Andreas Mogensen's "Maximal Cluelessness" 2019-09-25T11:18:35.651Z
'Crucial Considerations and Wise Philanthropy', by Nick Bostrom 2017-03-17T06:48:47.986Z
Effective Altruism Blogs 2014-11-28T17:26:05.861Z
The Economist on "extreme altruism" 2014-09-18T19:53:52.287Z
Effective altruism quotes 2014-09-17T06:47:27.140Z

Comments

Comment by Pablo (Pablo_Stafforini) on How many lives has the U.S. President's Emergency Plan for AIDS Relief (PEPFAR) saved? · 2022-01-18T13:45:06.450Z · EA · GW

Thanks! Coincidentally, I also found Dylan's article (as well as another study from 2015) and added an answer based on it, before seeing yours.

EDIT: Oh, I now see that you were linking to an earlier piece by Dylan from mid-2015, also published in Vox. The article in my answer is from late 2018.

Comment by Pablo (Pablo_Stafforini) on How many lives has the U.S. President's Emergency Plan for AIDS Relief (PEPFAR) saved? · 2022-01-18T13:43:41.091Z · EA · GW

Since posting the question, I found this study estimating the impact of PEPFAR during its first decade. It concludes that the program resulted in 11,560,114 life-years gained (p. 3). Rough linear extrapolation from the chart on p. 5 (though note that growth was superlinear in the reference period) would suggest that an additional 25 million or so life-years were gained between 2014 and 2021, vindicating the "tens of millions of life-years" Open Phil estimate.
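For transparency, here is a rough back-of-envelope sketch of that arithmetic (in Python, though a calculator works just as well). It uses only the totals quoted above; the per-year rates are derived from those totals rather than read off the chart, so they are purely illustrative.

```python
# Rough check using only the totals quoted above, not the chart itself.
first_decade_life_years = 11_560_114   # study's estimate for PEPFAR's first decade (p. 3)
first_decade_years = 10

extrapolated_life_years = 25_000_000   # the rough "25 million or so" figure for 2014-2021
extrapolation_years = 8                # 2014 through 2021

avg_rate_first_decade = first_decade_life_years / first_decade_years     # ~1.2M life-years/year
implied_rate_2014_2021 = extrapolated_life_years / extrapolation_years   # ~3.1M life-years/year

total_life_years = first_decade_life_years + extrapolated_life_years     # ~37M life-years
print(round(avg_rate_first_decade), round(implied_rate_2014_2021), total_life_years)
```

The implied later-period rate being roughly three times the first-decade average is consistent with the superlinear growth noted above, and the combined total lands comfortably within the "tens of millions of life-years" range.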

Dylan Matthews points to another study, which finds 1.2 million deaths averted by PEPFAR by 2007. He notes that naive extrapolation from this finding would suggest 6 million deaths averted by the end of 2018 (when the Vox article was written), and that the true figure is probably higher, both because the study focuses on a limited number of partner countries and because PEPFAR's funding grew significantly over time. In drawing this inference, Matthews links to a webpage on PEPFAR's website. The link now redirects to PEPFAR's homepage, but a Wayback Machine search reveals the contents of the original URL: a press release announcing that "Latest Results Show PEPFAR Has Saved Over 17 Million Lives". However, once again no source is cited for the estimate. The press release does mention a report, but the archived report I found offered no explanation for its estimates of lives saved.

Comment by Pablo_Stafforini on [deleted post] 2022-01-17T13:53:22.686Z

Should we split this entry and have separate articles for each of the two organizations?

Comment by Pablo (Pablo_Stafforini) on [linkpost] Peter Singer: The Hinge of History · 2022-01-16T13:32:52.052Z · EA · GW

I agree with your assessment. It is interesting to note that Singer's comments are in response to Holden, who used to hold a similar view but no longer does (I believe).

The other part I found surprising was Singer's comparison of longtermism with past harmful ideologies. At least in principle, I do think that, when evaluating moral views, we should take into consideration not only the contents of those views but also the consequences of publicizing them. But:

  1. These two types of evaluation should be clearly distinguished and done separately, both for conceptual clarity and because they may require different responses. If the problem with a view is not that it is false but that it is dangerous, the appropriate response is probably not to reject the view, but instead to be strategic about how one discusses it publicly (e.g. give preference to less public contexts, frame the discussion in ways that reduce the view's dangers, etc.).
  2. As Richard Chappell pointed out recently, if one is going to consider the consequences of publicizing a view when evaluating it, one should also consider the consequences of publicizing objections to that view. And it seems like objections of the form "we should reject X because publicizing X will have bad consequences" have often had bad consequences historically.
  3. The moral evaluation of the consequences expected to result from public discussion of a view should not beg the question against the view under consideration! Longtermists believe that people in the future, no matter how removed from us, are moral patients whom we should help. So in evaluating longtermism, one cannot ignore that, from a longtermist perspective, publicly demonizing this view—by comparing it to the Third Reich, Soviet communism, or white supremacy—will likely have very bad consequences (e.g. by making society less willing to help far-future people). (Note that this is very different from the usual arguments for utilitarianism being self-effacing: those arguments purport to establish that publicizing utilitarianism has bad consequences, as evaluated by utilitarianism itself. Here, by contrast, a non-longtermist moral standard is assumed when evaluating the consequences of publicizing longtermism.)
  4. Picking reference classes is tricky. Perhaps it's plausible to put longtermism in the reference class of "utopian ideology with considerable abuse potential". But it also seems plausible to put longtermism in the reference class of "enlightened worldview that seeks to expand the circle of moral concern" (cf. Holden's "Radical empathy"). In considering the consequences of publicizing longtermism, it seems objectionable to highlight one reference class, which suggests bad consequences, and ignore the other reference class, which suggests good consequences.

Comment by Pablo (Pablo_Stafforini) on What questions relevant to EA could be answered by surveying the public? · 2022-01-14T23:46:59.442Z · EA · GW

I think that moral uncertainty and non-moral epistemic uncertainty (if you'll allow the distinction) both suggest we should assign some weight to what people say is valuable.

Only ~7% of all people who ever lived are currently alive. What's the justification for focusing on humans living in 2022? Is it just that figuring out the values of past generations is less tractable?

Comment by Pablo (Pablo_Stafforini) on Why I'm concerned about Giving Green · 2022-01-11T02:28:39.587Z · EA · GW

Giving Green no longer recommends TSM, although the reasons prompting the withdrawal of the recommendation appear to be unrelated to the incidents described above:

we have concerns about Sunrise’s need for additional funding and its lack of clear strategy beyond 2021. Sunrise’s budget grew explosively from just $50,000 in 2017 to $15 million in 2020 and 2021. This kind of rapid growth can strain any organization, and it appears that Sunrise is no different, as 2021 was a year of internal friction in the Movement. Also aside from some advocacy work on climate legislation this fall, we did not see Sunrise engaging in the kinds of mass organizing and mobilizing activities that we anticipated from them. Further, we have yet to see Sunrise’s strategy going forward, so it is unclear how Sunrise plans to adapt, grow, and absorb additional funding in the future.

In sum, Sunrise has helped propel climate to the forefront of American politics, but its future is unclear. Based on Sunrise’s prior record of success and our model of cost-effectiveness, we are optimistic that they have the potential to drive political changes that lead to more ambitious US federal legislation on climate. However, we are concerned by their rapid growth, internal discord, and lack of clear strategy for the future. While we are hopeful that Sunrise will address these challenges through its current strategy discussion and move forward stronger, we will not know the outcome of this process until at least Q1 of 2022. Because we are unsure of the Sunrise Movement’s future plans, we have decided not to recommend the Sunrise Movement Education Fund as a top charity for the 2021 Giving Season. When we can better assess its recent impact and its future strategy, we look forward to reviewing the Sunrise Movement Education Fund again.

Comment by Pablo_Stafforini on [deleted post] 2022-01-10T22:14:01.295Z

I like having a single "external praise" tag rather than three "praise" tags corresponding to the three "criticism" tags, for the reasons you note.

Comment by Pablo (Pablo_Stafforini) on Rowing and Steering the Effective Altruism Movement · 2022-01-09T20:48:43.134Z · EA · GW

This was probably the intended link.

Comment by Pablo (Pablo_Stafforini) on EA Forum feature suggestion thread · 2022-01-08T23:35:36.900Z · EA · GW

and allows for completed changes to be hidden

Having an option to "resolve" a comment thread (analogous to "closing" a GitHub issue) would be very useful, especially for Wiki comments.

Comment by Pablo_Stafforini on [deleted post] 2022-01-08T22:30:43.964Z

Just saw this—added.

Comment by Pablo_Stafforini on [deleted post] 2022-01-08T15:51:26.562Z

I see.

My overall sense is that the scope of the entry is insufficiently crisp, and that it's probably better to discuss these topics under other entries. For instance, there has been considerable discussion about the degree to which charities or causes differ in cost-effectiveness. I would therefore say that the question of whether the EA community should try to persuade people to switch causes, rather than to become more effective within the causes they already support, should be addressed as part of that discussion, in the cost-effectiveness entry. Some of this could also be discussed under effective altruism messaging. Another relevant existing entry is local priorities research.

Comment by Pablo (Pablo_Stafforini) on EA Forum feature suggestion thread · 2022-01-08T15:28:49.052Z · EA · GW

Also seconded.

In the meantime, you can get a pseudo dark mode with the Dark Reader extension.

Comment by Pablo (Pablo_Stafforini) on Civilization Re-Emerging After a Catastrophic Collapse · 2022-01-08T14:39:28.714Z · EA · GW

It would be great if this talk were transcribed.

Comment by Pablo_Stafforini on [deleted post] 2022-01-07T13:01:25.640Z

Re-reading this exchange, I'd like to add that it may be worth discussing those externalities in other Wiki articles, such as dietary change.

Comment by Pablo (Pablo_Stafforini) on [Linkpost] Eric Schwitzgebel: Against Longtermism · 2022-01-07T00:12:25.714Z · EA · GW

Your reply to Eric's fourth objection makes an important point that I haven't seen mentioned before:

By contrast, I think there's a much more credible risk that defenders of conventional morality may use dismissive rhetoric about "grandiose fantasies" (etc.) to discourage other conventional thinkers from taking longtermism and existential risks as seriously as they ought, on the merits, to take them.  (I don't accuse Schwitzgebel, in particular, of this.  He grants that most people unduly neglect the importance of existential risk reduction.  But I do find that this kind of rhetoric is troublingly common amongst critics of longtermism, and I don't think it's warranted or helpful in any way.)

A view, of course, can be true even if defending it in public is expected to have bad consequences. But if we are going to consider the consequences of publicly defending a view in our evaluation of it, it seems we should also consider the consequences of publicly objecting to that view when evaluating those objections. 

Comment by Pablo (Pablo_Stafforini) on Ben Garfinkel: How sure are we about this AI stuff? · 2022-01-06T12:50:50.480Z · EA · GW

I think this talk and Ben's subsequent comments on the 80k podcast serve as a good illustration of the importance of being clear, precise, and explicit when evaluating causes, especially those often supported by relatively vague analogies or arguments with unstated premises. I don't recall how my views about the seriousness of AI safety as a cause area changed in response to watching this, but I do remember feeling that I had a better understanding of the relevant considerations and that I was in a better position to make an informed assessment.

Comment by Pablo (Pablo_Stafforini) on Reducing long-term risks from malevolent actors · 2022-01-06T12:35:24.087Z · EA · GW

I'm surprised to see that this post hasn't yet been reviewed. In my opinion, it embodies many of the attributes I like to see in EA reports, including reasoning transparency, intellectual rigor, good scholarship, and focus on an important and neglected topic.

Comment by Pablo (Pablo_Stafforini) on Have you considered switching countries to save money? · 2022-01-06T12:11:53.043Z · EA · GW

I know Uruguay well (my father used to own a house near Colonia del Sacramento) and I would agree with your assessment. Montevideo is about two hours away from Buenos Aires by ferry, so an additional advantage is relative proximity to a major metropolis. 

Comment by Pablo (Pablo_Stafforini) on Have you considered switching countries to save money? · 2022-01-06T12:08:35.283Z · EA · GW

Many years ago, when personal finance considerations weighed more heavily on EA decisions, there was an attempt to establish a "new EA hub" in a country with a low cost of living and certain other desirable characteristics. Maybe the associated Facebook group still exists. I recall there were also spreadsheets, Trello boards, etc. with detailed comparisons of the different options. Posting this here in case others have links to some of that material.

Comment by Pablo_Stafforini on [deleted post] 2022-01-05T22:35:16.172Z

Perhaps we could have an entry on the demandingness of EA, though. That is, insofar as one thinks there is a requirement to engage in effective altruism, how stringent is this requirement?

What do you think of this proposal?

Comment by Pablo_Stafforini on [deleted post] 2022-01-05T22:29:21.836Z

Thanks for creating this entry. I'm not sure having a dedicated article on 'more good vs. most good' is justified, however. EA places great emphasis on doing the most good with a given unit of resources, but is not typically understood to require people to allocate their resources so as to do the most good. This debate seems rather to be one in moral philosophy, covered under demandingness of morality. Insofar as it comes up in EA discussion, it seems to be discussed mostly in the context of excited vs. obligatory altruism.

Comment by Pablo (Pablo_Stafforini) on [Feature Announcement] Rich Text Editor Footnotes · 2022-01-05T01:46:56.774Z · EA · GW

This is fantastic! I'm excited that the Wiki will gradually replace the static and distracting inline citations with dynamic and hoverable footnotes.

One thing I noticed is that, when there are multiple footnote references pointing to the same footnote, the link that takes you back to the footnote reference always points to its first occurrence. Although this is not very important, I think that, ideally, there should be separate links for each footnote reference.

For illustration, compare the Wiki entry on Cari Tuna (which I just edited so that it uses footnotes rather than inline citations) with the corresponding Wikipedia entry. Both entries cite the article by Ariana Eunjung Cha multiple times, but only the footnote in the Wikipedia entry has separate links to each of the associated footnote references. (Incidentally, I think that the Wikipedia functionality could be improved by displaying a tooltip with the sentence immediately preceding the associated footnote reference when the user hovers over the link to that reference. This would make it easier to identify the relevant link and resume reading from the correct location.)

Comment by Pablo (Pablo_Stafforini) on Do you use the EA Wiki? · 2022-01-04T18:12:56.518Z · EA · GW

Yes, agreed. One model would be for tags to have content that corresponds to a glossary rather than an encyclopedia. In this model, each tag would be associated with a concise definition or description of the tag, spanning 1–3 sentences, and perhaps also a short list of references for further reading. Then there would also be SEP-style encyclopedia articles corresponding to some of these tags, but it's unclear how the two should be integrated, or whether they should be integrated at all.

Comment by Pablo (Pablo_Stafforini) on Do you use the EA Wiki? · 2022-01-04T16:03:28.036Z · EA · GW

Not an answer to your question, but it may address some of your underlying questions or concerns:

EA Funds recently extended funding for my work on the EA Wiki, but the current plan is to focus more on experimentation and less on content creation. Currently, I'm exploring the possibility of launching a more ambitious encyclopedia of effective altruism, following roughly the model of the Stanford Encyclopedia of Philosophy, with authoritative, comprehensive, and up-to-date articles on core EA concepts and topics commissioned from experts in both academia and the broader EA community. It is unclear, at this stage, whether the Wiki should be kept alongside this other reference work, or whether it should be integrated with it in some form. I would greatly appreciate critical or constructive feedback on this idea, either as a comment to this answer or sent to me privately at firstname@lastname.com (or anonymously).

Comment by Pablo (Pablo_Stafforini) on Nathan Young's Shortform · 2022-01-04T14:03:35.827Z · EA · GW

I do think that many of the entries are rather superficial, because so far we've been prioritizing breadth over depth. You are welcome to try to make some of these entries more substantive. I can't tell, in the abstract, whether I agree with your approach to resolving the tradeoff between having more content and having a greater fraction of content reflect just someone's opinion. Maybe you can try editing a few articles and see whether they attract any feedback, via comments or karma?

Comment by Pablo (Pablo_Stafforini) on Democratising Risk - or how EA deals with critics · 2022-01-01T13:08:35.238Z · EA · GW

Thanks for the comments. They have helped me clarify my thoughts, though I feel I'm still somewhat confused.

However, I'll note that establishing a rule like "we won't look at claims seriously if the person making them has a personal vendetta against us" could lead to people trying to argue against examining someone's claims by arguing that they have a personal vendetta, which gets weird and messy. ("This person told me they were sad after org X rejected their job application, so I'm not going to take their argument against org X's work very seriously.")

Yes, I agree that this is a concern. I am reminded of an observation by Nick Bostrom:

consider the convention against the use of ad hominem arguments in science and many other arenas of disciplined discussion. The nominal justification for this rule is that the validity of a scientific claim is independent of the personal attributes of the person or the group who puts it forward. Construed as a narrow point about logic, this comment about ad hominem arguments is obviously correct. But it overlooks the epistemic significance of heuristics that rely on information about how something was said and by whom in order to evaluate the credibility of a statement. In reality, no scientist adopts or rejects scientific assertions solely on the basis of an independent examination of the primary evidence. Cumulative scientific progress is possible only because scientists take on trust statements made by other scientists—statements encountered in textbooks, journal articles, and informal conversations around the coffee machine. In deciding whether to trust such statements, an assessment has to be made of the reliability of the source. Clues about source reliability come in many forms—including information about factors, such as funding sources, peer esteem, academic affiliation, career incentives, and personal attributes, such as honesty, expertise, cognitive ability, and possible ideological biases. Taking that kind of information into account when evaluating the plausibility of a scientific hypothesis need involve no error of logic.

Why is it, then, that restrictions on the use of the ad hominem command such wide support? Why should arguments that highlight potentially relevant information be singled out for suspicion? I would suggest that this is because experience has demonstrated the potential for abuse. For reasons that may have to do with human psychology, discourses that tolerate the unrestricted use of ad hominem arguments manifest an enhanced tendency to degenerate into personal feuds in which the spirit of collaborative, reasoned inquiry is quickly extinguished. Ad hominem arguments bring out our inner Neanderthal.

So I recognize both that it is sometimes legitimate (and even required) to refuse to engage with arguments based on how they originated, and that a norm that licenses this behavior has significant abuse potential. I haven't thought about ways in which the norm could be refined, or about heuristics one could adopt to decide when to apply it. I'd like to see someone (Greg Lewis?) investigate this issue more.

As for filtered evidence — definitely a concern if you're trying to weigh the totality of evidence for or against something. But not necessarily relevant if there's one specific piece of evidence that would be damning if true.

I mostly agree. My sense is that we often misclassify as "specific piece[s] of evidence that would be damning if true" things that should be assessed as part of a much larger whole. E.g. it is sometimes relevant to consider the sheer number of things someone has said when deciding how outraged to be that this person said something seemingly outrageous.

Comment by Pablo (Pablo_Stafforini) on An Issue with the Repugnant Conclusion · 2021-12-31T19:45:25.205Z · EA · GW

The crux I think lies in, "is not meant to be sensitive to how resources are allocated or how resources convert to wellbeing." I guess the point established here is that it is, in fact, sensitive to these parameters.

In particular if one takes this 'total utility' approach of adding up everyone's individual utility we have to ask what each individual's utility is a function of.

Yes, that is a question that needs to be answered, but population ethics is not an attempt to answer it. This subdiscipline treats distributions of wellbeing across individuals in different hypothetical worlds as a given input, and seeks to find a function that outputs a plausible ranking of those worlds. The Repugnant Conclusion arises because some of those functions produce rankings that seem intuitively very implausible.

The claim is not that the worlds so ranked are likely to arise in practice. Rather, the claim is that, intuitively, a theory should never generate those rankings. This is how philosophers generally assess moral theories: they construct thought experiments intended to elicit an intuitive response, and they contrast this response with the implications of the theory. For example, in one of the trolley thought experiments, it seems intuitively wrong (to most humans, at least) to push a big man to stop the trolley, but this is what utilitarianism apparently says we should do (kill one to save five). It is of no consequence, in this context, that we will almost certainly never face such a dilemma in practice.

Comment by Pablo (Pablo_Stafforini) on An Issue with the Repugnant Conclusion · 2021-12-31T15:50:58.418Z · EA · GW

Thanks for the clarification. My intention was not to dismiss your proposal, but to understand it better.

After reading your comment and re-reading your post, I understand you to be claiming that the Repugnant Conclusion follows only if the mapping of resources to wellbeing takes a particular form, which can't be taken for granted. I agree that this is substantively different from the proposals in that section of the SEP article, so the difference is not verbal, contrary to how it initially seemed to me.

However, I don't think this works as a reply to the Repugnant Conclusion, which is a thought experiment intended to test our moral intuitions about how the wellbeing of different people should be aggregated to determine the value of worlds, and is not meant to be sensitive to how resources are allocated or how resources convert to wellbeing. That is, the Repugnant Conclusion stipulates that individual wellbeing is very high in the low population world and slightly above neutrality in the high population world, and combinations of resources and utility functions incompatible with those wellbeing levels are ruled out by stipulation.

Apologies if this is not a correct interpretation of your proposal.

Comment by Pablo (Pablo_Stafforini) on Where are you donating in 2021, and why? · 2021-12-31T13:58:29.415Z · EA · GW

An alternative is to donate your time rather than your money, and use it to do the kind of work you would have funded, had this been an option. With most interventions, this isn't possible or realistic, but Wikipedia is the Free Encyclopedia that Anyone Can Edit.

Comment by Pablo (Pablo_Stafforini) on An Issue with the Repugnant Conclusion · 2021-12-31T13:29:20.421Z · EA · GW

This criticism is most similar to that of the ‘Variable value principles’ of the Plato article. The difference here is that we are not trying to find a ‘modification’ of total utilitarianism. Instead we argue that the Conclusion doesn’t follow from the premises in the general case, even if we are total utilitarians.

Superficially, the difference seems merely verbal: what they call a modification of total utilitarianism, you call a version of total utilitarianism. Is there anything substantive at stake?

Comment by Pablo (Pablo_Stafforini) on Democratising Risk - or how EA deals with critics · 2021-12-31T00:52:48.273Z · EA · GW

I agree that there is a relevant difference, and I appreciate your pointing it out. However, I also think that knowledge of the origins of a claim or an argument is sometimes relevant for deciding whether one should engage seriously with it, or engage with it at all, even if the person presenting it is not himself/herself acting in bad faith. For example, if I know that the oil or tobacco industries funded studies seeking to show that global warming is not anthropogenic or that smoking doesn't cause cancer, I think it's reasonable to be skeptical even if the claims or arguments contained in those studies are presented by a person unaffiliated with those industries. One reason is that the studies may consist of filtered evidence—that is, evidence selected to demonstrate a particular conclusion, rather than to find the truth. Another reason is that by treating arguments skeptically when they originate in a non-truth-seeking process, one disincentivizes that kind of intellectually dishonest and socially harmful behavior.

In the case at hand, I think what's going on is pretty clear. A person who became deeply hostile to longtermism (for reasons that look prima facie mostly unrelated to the intellectual merits of that view) diligently went through most of the longtermist literature fishing for claims that would, if presented in isolation to a popular audience using technically true but highly tendentious or misleading language and/or stripped of the relevant context, cause serious damage to the longtermist movement. In light of this, I think it is not only naive but epistemically unjustified to insist that this person's findings be assessed on their merits alone. (Again, consider what your attitude would be if the claims originated with, e.g., an industry lobbyist.)

In addition, I think that it's inappropriate to publicize this person's writings, by including them in a syllabus or by reproducing their cherry-picked quotes. In the case of Nick Beckstead's quote, in particular, its reproduction seems especially egregious, because it helps promote an image of him that is diametrically opposed to the truth: an early Giving What We Can member who pledged to donate 50% of his income to global poverty charities for the rest of his life is presented—from a single paragraph excerpted from a 180-page doctoral dissertation intended to be read primarily by an audience of professional analytic philosophers—as "support[ing] white supremacist ideology". Furthermore, even if Nick were just an ordinary guy, rather than someone with impeccable cosmopolitan credentials, I think it would be perfectly appropriate to write what he did in the context of a thesis advancing the argument that our moral judgments are less reliable than is generally assumed. More generally, and more importantly, I believe that as EAs we should be willing to question established beliefs related to the cost-effectiveness of any cause, even if this risks reaching very uncomfortable conclusions, as long as the questioning is done as part of a good-faith effort in cause prioritization and subject to the usual caveats related to possible reputational damage or the spreading of information hazards. It scares me to think what our movement might become if it became an accepted norm that explorations of the sort exemplified by the quote can only be carried out "through a postcolonial lens".

Note: Although I generally oppose disclaimers, I will add one here. I've known Nick Beckstead for a decade or so. We interacted a bit back when he was working at FHI, though after he moved to Open Phil in 2014 we had no further communication, other than exchanging greetings when he visited the CEA office around 2016 and corresponding briefly in a professional capacity. I am also an FTX Fellow, and as I learned recently, Nick has been appointed CEO of the FTX Foundation. However, I made this same criticism ten months ago, way before I developed any ties to FTX (or had any expectations that I would develop such ties or that Nick was being considered for a senior position). Here's what I wrote back then:

I personally do not think it is appropriate to include an essay in a syllabus or engage with it in a forum post when (1) this essay characterizes the views it argues against using terms like 'white supremacy' and in a way that suggests (without explicitly asserting it, to retain plausible deniability) that their proponents—including eminently sensible and reasonable people such as Nick Beckstead and others— are white supremacists, and when (2) its author has shown repeatedly in previous publications, social media posts and other behavior that he is not writing in good faith and that he is unwilling to engage in honest discussion.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential EA Wiki entries · 2021-12-30T13:56:21.848Z · EA · GW

Here's the entry. I was only able to read the transcript of Paul's talk and Rohin's summary of it, so feel free to add anything you think is missing.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential EA Wiki entries · 2021-12-30T11:46:12.831Z · EA · GW

Thanks, Michael. This is a good idea; I will create the entry.

(I just noticed you left other comments to which I didn't respond; I'll do so shortly.)

Comment by Pablo (Pablo_Stafforini) on Democratising Risk - or how EA deals with critics · 2021-12-30T01:48:16.574Z · EA · GW

This seems like a fruitful area of research—I would like to see more exploration of this topic. I don't think I have anything interesting to say off the top of my head.

Comment by Pablo (Pablo_Stafforini) on Democratising Risk - or how EA deals with critics · 2021-12-29T22:57:57.188Z · EA · GW

The longtermist could then argue that an analogous argument applies to "other-defence" of future generations. (In case there was any need to clarify: I am not making this argument, but I am also not making the argument that violence should be used to prevent nonhuman animals from being tortured.)

Separately, note that a similar objection also applies to many forms of non-totalist longtermism. On broad person-affecting views, for instance, the future likely contains an enormous number of future moral patients who will suffer greatly unless we do something about it. So these views could also be objected to on the grounds that they might lead people to cause serious harm in an attempt to prevent that suffering.

In general, I think it would be very helpful if critics of totalist longtermism made it clear what rival view in population ethics they themselves endorse (or what distribution of credences over rival views, if they are morally uncertain). The impression one gets from reading many of these critics is that they assume the problems they raise are unique to totalist longtermism, and that alternative views don't have different but comparably serious problems. But this assumption can't be taken for granted, given the known impossibility theorems and other results in population ethics. An argument is needed.

Comment by Pablo (Pablo_Stafforini) on [Linkpost] - Sam Harris and Sam Bankman-Fried - Earning to Give · 2021-12-29T22:34:41.172Z · EA · GW

Tyler Cowen announced that he will soon be interviewing SBF. You can suggest questions here.

Comment by Pablo (Pablo_Stafforini) on Democratising Risk - or how EA deals with critics · 2021-12-29T22:28:44.622Z · EA · GW

Just to clarify (since I now realize my comment was written in a way that may have suggested otherwise): I wasn't alluding to your attempt to steelman his criticism. I agree that at the time the evidence was much less clear, and that steelmanning probably made sense back then (though I don't recall the details well).

Comment by Pablo (Pablo_Stafforini) on Democratising Risk - or how EA deals with critics · 2021-12-29T22:22:21.719Z · EA · GW

Hi Charles. Please consider revising or retracting this comment; unlike your other comments in this thread, it's unkind and not adding to the conversation.

Comment by Pablo (Pablo_Stafforini) on Democratising Risk - or how EA deals with critics · 2021-12-29T21:41:27.312Z · EA · GW

I agree with this, and would add that the appropriate response to arguments made in bad faith is not to "steelman" them (or to add them to a syllabus, or to keep disseminating a cherry-picked quote from a doctoral dissertation), but to expose them for what they are or ignore them altogether. Intellectual dishonesty is the epistemic equivalent of defection in the cooperative enterprise of truth-seeking; to cooperate with defectors is not a sign of virtue, but quite the opposite.

Comment by Pablo (Pablo_Stafforini) on Democratising Risk - or how EA deals with critics · 2021-12-29T13:39:08.070Z · EA · GW

nor it is germane to this discussion

I do think it is germane to the discussion, because it helps to clarify what the authors are claiming and whether they are applying their claims consistently. 

Comment by Pablo (Pablo_Stafforini) on Democratising Risk - or how EA deals with critics · 2021-12-28T19:59:32.614Z · EA · GW

Technology causes problems? Just add more technology!

"it's more nuanced than that".

Comment by Pablo (Pablo_Stafforini) on Who are the most well known credible people who endorse EA? · 2021-12-27T20:55:34.073Z · EA · GW

I second Harrison D's suggestion to create a spreadsheet of endorsements, since such a list might be useful to a number of EAs and EA orgs, beyond the specific task of updating effectivealtruism.org.

Sources that may point you in the right direction:

Comment by Pablo (Pablo_Stafforini) on Effective Altruism: The First Decade (Forum Review) · 2021-12-27T13:29:02.227Z · EA · GW

Okay, that explains why I can't vote on the post by Carl I crossposted. But why can I (and everyone else, presumably) neither review nor vote on the three posts above?

Comment by Pablo (Pablo_Stafforini) on Effective Altruism: The First Decade (Forum Review) · 2021-12-27T01:40:31.293Z · EA · GW

Thanks for crossposting these. It seems that it's not possible to review or vote on some of those posts (specifically, these three posts). Is there an explanation for this? I also noticed I can't vote on this post by Carl Shulman, which I crossposted, though in that case I can write a review.

Comment by Pablo (Pablo_Stafforini) on Evidence, cluelessness, and the long term - Hilary Greaves · 2021-12-24T20:11:23.411Z · EA · GW

Thanks for the reply. Although this doesn't resolve our disagreement, it helps to clarify it.

Comment by Pablo (Pablo_Stafforini) on Response to Recent Criticisms of Longtermism · 2021-12-24T12:18:07.032Z · EA · GW

As I mentioned in a top-level comment on this post, I don't think this is actually true. He never claims so outright.

In one of the articles, he claims that longtermism can be "analys[ed]" (i.e. logically entails) "a moral view closely associated with what philosophers call 'total utilitarianism'." And in his reply to Avital, he writes that "an integral component" of the type of longtermism that he criticized in that article is "total impersonalist utilitarianism". So it looks like the only role the "closely" qualifier plays is to note that the type of total utilitarianism to which he believes longtermism is committed is impersonalist in nature. But the claim is false: longtermism is not committed to total impersonalist utilitarianism, even if one restricts the scope of "longtermism" to the view Torres criticizes in the article, which includes the form of longtermism embraced by MacAskill and Greaves. (I also note that in other writings he drops the qualifier altogether.)

Dr. David Mathers never actually claimed longtermism is committed total utilitarian, he only extended a critique of total utilitarianism to longtermism, which responds to one of the main arguments made for longtermism

I agree (and never claimed otherwise). 

Extending critiques of total utilitarianism to longtermism seems fair to me, even if they don't generalize to all longtermist views.

I'm not sure what exactly you mean by "extending".  If you mean something like, "many longtermist folk accept longtermism because they accept total utilitarianism, so raising objections to total utilitarianism in the context of discussions about longtermism can persuade these people to abandon longtermism", then I agree, but only insofar as those who raise the objections are clear that they are directly objecting to total utilitarianism. Otherwise, this is apt to create the false impression that the objections apply to longtermism as such. In my reply to David, I noted that longtermism is not committed to total utilitarianism precisely to correct for that potential misimpression.

Comment by Pablo (Pablo_Stafforini) on Comments for shorter Cold Takes pieces · 2021-12-24T00:24:54.031Z · EA · GW

Given this simple consideration that cases would have to drop off exceptionally fast at just the right time for Zvi's outcome to happen, I assign a 5% chance to Zvi's outcome happening.

Your analysis roughly matches my independent impression, but I'm pretty sure this simple consideration didn't escape Zvi's attention. So, it seems that you can't so easily jump from that analysis to the conclusion that Holden will win the bet, unless you didn't think much of Zvi as a reasoner to begin with or had a plausible error theory to explain this particular instance.

Comment by Pablo (Pablo_Stafforini) on Response to Recent Criticisms of Longtermism · 2021-12-23T23:53:27.809Z · EA · GW

[I made some edits to make my comment clearer.]

I think this is not a very good way to dismiss the objection, given the views actual longtermists hold and how longtermism looks in practice today (a point Torres makes).

I wouldn't characterise my observation that longtermism isn't committed to total utilitarianism as dismissing the objection. I was simply pointing out something that I thought was both true and important, especially in the context of a thread prompted by a series of articles in which the author assumes such a commitment. The remainder of my comment explained why the objection was weak even ignoring this consideration.

Here are two nontrivial ways in which you may end up accepting longtermism even if you reject the total view. First, if you are a "wide" person-affecting theorist, and you think it's possible to make a nonrandom difference to the welfare of future sentient beings, whom you expect to exist for a sufficiently long time regardless of your actions. (Note that this is true for suffering-focused views as well as for hedonistic views, which is another reason for being clear about the lack of a necessary connection between longtermism and total utilitarianism, since utilitarianism is hedonistic in its canonical form.) Second, if you subscribe to a theory of normative uncertainty on which the reasons provided by the total view end up dominating your all-things-considered normative requirements, even if you assign significant credence to views other than the total view.

Separately, the sociological fact (if it is a fact) that most people who defend longtermism are total utilitarians seems largely irrelevant for assessing the plausibility of longtermism: this depends on the strength of the arguments for that view.

This is fair, although Torres did also in fact engage with the literature a little, but only to support his criticism of longtermism and total utilitarianism, and he didn't engage with criticisms of other views, so it's not at all a fair representation of the debate.

Yeah, by "engage with the literature" I meant doing so in a way that does reasonable justice to it. A climate change skeptic does not "engage with the literature", in the relevant sense, by cherry-picking a few studies in climate science here and there.

I think his comment is directly related to the content of the articles and the OP here, which discuss total utilitarianism, and the critique he's raising is one of the main critiques in one of Torres' pieces. I think this is a good place for this kind of discussion, although a separate post might be good, too, to get into the weeds.

I suggested using a separate thread because I expect that any criticism of longtermism posted here would be met with a certain degree of unwarranted hostility, as it may be associated with the articles to which Avital was responding. Although I am myself a longtermist, I would like to see good criticisms of it, discussed in a calm, nonadversarial manner, and I think this is less likely to happen in this thread.

Comment by Pablo (Pablo_Stafforini) on Evidence, cluelessness, and the long term - Hilary Greaves · 2021-12-23T12:28:58.106Z · EA · GW

A few thoughts:

  • I'm open to the possibility that there are terms better than "cluelessness" to refer to the problem Hilary discusses in her talk. Perhaps we could continue this discussion elsewhere, such as on the 'talk' page of the cluelessness Wiki entry (note that the entry is currently just a stub)?
  • As noted, the term has been used in philosophy for quite some time. So if equivalent or related expressions exist in other disciplines, the question is, "Which of these terms should we settle on?" You make it seem, by contrast, as though using "cluelessness" requires a special justification relative to the other choices.
  • Since Hilary didn't introduce the term, either in philosophy or in EA, it seems inappropriate to evaluate her talk negatively on those grounds, even granting that it would have been desirable if a term other than "cluelessness" had become established.
  • Separately, I think Hilary's talk is a valuable contribution to the literature on this problem, so I don't think it warrants a negative evaluation. (But maybe you disagree and your views about the substance of the talk also influenced your assessment? In your follow-up comment, you say that the problem "has reasonable solutions", though I am personally not aware of any such solution.)

Comment by Pablo (Pablo_Stafforini) on Evidence, cluelessness, and the long term - Hilary Greaves · 2021-12-22T23:18:44.654Z · EA · GW

Unfortunately this author has had the bad luck that her new terminology stuck. And it stuck pretty hard.

The term "cluelessness" has been used in the philosophical literature for decades, to refer to the specific and well-defined problem faced by consequentialism and other moral theories which take future consequences into account. Greaves's talk is a contribution to that literature. She wasn't even the first to use the term in EA contexts; I believe Amanda Askell and probably other EAs were discussing cluelessness years before this talk.