Posts

Spears & Budolfson, 'Repugnant conclusions' 2021-04-04T16:09:30.922Z
AGI Predictions 2020-11-21T12:02:35.158Z
UK to host human challenge trials for Covid-19 vaccines 2020-09-23T14:45:01.278Z
Yew-Kwang Ng, 'Effective Altruism Despite the Second-best Challenge' 2020-05-09T16:47:36.346Z
Phil Trammell: The case for ignoring the world’s current problems — or how becoming a ‘patient philanthropist’ could allow you to do far more good 2020-03-17T17:00:14.108Z
Good Done Right conference 2020-02-04T13:21:02.903Z
Cotton‐Barratt, Daniel & Sandberg, 'Defence in Depth Against Human Extinction' 2020-01-28T19:24:48.033Z
Announcing the Bentham Prize 2020-01-21T22:23:16.860Z
Pablo_Stafforini's Shortform 2020-01-09T15:10:48.053Z
Dylan Matthews: The case for caring about the year 3000 2019-12-18T01:07:49.958Z
Are comment "disclaimers" necessary? 2019-11-23T22:47:01.414Z
Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term' 2019-11-05T20:24:00.445Z
A wealth tax could have unpredictable effects on politics and philanthropy 2019-10-31T13:05:28.421Z
Schubert, Caviola & Faber, 'The Psychology of Existential Risk' 2019-10-22T12:41:53.542Z
How this year’s winners of the Nobel Prize in Economics influenced GiveWell’s work 2019-10-19T02:56:46.480Z
A bunch of new GPI papers 2019-09-25T13:32:37.768Z
Andreas Mogensen's "Maximal Cluelessness" 2019-09-25T11:18:35.651Z
'Crucial Considerations and Wise Philanthropy', by Nick Bostrom 2017-03-17T06:48:47.986Z
Effective Altruism Blogs 2014-11-28T17:26:05.861Z
The Economist on "extreme altruism" 2014-09-18T19:53:52.287Z
Effective altruism quotes 2014-09-17T06:47:27.140Z

Comments

Comment by Pablo_Stafforini on [deleted post] 2021-05-05T21:14:20.905Z

Thanks for flagging this. I listed 'CRS' under 'related entries' in the 'invertebrate welfare' entry only because I had previously listed 'invertebrate welfare' under the corresponding section of the 'CRS' entry. The basis for the latter was that they write this on their 'about us' page:

The Center for Reducing Suffering (CRS) is a research center that works to create a future with less suffering, with a focus on reducing the most intense suffering.

We believe suffering matters equally regardless of who experiences it, which implies that we should consider the suffering of all sentient beings in our efforts. This includes wild animals, and possibly also invertebrates or even artificial beings.

I haven't studied their research closely. If they haven't done substantive research on invertebrate welfare, the link to that entry should be removed. Moreover, even if the link should be preserved, it's an open question whether the reciprocal link should exist; 'entry relatedness', as used here, doesn't seem to be a symmetric relation (we are using 'related' in the sense of sufficiently intuitively related, and it doesn't follow that, if x is intuitively related to y to some degree, y is intuitively related to x to that same degree; hence it is not the case that the sufficiency condition is satisfied in one case iff it's satisfied in the other). To complicate things, I haven't followed this process for most other orgs, so adding 'CRS' under 'related entries' in the 'invertebrate welfare' entry makes that organization stand out in a way that it wouldn't if all the other related entries were already listed. In any case, since CRS is not primarily doing research on invertebrate welfare, I think we can already establish that it shouldn't be listed there; if you think 'invertebrate welfare' should also be removed from 'CRS', feel free to do so; otherwise I'll make a decision once I have time to look at their research more closely.

I'll respond to the rest of your comment later. 

Comment by Pablo_Stafforini on [deleted post] 2021-05-05T17:17:47.138Z

Thanks. Okay, I'm not sure I agree we should have a separate entry, but I'm happy to defer to your view.

I think the more general tag should be called 'building effective altruism', given that this is what 80k calls it (and that they considered alternative names in the past and rejected them), so a suitable name for the narrower tag seems to be 'growing effective altruism'. However, I think this will create confusion, since the difference between 'building' and 'growing' is not immediately clear. I really don't like 'Movement growth debate': there are lots of debates within EA, and we typically cover them by having articles on the topic that is the subject of the debate, not articles on that topic followed by the word 'debate'. So if we are going to keep the article, I think we should try to find an alternative name for it.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-05-05T16:45:39.718Z · EA · GW

I did this systematically for all the relevant wikis I was aware of, back when I started working on this project in mid-2020. Of course, it's likely that I have missed some relevant entries or references.

Comment by Pablo_Stafforini on [deleted post] 2021-05-05T16:22:33.123Z

Or, to make the discussion less abstract, we could consider a concrete post that you feel is a fitting candidate for the "growth debate" tag but not so much for the "building/promoting" tag. I may not have a clear enough idea of the types of debate you have in mind.

Comment by Pablo_Stafforini on [deleted post] 2021-05-05T16:10:29.952Z

I don't have a preference between 'promoting' and 'building'. 80k originally had a page called promoting effective altruism, which they later replaced with one called building effective altruism.

Your objection above was that 'movement growth' was specifically about how much EA should grow. But 'promoting effective altruism' considers growth (as well as other forms of promotion) as an intervention or cause area, so it seems like a natural place to address that normative question, i.e. how much effort to promote or build EA should focus on growth versus, e.g., quality of outreach. Am I misunderstanding your objection?

Comment by Pablo_Stafforini on [deleted post] 2021-05-05T15:12:16.234Z

Ah, I see. I hadn't considered that distinction. In that case, how about merging this article and promoting effective altruism?

Comment by Pablo_Stafforini on [deleted post] 2021-05-05T14:59:02.918Z

I agree the scope should probably be broadened. I think the things you list can be discussed in an article with the current name, so my inclination would be to just change the description. Though if you can think of a better title, feel free to change it, too.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-05-05T14:56:41.743Z · EA · GW

I think this would be a valuable article. Perhaps the title could be refined, but at the moment I can't think of any alternatives I like. So feel free to create it, and we can consider possible name variants in the future.

Comment by Pablo (Pablo_Stafforini) on Linch's Shortform · 2021-05-05T12:40:54.770Z · EA · GW

This post may be of interest, in case you haven't seen it already.

Comment by Pablo_Stafforini on [deleted post] 2021-05-05T12:06:01.065Z

I agree. If EA Debate Championship becomes more established in the future, we should definitely have an entry for it, but I think it's currently a bit premature. In any case, kudos for the initiative.

I'll leave this up for a day or two in case anyone has further comments, and if there are no objections I'll then delete the entry. (One undesirable side effect of deleting an entry is that the comments thread is also deleted.)

Comment by Pablo (Pablo_Stafforini) on Five Books of Peter Singer which Changed My Life · 2021-05-04T12:49:36.341Z · EA · GW

I was just about to leave a follow-up comment mentioning that book as among my favorites! (Though I note that the admirable Aaron Swartz appears not to have liked it.)

Comment by Pablo (Pablo_Stafforini) on Five Books of Peter Singer which Changed My Life · 2021-05-04T10:58:06.172Z · EA · GW

I think all these books by Singer are well worth reading! You may also want to check out The Point of View of the Universe, co-written with Katarzyna de Lazari-Radek, published after Practical Ethics, where Singer says he now finds hedonistic utilitarianism more plausible than preference utilitarianism.

Comment by Pablo_Stafforini on [deleted post] 2021-05-03T23:54:06.170Z

Thoughts on what to do with this article? It strikes me as lacking a coherent focus, so I'd be inclined to delete it, perhaps moving parts of it to other related articles.

Comment by Pablo_Stafforini on [deleted post] 2021-05-03T23:49:24.556Z

The post that prompted me to create that tag was Joe Carlsmith's on illusionism. That doesn't quite fall under consciousness research, although I'm also not entirely satisfied with how these two tags carve up the relevant space. So my vote would be against merging but in favor of exploring alternatives.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-05-03T13:53:08.816Z · EA · GW

I agree with having this tag and subsuming epistemic challenge to longtermism under it. We do already have forecasting and AI forecasting, so some further thinking may be needed to avoid overlap.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-05-03T11:20:42.397Z · EA · GW

Great, I like the name.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-05-02T19:54:00.348Z · EA · GW

(Typing from my phone; apologies for any typos.)

Thanks for the reply. There are a bunch of interesting questions I'd like to discuss more in the future, but for the purposes of making a decision on the issue that triggered this thread: on reflection, I think it would be valuable to have a discussion of the arguments you describe. The reason I believe this is that existential risk is such a core topic within EA that an article on the different arguments that have been proposed for mitigating these risks is of interest even from a purely sociological or historical perspective. So even if we don't agree on the definition of EA, the relevance of moral uncertainty, or other issues, luckily that turns out not to be an obstacle to agreeing on this particular question.

Perhaps the article should simply be called arguments for existential risk prioritization and cover all the relevant arguments, including longtermist ones; we could in addition have a longer discussion of the latter in a separate article, though I don't have strong views on this. (As it happens, I have a document I wrote many years ago that briefly describes about 10 such arguments, which I could send if you are interested. I probably won't be able to work on the article within the next few weeks, though I think I will have time to contribute later.)

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-05-02T18:49:37.868Z · EA · GW

I'll respond quickly because I'm pressed with time.

  1. I don't think EA is fuzzy to the degree you seem to imply. I think the core of EA is something like what I described, which corresponds to the Wikipedia definition (a definition which is itself an effort to capture the common features of the many definitions that have been proposed).
  2. I don't understand your point about moral uncertainty. You mention the fact that Will wrote a book about moral uncertainty, or the fact that Beckstead is open to non-consequentialism, as relevant in this context, but I don't see their relevance. EA, in the sense captured by the above Wikipedia definition, is not committed to welfarism, consequentialism, or any other moral view. (Will uses the term 'welfarism', but I don't think he is using it in a moral sense, since he states explicitly that his definition is non-normative.) (ADDED: there is one type of moral uncertainty that is relevant for EA, namely uncertainty about population axiology, because it concerns the class of beings whom EA is committed to helping, at least if we interpret 'others' in "helping others effectively" as "whichever beings count morally". Relatedly, uncertainty about what counts as a person's wellbeing is also relevant, at least if we interpret 'helping' in "helping others effectively" as "improving their wellbeing". So it would be incorrect to say that EA has no moral commitments; still, it is not committed to any particular moral theory.)
  3. I agree it often makes sense to frame our concerns in terms of reasons that make sense to our target audience, but I don't see that as the role of the EA Wiki. Instead, as noted above, one key way in which the EA Wiki can add value is by articulating the distinctively EA perspective on the topic of interest. If I consult a Christian encyclopedia, or a libertarian encyclopedia, I want the entries to describe the reasons Christians and libertarians have for holding the views that they do, rather than the reasons they expect to be most persuasive to their readers.

Comment by Pablo (Pablo_Stafforini) on Ben Garfinkel's Shortform · 2021-05-02T15:36:36.161Z · EA · GW

Another interpretation of the concern, though related to your (3), is that misaligned AI may cause humanity to lose the potential to control its future. This is consistent with humanity not having (and never having had) actual control of its future; it only requires that this potential exists, and that misaligned AI poses a threat to it.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-05-02T15:28:50.994Z · EA · GW

It's not immediately obvious that the EA Wiki should focus solely on considerations relevant from an EA perspective. But after thinking about this for quite some time, I think that's the approach we should take, in part because providing a distillation of those considerations is one of the ways in which the EA Wiki could provide value relative to other reference works, especially on topics that already receive at least some attention in non-EA circles.

Comment by Pablo_Stafforini on [deleted post] 2021-05-02T15:21:00.426Z

Great, I've updated the article with your proposal (I made minor changes; feel free to revise).

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-05-02T14:54:57.891Z · EA · GW

The reasons for caring about x-risk that Toby mentions are relevant from many moral perspectives, but I think we shouldn't cover them on the EA Wiki, which should be focused on reasons that are relevant from an EA perspective. Effective altruism is focused on finding the best ways to benefit others (understood as moral patients), and by "short-termist" I mean views that restrict the class of "others" to moral patients currently alive, or who will not live in the distant future. So I think short-termist and longtermist arguments exhaust the arguments relevant from an EA perspective, and therefore that all the arguments we should cover in an article about non-longtermist arguments are short-termist arguments.

Comment by Pablo_Stafforini on [deleted post] 2021-05-02T13:23:53.596Z

Have you stumbled upon a definition or characterization of 'civilizational collapse' that we could adapt?

Comment by Pablo_Stafforini on [deleted post] 2021-05-02T13:20:07.253Z

Thanks. I've edited the sentence. In the future, we may want to note explicitly that sometimes, especially in EA circles, 'resilience' is used narrowly to include only humanity's capacity to recover from, rather than to resist, civilizational collapse (or global catastrophes more generally). See footnote 2 in Cotton-Barratt, Daniel & Sandberg 2020.

Comment by Pablo_Stafforini on [deleted post] 2021-05-02T13:12:59.361Z

Fair enough—I removed it.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-05-02T12:42:26.485Z · EA · GW

I like Conceptually, and during my early research I went through their list of concepts one by one to decide which should be covered by the EA Wiki, though I may have missed some relevant entries. Thoughts on which ones we should include that aren't already articles or listed in our list of projected entries?

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-05-02T12:36:02.081Z · EA · GW

Definitely. I was already planning to have an entry on whole brain emulation and have some notes on it... wait, I now see the tag already exists. Mmh, it seems we missed it because it was "wiki only". Anyway, I've removed the restriction now. Feel free to paste in the 'further reading' and 'related entries' sections (otherwise I'll do it myself; I just didn't want to take credit for your work).

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-05-02T12:29:42.154Z · EA · GW

I think this would be a very useful article to have. It seems challenging to find a name for it, though. How about short-termist existential risk prioritization? I am not entirely satisfied with it, but I cannot think of other alternatives I like more. Another option, inspired by the second of your proposals, is short-termist arguments for prioritizing existential risk. I think I prefer 'risk prioritization' over 'arguments for prioritizing' because the former allows for discussion of all relevant arguments, not just arguments in favor of prioritizing.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-05-02T12:22:57.005Z · EA · GW

Makes sense. I created it (no content yet).

Comment by Pablo_Stafforini on [deleted post] 2021-05-01T14:54:44.617Z

Thanks. Don't worry about formatting.

Comment by Pablo (Pablo_Stafforini) on Draft report on existential risk from power-seeking AI · 2021-04-30T22:40:20.353Z · EA · GW

From my perspective, the world just looks like the kind of world where "existential catastrophe from misaligned, power-seeking AI by 2070" is true.

Could you clarify what you mean by this? I think I don't understand what the word "true", italicized, is supposed to mean here. Are you just reporting the impression (i.e. a belief not adjusted to account for other people's beliefs) that you are ~100% certain an existential catastrophe from misaligned, power-seeking AI will (by default) occur by 2070? Or are you saying that this is what prima facie seems to you to be the case, when you extrapolate naively from current trends? The former seems very overconfident (even conditional on an existential catastrophe occurring by that date, it is far from certain that it will be caused by misaligned AI), whereas the latter looks pretty uninformative, given that it leaves open the possibility that the estimate will be substantially revised downward after additional considerations are incorporated (and you do note that you think "there's a decent chance of exciting surprises"). Or perhaps you meant neither of these things?

I guess the most helpful thing (at least to someone like me who's trying to make sense of this apparent disagreement between you and Joe) would be for you to state explicitly what probability assignment you think the totality of the evidence warrants (excluding evidence derived from the fact that other reasonable people have beliefs about this), so that one can then judge whether the discrepancy between your estimate and Joe's is so significant that it suggests "some mistake in methodology" on your part or his, rather than a more mundane mistake.

Comment by Pablo_Stafforini on [deleted post] 2021-04-30T16:37:34.145Z

Thanks for the suggestion. I've added a reminder to take a closer look.

Comment by Pablo_Stafforini on [deleted post] 2021-04-30T14:37:37.006Z

Thanks. I made a note to expand the entry. In case it isn't clear, the censoring in question refers to this.

Comment by Pablo_Stafforini on [deleted post] 2021-04-29T20:58:02.833Z

'Scope neglect' is about four times more popular on Google than 'scope insensitivity', and it is the name preferred by Wikipedia, so I would keep it.

Comment by Pablo_Stafforini on [deleted post] 2021-04-28T13:04:08.062Z

I very much agree with your points, especially the first one. Perhaps as we gain more experience, we can get a better sense of the types of articles that warrant a policy of linking to some external source by default. I can imagine that being the case e.g. for core philosophy topics (e.g. 'normative ethics') and the Stanford Encyclopedia of Philosophy.

Comment by Pablo_Stafforini on [deleted post] 2021-04-28T10:45:39.076Z

This was imported from EA Concepts. It needs significant updating. I've made a note to revise it, though feel free to make any changes you think are appropriate.

Comment by Pablo_Stafforini on [deleted post] 2021-04-28T00:41:15.197Z

Not sure how that happened. The archived version of the associated EA Concepts article doesn't have the '(see below)'. In any case, I removed it.

Comment by Pablo_Stafforini on [deleted post] 2021-04-27T19:48:49.724Z

The distinction is commonly used in academic philosophy, from which LW probably took it. On reflection, it may be worth having separate articles on instrumental and epistemic rationality, though I guess the two notions can also be discussed in the same article.

Comment by Pablo (Pablo_Stafforini) on What are your favorite examples of moral heroism/altruism in movies and books? · 2021-04-27T12:55:33.815Z · EA · GW

The documentaries on Vasili Arkhipov and Stanislav Petrov. Aptly, and despite being unrelated productions, both documentaries have the exact same title: The Man Who Saved the World.

Comment by Pablo_Stafforini on [deleted post] 2021-04-27T12:22:20.825Z

I would suggest renaming the article 'Movement growth' (omitting 'debate').

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-04-27T12:19:00.698Z · EA · GW

I agree this should be separated. I've made a note to split the articles (and rearrange the content/tags accordingly).

Comment by Pablo_Stafforini on [deleted post] 2021-04-27T12:03:01.738Z

The principle I've been following is to treat LW like I do all other sources, and cite their articles iff they seem worthy of inclusion. (I do think it's always worth checking out the LW tags, because they are more likely to pass that test than most other sources.)

Comment by Pablo_Stafforini on [deleted post] 2021-04-27T11:56:24.320Z

I'd be opposed to such a norm. Very often, Wikipedia is not the best reference on a given topic, and their articles are already extremely easy to find. I would decide whether to cite them on a case-by-case basis, with a relatively high bar for citing them.

(A rule of this sort may be more plausible with reference works of exceptional quality, such as the Stanford Encyclopedia of Philosophy, but even then it doesn't feel to me like we should have such a rule. I guess the underlying intuition is that I don't see a reason to deviate from the general principle that decisions on whether to include a particular work in the bibliography should be based on an assessment of that specific work's quality and relevance, rather than on some general rule.)

Comment by Pablo_Stafforini on [deleted post] 2021-04-26T21:36:18.447Z

I merged this article with factory farming, which had been imported from EA Concepts.

Comment by Pablo_Stafforini on [deleted post] 2021-04-26T21:22:01.463Z

Yes, that's what I meant. Thanks for creating the articles.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-04-26T11:43:42.137Z · EA · GW

I think this would be useful to have.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-04-26T11:43:00.176Z · EA · GW

My sense is that it would be desirable to have both an overview article about cognitive bias, discussing the phenomenon in general (e.g. the degree to which humans can overcome cognitive biases, the debate over how desirable it is to overcome them, etc.), as well as articles about specific instances of it.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-04-26T11:39:10.160Z · EA · GW

I agree that this should be added. I weakly prefer 'Fermi estimation'.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential tags · 2021-04-26T11:37:20.937Z · EA · GW

Is the "epistemic challenge to longtermism" something like "the problem of cluelessness, as applied to longtermism", or is it something different?

Comment by Pablo_Stafforini on [deleted post] 2021-04-26T00:40:39.653Z

Yes, thanks. I'll merge it.