Mogensen & MacAskill, 'The paralysis argument' 2021-07-19T14:04:15.801Z
[Future Perfect] How to be a good ancestor 2021-07-02T13:17:15.686Z
Anki deck for "Some key numbers that (almost) every EA should know" 2021-06-29T22:13:09.233Z
Christian Tarsney on future bias and a possible solution to moral fanaticism 2021-05-06T10:39:38.949Z
Spears & Budolfson, 'Repugnant conclusions' 2021-04-04T16:09:30.922Z
AGI Predictions 2020-11-21T12:02:35.158Z
UK to host human challenge trials for Covid-19 vaccines 2020-09-23T14:45:01.278Z
Yew-Kwang Ng, 'Effective Altruism Despite the Second-best Challenge' 2020-05-09T16:47:36.346Z
Phil Trammell: The case for ignoring the world’s current problems — or how becoming a ‘patient philanthropist’ could allow you to do far more good 2020-03-17T17:00:14.108Z
Good Done Right conference 2020-02-04T13:21:02.903Z
Cotton‐Barratt, Daniel & Sandberg, 'Defence in Depth Against Human Extinction' 2020-01-28T19:24:48.033Z
Announcing the Bentham Prize 2020-01-21T22:23:16.860Z
Pablo_Stafforini's Shortform 2020-01-09T15:10:48.053Z
Dylan Matthews: The case for caring about the year 3000 2019-12-18T01:07:49.958Z
Are comment "disclaimers" necessary? 2019-11-23T22:47:01.414Z
Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term' 2019-11-05T20:24:00.445Z
A wealth tax could have unpredictable effects on politics and philanthropy 2019-10-31T13:05:28.421Z
Schubert, Caviola & Faber, 'The Psychology of Existential Risk' 2019-10-22T12:41:53.542Z
How this year’s winners of the Nobel Prize in Economics influenced GiveWell’s work 2019-10-19T02:56:46.480Z
A bunch of new GPI papers 2019-09-25T13:32:37.768Z
Andreas Mogensen's "Maximal Cluelessness" 2019-09-25T11:18:35.651Z
'Crucial Considerations and Wise Philanthropy', by Nick Bostrom 2017-03-17T06:48:47.986Z
Effective Altruism Blogs 2014-11-28T17:26:05.861Z
The Economist on "extreme altruism" 2014-09-18T19:53:52.287Z
Effective altruism quotes 2014-09-17T06:47:27.140Z


Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential EA Wiki entries · 2021-07-29T15:03:11.304Z · EA · GW

Here's what I did:

  • I renamed direct democracy to ballot initiative.
  • I added two new entries: democracy and safeguarding liberal democracy. The first covers any posts related to democracy, while the second covers specifically posts about safeguarding liberal democracy as a potentially high-impact intervention.

I still need to do some tagging and add content to the new entries.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential EA Wiki entries · 2021-07-28T22:54:09.678Z · EA · GW

I agree. I'll deal with this tomorrow (Thursday), unless anyone wants to take care of it.

Comment by Pablo_Stafforini on [deleted post] 2021-07-27T15:11:59.867Z

Further to my previous message: what do you think about creating a long-range forecasts tag for posts that contain such forecasts, and reserving long-range forecasting for posts that discuss the phenomenon? I don't have a clear enough sense of how this problem manifests itself in other articles, so I'm not proposing a general solution for the time being. But this seems like an adequate way to address this particular manifestation.

Comment by Pablo_Stafforini on [deleted post] 2021-07-26T12:44:21.636Z

Do people have thoughts on when we should regard a request as "closed" when the post contains no explicit deadline or other relevant criteria? Some options:

  1. Close the request after a certain number of weeks or months have elapsed since the post's publication.
  2. Do what feels intuitive in each case.
  3. Never close requests that do not naturally expire.

I haven't been engaging much with these sorts of posts, and don't have a strong preference for one of these options over the others.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential EA Wiki entries · 2021-07-26T12:04:39.607Z · EA · GW

What do you think about the policy change entry? One option is to rename it to just policy and use it as the "mega-tag" you propose.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential EA Wiki entries · 2021-07-26T12:00:33.157Z · EA · GW

I agree that the concept of an open society as you characterize it has a clear connection to EA. My sense is that the term is commonly used to describe something more specific, closely linked to the ideas of Karl Popper and the foundations of George Soros (Popper's "disciple"), in which case the argument for adding a Wiki entry would weaken. Is my sense correct? I quickly checked the Wikipedia article, which broadly confirmed my impression, but I haven't done any other research.

Comment by Pablo_Stafforini on [deleted post] 2021-07-24T14:32:03.101Z

This is a general problem: for many entries, posts can be potentially relevant by virtue of either discussing the topic of the entry or exemplifying the phenomenon the entry describes. So we probably want to think about possible general ways to deal with this problem rather than solutions for this specific instance. Still, it seems fine to discuss that here. I don't think I have any insights to offer off the top of my head, but will try to think about this a bit more later.

Comment by Pablo_Stafforini on [deleted post] 2021-07-24T14:24:39.898Z

Cool. I've now "followed" the author on Google Scholar to be alerted whenever he publishes something new.

Comment by Pablo (Pablo_Stafforini) on Anki deck for "Some key numbers that (almost) every EA should know" · 2021-07-20T23:02:02.039Z · EA · GW

The content of those cards also represented the biggest update for me. I wouldn't have guessed that the truth was roughly "two thirds of neurons are invertebrate neurons, one third of neurons are fish neurons".

Comment by Pablo (Pablo_Stafforini) on [Podcast] Suggest a question for Jeffrey Sachs · 2021-07-20T12:31:43.879Z · EA · GW

FYI: the episode is now published.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential EA Wiki entries · 2021-07-19T13:20:07.670Z · EA · GW

Either looks good to me. I agree that this is worth having.

Comment by Pablo (Pablo_Stafforini) on Anki deck for "Some key numbers that (almost) every EA should know" · 2021-07-18T12:37:23.843Z · EA · GW

I agree that's one way in which the estimate may be misleading. The author lists this and other ways in a dedicated section. I revised the note to add a link to that section.

Comment by Pablo (Pablo_Stafforini) on Arne's Shortform · 2021-07-17T12:15:20.485Z · EA · GW

The repugnant conclusion is presented as an objection to certain views in population axiology. The claim is that a possible world containing sufficiently many morally relevant beings just above neutrality is intrinsically better than a possible world with arbitrarily many beings arbitrarily happy. The claim is not that these worlds could become actual, so empirical considerations of the sort you describe aren't relevant for assessing the force of the objection.

Put differently, theories like total utilitarianism imply that the "repugnant" world would be better if it existed, and the objection is that this implication is implausible. The implausibility would remain even if it were shown that the "repugnant" world cannot exist.

Comment by Pablo (Pablo_Stafforini) on Anki deck for "Some key numbers that (almost) every EA should know" · 2021-07-15T23:53:51.666Z · EA · GW

Done. This will be reflected when I release the next version, probably in a few weeks.

Comment by Pablo (Pablo_Stafforini) on MichaelA's Shortform · 2021-07-15T12:28:19.834Z · EA · GW

Turning the EA Wiki into a (huge) Anki deck is on my list of "Someday/Maybe" tasks. I think it might be worth waiting a bit until the Wiki is in a more settled state, but otherwise I'm very much in favor of this idea.

There is an Anki deck for the old LW wiki. It's poorly formatted and too coarse-grained (one note per article), and some of the content is outdated, but I still find it useful, which suggests to me that a better deck of the EA Wiki would provide considerable value.

Comment by Pablo_Stafforini on [deleted post] 2021-07-12T15:56:25.737Z

(A complication is that there are other entries that use the word 'cause', such as cause candidates, criticism of effective altruism causes, less discussed causes, and others. So perhaps we could retain the word 'cause' for this entry, but still have the section on terminology.)

Comment by Pablo_Stafforini on [deleted post] 2021-07-12T15:51:55.300Z

Looking at this old article from EA Concepts made me realize that we may want to have either an entry or a section discussing a cluster of terms that are often used interchangeably or to express related concepts, such as cause, problem, area, intervention, and program. In connection to this, we may also want to rename this entry to just prioritization to remain neutral on the relevant unit of analysis. Let me know if you have any thoughts on this; currently, my very tentative plan is to

  1. Rename this entry.
  2. Have the terminological discussion as a section within this entry.

Comment by Pablo_Stafforini on [deleted post] 2021-07-08T16:10:52.374Z

I'm not satisfied with the name for this entry, but I wasn't able to come up with a better one. Regardless of what we call it, I think we should have an article on the topic. If anyone has naming suggestions, please leave them below.

Comment by Pablo_Stafforini on [deleted post] 2021-07-08T14:36:01.077Z

Thanks, and sorry for the delay.

I've now merged the two articles. The other article should be deleted; but since deleting an entry also removes its associated comments thread, I will leave it up for a week or so, in order to allow others to decide if they want to preserve some of the content in those comments.

(I don't have a strong opinion on whether the article should be called moral psychology, psychology of effective altruism, or something else. But it looks like we should have just one article, regardless of what we call it.)

Comment by Pablo (Pablo_Stafforini) on You are allowed to edit Wikipedia · 2021-07-07T12:34:50.077Z · EA · GW

I agree with this advice.

I think a simple way to get involved with Wikipedia is to "adopt" an article on an important topic you are familiar with but which is currently covered inadequately. This will allow you to see how your changes are received, develop a relationship with other editors who contribute regularly on that page, and experience the satisfaction of seeing the article (hopefully) improve over time in part thanks to your efforts.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential EA Wiki entries · 2021-07-03T12:18:42.603Z · EA · GW

Yeah, I made a note to create an entry on this topic soon after Luke published his post. Feel free to create it, and I'll try to expand it next week (I'm a bit busy right now).

Comment by Pablo (Pablo_Stafforini) on EA syllabi and teaching materials · 2021-07-03T00:49:20.220Z · EA · GW

Thanks! I added the first one to this list.

Comment by Pablo (Pablo_Stafforini) on Christian Tarsney on future bias and a possible solution to moral fanaticism · 2021-06-30T18:34:33.271Z · EA · GW

I agree with your suggestions. Not sure I have anything insightful to add, except perhaps that the initial, non-official post could also be updated to point to the subsequent official release, to better integrate the two posts? One way to do this is to replace the "linkpost" external link with a link to the EA Forum post by 80k. So e.g. for this particular post, one would replace the linkpost link with [official-80k-post-announcing-tarsney-episode]/. This would require asking the authors of the original post to update those links. (I'd assume everyone would be okay with it, but it may add a minor layer of friction for you.)

Comment by Pablo (Pablo_Stafforini) on Anki deck for "Some key numbers that (almost) every EA should know" · 2021-06-30T12:56:05.021Z · EA · GW

I second JP's recommendation. A couple of additional good resources are Michael Nielsen's augmenting long-term memory and Gwern's spaced repetition for efficient learning.

Comment by Pablo (Pablo_Stafforini) on Anki deck for "Some key numbers that (almost) every EA should know" · 2021-06-30T12:33:50.882Z · EA · GW

Thanks, I'll try to add these shortly.

Comment by Pablo (Pablo_Stafforini) on Anki deck for "Some key numbers that (almost) every EA should know" · 2021-06-30T12:27:42.332Z · EA · GW

Yes, you can read the contents here. This is the org-mode file I use to generate the Anki deck (with the anki-editor package), so it will always reflect the most recent version.

(I've edited the original post to add this information.)

Comment by Pablo (Pablo_Stafforini) on Shelly Kagan - readings for Ethics and the Future seminar (spring 2021) · 2021-06-29T17:56:28.335Z · EA · GW

In case it isn't clear, this was published a couple of days ago here. I mention it because the original blog post lists other courses that may also be of interest.

Comment by Pablo_Stafforini on [deleted post] 2021-06-29T13:56:58.161Z


Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential EA Wiki entries · 2021-06-28T14:41:20.398Z · EA · GW

Hi nil,

I've edited the FAQ to make our inclusion criteria more explicit.

Comment by Pablo (Pablo_Stafforini) on Issues with Using Willingness-to-Pay as a Primary Tool for Welfare Analysis · 2021-06-28T13:53:42.760Z · EA · GW

Thank you for this informative answer!

I vaguely recall hearing an economist say that welfare economics ceased to be part of the undergraduate curricula in American universities at some point in the past. I wonder if it might be worth tracing the history of this development and examining it as a potentially instructive case study. Quick googling uncovers an interview with Amartya Sen in which the Indian economist recommends Tony Atkinson's The strange disappearance of welfare economics as the "best article on that sad neglect".

Comment by Pablo (Pablo_Stafforini) on What are some key numbers that (almost) every EA should know? · 2021-06-28T13:34:46.319Z · EA · GW

Thanks! It hadn't occurred to me to use the graph as the figure, but that's a good idea. On reflection, we could perhaps use "image occlusion" for this or other questions.

Comment by Pablo (Pablo_Stafforini) on What are some key numbers that (almost) every EA should know? · 2021-06-28T13:31:24.062Z · EA · GW


Comment by Pablo (Pablo_Stafforini) on A full syllabus on longtermism · 2021-06-28T00:24:53.409Z · EA · GW

I have compiled a list of courses on longtermism and related topics, which includes both this one and the one taught by Prof. Shelly Kagan, as well as a number of other courses.

Comment by Pablo (Pablo_Stafforini) on What are some key numbers that (almost) every EA should know? · 2021-06-27T22:35:57.505Z · EA · GW

Added most of these, but would appreciate suggestions for the following:

  • how many big power transitions ended in war
  • roughly how much they value their own time

Comment by Pablo (Pablo_Stafforini) on What are some key numbers that (almost) every EA should know? · 2021-06-27T22:16:49.612Z · EA · GW

We've now turned most of these into Anki cards, but I'd appreciate pointers to reliable sources or estimates for the following:

  • Net present value of expected total EA-aligned capital by cause area/worldview
  • Number of people working on certain cause areas such as AI safety, GCBR reduction, nuclear security, ...
  • How much total compute there is, and how it's distributed (e.g. supercomputers vs. gaming consoles vs. personal computers vs. ...)
  • How much EAs should discount future financial resources
  • Size of the EA community

For others, I have the relevant information (or know where to find it), but am not sure what numbers should be used to express it:

  • The 'Great Decoupling' of labor productivity from jobs + wages in the US
  • Some key stats about the distribution of world income and how it has changed, e.g., Milanovic's "elephant graph" and follow-ups
  • Some key stats about impact distributions where we have them, e.g., on how heavy-tailed the DCP2 global health cost-effectiveness numbers are

(This is addressed to anyone in a position to help, not just to Max. Thanks.)

Comment by Pablo (Pablo_Stafforini) on Issues with Using Willingness-to-Pay as a Primary Tool for Welfare Analysis · 2021-06-27T13:41:37.237Z · EA · GW

I would like someone with a background in both economics and EA to offer an articulation of the best defense of using willingness-to-pay in cost-benefit analysis. My experience is that when people raise this objection, many economists (e.g. Robin Hanson) respond by saying that the critics haven't really understood the methods of economics. But I have never seen a clear explanation of why the objection is mistaken.

I think it is also worth noting that the economists themselves do not appear to apply willingness-to-pay consistently. John Broome (an economist by training) explains (Climate Matters, pp. 144–145):

If people are richer in the future, that means additional commodities bring less benefit on average to future people than the same commodities bring to present people. A kilo of rice in one hundred years will contribute less on average to the well-being of the people who eat it than a kilo contributes today. This is a good reason for discounting future commodities. Ironically, although cost-benefit analysts generally ignore the diminishing marginal benefit of money when they are aggregating value across people at a single date, their main case for discounting future commodities is founded on this diminishing marginal benefit. 

Comment by Pablo_Stafforini on [deleted post] 2021-06-27T12:42:15.195Z

Thanks for creating this entry! Coincidentally, I noticed the relevant episode of the Neoliberal Podcast on my podcast app feed yesterday and wondered if we should have an article on this topic.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential EA Wiki entries · 2021-06-26T14:59:29.879Z · EA · GW

I'm in favor. Very weak preference for alternative foods until resilient foods becomes at least somewhat standard.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential EA Wiki entries · 2021-06-25T20:26:36.330Z · EA · GW

I now feel that a number of unresolved issues related to the Wiki ultimately derive from the fact that tags and encyclopedia articles should not both be created in accordance with the same criterion. Specifically, it seems to me that a topic that is suitable for a tag is sometimes too specific to be a suitable topic for an article.

I wonder if this problem could be solved, or at least reduced, by allowing article section headings to also serve as tags. I think this would probably be most helpful for articles that cover particular disciplines, such as psychology or computer science. Here it seems that it makes most sense to have a single article covering each discipline, yet multiple tags discussing different aspects of the discipline, such as research on that discipline, careers in that discipline, or applications of that discipline. Currently we take a hybrid approach, sometimes having entries for the discipline as a whole and sometimes for specific aspects of it.

Another advantage of allowing article sections to be used as tags is that some tags are currently associated with a very large number of posts. This suggests that a more fine-grained taxonomy of tags would organize the contents of the Forum better, and allow users to find the material they want more easily.

A complication is that not all section headings will be suitable for tags. This issue could be solved in various ways. For example, the search field that opens when the user clicks on 'Add tag' could by default only show the tags corresponding to article titles, just as it does currently. However, the user could be given the choice of expanding a tag to display the corresponding headings, allowing them to select any of these. Perhaps headings already selected as tags by previous users could be shown by default in future searches.

I'm not particularly confident that this is a good idea. But it does seem like something at least worth discussing further.

Comment by Pablo (Pablo_Stafforini) on The EA Forum Editing Festival has begun! · 2021-06-25T18:13:42.920Z · EA · GW

I have now updated the Tag Portal so that it reflects the current state of the Wiki, and will henceforth add to it any newly created entries, so it should always remain up-to-date.

Comment by Pablo (Pablo_Stafforini) on Propose and vote on potential EA Wiki entries · 2021-06-24T15:16:45.923Z · EA · GW

Agree we should have such an entry (I had it in my list of planned articles).

Comment by Pablo (Pablo_Stafforini) on What are some key numbers that (almost) every EA should know? · 2021-06-24T15:15:54.075Z · EA · GW

I was thinking of announcing it in a separate post, given how much interest this has attracted.

Comment by Pablo (Pablo_Stafforini) on What are some key numbers that (almost) every EA should know? · 2021-06-24T15:14:44.817Z · EA · GW

Hi, yes, we will have a 'source' field.

I say 'we' because I'm doing this in collaboration with a user who kindly volunteered to help. I think we should be done by the end of the week.

Comment by Pablo_Stafforini on [deleted post] 2021-06-23T14:20:57.249Z

We already have entries on prize and Forum Prize. I suggest renaming the former to inducement prize and deleting this one.

Comment by Pablo_Stafforini on [deleted post] 2021-06-22T12:42:28.358Z

I don't see much justification for this entry, given that we already have articles on global catastrophic risk, existential risk, s-risk, and ethics of existential risk.

EDIT: Ah, I now see that the entry was created simultaneously with this question about moral catastrophes in history, which was however not tagged with it. This makes it clearer to me what the author had in mind as the topic of the entry. I'm still unsure we should have an entry on 'moral catastrophes' as such, though perhaps we could have some kind of entry about historical catastrophes. If the emphasis is on catastrophes that were overlooked at the time they occurred, another option is to discuss them in the Cause X or moral circle expansion entries.

Comment by Pablo (Pablo_Stafforini) on A ranked list of all EA-relevant documentaries, movies, and TV series I've watched · 2021-06-21T12:59:07.639Z · EA · GW

Another relevant film is The Day After, which was seen by 100 million Americans—"the most-watched television film in the history of the medium" (Hänni 2016)— and was instrumental in changing Reagan’s nuclear policy.

  • “President Ronald Reagan watched the film several days before its screening, on November 5, 1983. He wrote in his diary that the film was "very effective and left me greatly depressed," and that it changed his mind on the prevailing policy on a "nuclear war". The film was also screened for the Joint Chiefs of Staff. A government advisor who attended the screening, a friend of Meyer's, told him "If you wanted to draw blood, you did it. Those guys sat there like they were turned to stone." Four years later, the Intermediate-Range Nuclear Forces Treaty was signed and in Reagan's memoirs he drew a direct line from the film to the signing.” (Wikipedia)
  • “Director Meyer and writer Hume produced The Day After to support nuclear disarmament with the ‘grandiose notion that this movie would unseat Ronald Reagan’, and the nuclear freeze groups heavily exploited the ABC movie as a propaganda.” (Hänni 2016)

Comment by Pablo_Stafforini on [deleted post] 2021-06-20T12:39:05.131Z

I vaguely share your feeling that posts "count for more" than comments, though I can't think of a better heuristic than the one I proposed, so for simplicity I just used the text in my previous comment. Feel free to refine it.

I also removed the paragraph referring to the upper bound, and revised the paragraph that followed it, for unrelated reasons. (I think it's something someone added to the Google Doc I circulated, which I didn't initially read very carefully. As it was worded, the paragraph gave readers advice on how to write posts, rather than on how to tag those posts, which should be the focus of the Tagging Guidelines.)

Comment by Pablo_Stafforini on [deleted post] 2021-06-19T16:16:33.249Z


The general tagging principle is that a tag should be added to a post when the post, including its comments thread, contains a substantive discussion of the tag's topic. As a very rough heuristic, to count as "substantive" a discussion has to be the primary focus of at least one paragraph or five sentences in the post or the associated comments.

This assumes we want to use the same heuristic for posts and comments, though your final bullet point seems to implicitly question this assumption.

(If we adopt this revision, other parts of the document may also need to be revised. For example, one can no longer infer an upper bound from the heuristic and the length of the post.)

Comment by Pablo (Pablo_Stafforini) on A ranked list of all EA-relevant documentaries, movies, and TV series I've watched · 2021-06-19T12:32:20.664Z · EA · GW

Ozy Brennan:

I have recently watched The Story of Louis Pasteur, a 1936 movie about, well, Louis Pasteur. I am not sure I recommend it artistically. It’s weirdly paced and its occasional gestures towards characterization only make it more obvious how much everyone in the story is a cardboard cutout. However, I have never seen a more effective altruist movie in my life.

Comment by Pablo (Pablo_Stafforini) on What are some key numbers that (almost) every EA should know? · 2021-06-19T12:16:57.747Z · EA · GW

Wow, I didn't know about this feature.