EA Forum Prize: Winners for December 2020

post by Aaron Gertler (aarongertler) · 2021-02-16T08:46:30.444Z · EA · GW · 1 comment


CEA is pleased to announce the winners of the December 2020 EA Forum Prize! 

*Because Owen is a trustee of CEA, he won't receive the $500 prize associated with first place. We've bumped up the second and third-place prizes to match the typical first- and second-place prizes, respectively.

The following users were each awarded a Comment Prize ($75):

See here [? · GW] for a list of all prize announcements and winning posts.

What is the EA Forum Prize?

Certain posts and comments exemplify the kind of content we most want to see [? · GW] on the EA Forum. They are well-researched and well-organized; they care about informing readers, not just persuading them.

The Prize is an incentive to create content like this. But more importantly, we see it as an opportunity to showcase excellent work as an example and inspiration to the Forum's users.

About the winning posts and comments

Note: I write this section in first person based on my own thoughts, rather than by attempting to summarize the views of the other judges.

"Patient vs urgent longtermism" has little direct bearing on giving now vs later [EA · GW]

This post is a response to having heard multiple people express something like "I'm persuaded by the case for patient longtermism, so I want to save money rather than give now" [...] there is no direct implication that "patient longtermists" should be less willing to spend money now than "urgent longtermists".

I love that the very first sentence of the post explicitly states its purpose, and that the purpose is to make progress in a long-running discussion with practical implications for hundreds or thousands of people in the community. Posts in that category tend to be among the very best on the Forum.

Other things I like:

My mistakes on the path to impact [EA · GW]

Doing a lot of good has been a major priority in my life for several years now. Unfortunately I made some substantial mistakes which have lowered my expected impact a lot, and I am on a less promising trajectory than I would have expected a few years ago. In the hope that other people can learn from my mistakes, I thought it made sense to write them up here!

This is now the highest-karma Forum post of all time — this isn’t a perfect measure of quality, but it’s still a good indicator of how many people found the author’s story useful or insightful. Another indicator: The many excellent comments, some of which could have been strong posts unto themselves.

Things I appreciated in this post:

Health and happiness research topics — Part 1: Background on QALYs and DALYs [EA · GW]

HALYs have a number of major shortcomings in their current form. In particular, they:

  1. neglect non-health consequences of health interventions
  2. rely on poorly-informed judgements of the general public
  3. fail to acknowledge extreme suffering (and happiness)
  4. are difficult to interpret, capturing some but not all spillover effects
  5. are of little use in prioritising across sectors or cause areas

This can lead to inefficient allocation of resources, in healthcare and beyond.

This is a deeply-researched introduction to a crucial question in EA, and one that I expect to recommend to many, many people. In college, I read a long book on the science of subjective well-being; if I could, I’d take back the time I spent on that, read every word of this post, and end up equally well-informed with hours to spare.

Features of the post that I liked:

Big List of Cause Candidates [EA · GW]

In the last few years, there have been many dozens of posts about potential new EA cause areas, causes and interventions. Searching for new causes seems like a worthy endeavour, but on their own, the submissions can be quite scattered and chaotic. Collecting and categorizing these cause candidates seemed like a clear next step.

This certainly is a big list of cause candidates. Mostly, it seems really good that someone has produced a near-comprehensive list that can be used for reference (I’ve linked to it a few times already). Specific things that are nice about the way the list was made:

Improving Institutional Decision-Making: a new working group [EA · GW]

This post starts with a summary section that includes bold text. It was love at first sight. In fact, I’m going to just write the bold text and nothing else:

Recent and planned efforts to develop improving institutional decision-making (IIDM) as a cause area [...] IIDM remains underexplored [...] a working definition of IIDM [...] a new meta initiative aiming to disentangle and make intellectual progress on IIDM [...] you can get involved.

This is a good summary of what the post is about! Excellent work, team.

...anyway, this is exactly what I’d hope to see in a post introducing a new project. There are links to related resources and organizations, descriptions of the project’s plans and goals, and specific requests aimed at people who want to help (questions to answer, a form to fill out, and a description of the experience they’d find most valuable). 

As a bonus, the post includes the best breakdown I’ve seen of what “improving institutional decision-making” might actually entail (from aligning institutions’ values to improving the accuracy of their forecasts).

The winning comments

I won’t write up an analysis of each comment. Instead, here are my thoughts on selecting comments for the prize [EA · GW]. 

I recently made an update to the linked post — it might be worth reading again, even if you’ve seen it before.

The voting process

The current prize judges are:

All posts published in the titular month qualified for voting, save for those in the following categories: 

Voters recused themselves from voting on posts written by themselves or their colleagues. Otherwise, they used their own individual criteria for choosing posts, though they broadly agreed with the goals outlined above.

Judges each had ten votes to distribute between the month’s posts. They also had a number of “extra” votes equal to [10 - the number of votes made last month]. For example, a judge who cast 7 votes last month would have 13 this month. No judge could cast more than three votes for any single post.
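As a sketch, the allotment rule and per-post cap described above can be expressed in a few lines of Python (the function names and ballot format are my own, not part of the official Prize rules):

```python
BASE_VOTES = 10      # each judge's standard monthly allotment
PER_POST_CAP = 3     # no judge may cast more than three votes for one post

def votes_available(cast_last_month):
    """Votes a judge may cast this month: the base allotment plus
    any votes left unused last month."""
    return BASE_VOTES + (BASE_VOTES - cast_last_month)

def ballot_is_valid(ballot, cast_last_month):
    """Check a ballot (mapping of post title -> votes) against both limits."""
    total = sum(ballot.values())
    return (total <= votes_available(cast_last_month)
            and all(v <= PER_POST_CAP for v in ballot.values()))

# The example from the text: a judge who cast 7 votes last month has 13 now.
print(votes_available(7))  # 13
```

So a judge who used their full allotment last month simply gets the base ten votes again, while unused votes roll over for one month.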

The winning comments were chosen by Aaron Gertler, though other judges had the chance to nominate comments and veto comments they didn’t think should win.


If the Prize has changed the way you read or write on the Forum, or you have an idea for how we could improve it, please leave a comment or contact me.



comment by NunoSempere · 2021-02-16T11:04:08.092Z · EA(p) · GW(p)

Nice! One other cool thing about the Big List of Cause Candidates is that people have been coming up with suggestions, and I have been updating [EA(p) · GW(p)] the list as they do so.

Incidentally, the Big List of Candidates post was selected as a project by using a very rudimentary forecasting/evaluation system, similar to the ones here [EA · GW] and here [EA · GW].  If you want to participate in that kind of thing by suggesting, carrying out or evaluating potential projects, you can sign up here.

In particular, as a novelty, I assigned a 50% chance that it would in fact get an EA forum prize.

Note that the forecast assumed that I was competing against fewer posts, but also that there would be fewer prizes, so the errors happily cancelled out. 

I think that that kind of forecast/comment:

  • Makes me look arrogant/not humble/unvirtuous, at least to some people. In particular, I strongly take the stance that the characters in In praise of unhistoric heroism [EA · GW] who are ~"contented by sweeping offices instead of chasing the biggest projects they can find" are in fact making a mistake by not asking the question "but what are the most valuable things I could be doing?" (or, by using a forecasting system/setup to explore that question)
  • Is still really interesting because I think that forecasting funding decisions might be a workable method in order to amplify [EA · GW] them [EA · GW], which is particularly valuable given that EA might be vetting constrained [EA · GW]. Ideally I (or other forecasters) would get to do that with EA funds or OP grants, but I thought that the forum prize could be a nice beginning.

The other posts I thought were particularly strong are:

I correctly guessed My mistakes on the path to impact [EA · GW] and "Patient vs urgent longtermism" has little direct bearing on giving now vs later [EA · GW].