EA Forum Prize: Winners for January 2021
Post by Aaron Gertler (aarongertler) · 2021-04-02
Meta: I've fallen a bit behind in writing these reports, but we're in the process of catching up.
CEA is pleased to announce the winners of the January 2021 EA Forum Prize!
- In first place (for a prize of $500): “Why I'm concerned about Giving Green,” by alexrjl.
- Second place ($0*): “AMA: Ajeya Cotra, researcher at Open Phil,” by Ajeya Cotra.
- Third place ($300): “Yale EA's Fellowship Application Scores were not Predictive of Eventual Engagement,” by Thomas Woodside and Jessica McCurdy.
- Fourth place ($300): “What does it mean to become an expert in AI Hardware?,” by Christopher Phenicie.
- Fifth place ($300): “Why "cause area" as the unit of analysis?,” by Issa Rice.
*Because Ajeya works for Open Philanthropy, a major funder of CEA, she won't receive the $300 prize for second place. Instead, we've distributed that money evenly among the third-, fourth-, and fifth-place winners (who now receive $300 instead of $200).
The following users were each awarded a Comment Prize ($75):
- Asya Bergal on AI hardware forecasting
- Chi on the use of qualifying statements
  - This was a Shortform post, which falls halfway between a post and a comment in my mind. Judges are able to vote for these, but in practice they tend to be obscured by frontpage posts during the voting round. I thought this was a reasonable candidate for a “comment prize”, and may select other Shortform posts for comment prizes in the future.
- JamesOz on social activism around climate change
- Johannes Ackva on effective climate philanthropy
- Ozzie Gooen on the usefulness of large organizations
What is the EA Forum Prize?
Certain posts and comments exemplify the kind of content we most want to see on the EA Forum. They are well-researched and well-organized; they care about informing readers, not just persuading them.
The Prize is an incentive to create content like this. But more importantly, we see it as an opportunity to showcase excellent work as an example and inspiration to the Forum's users.
About the winning posts and comments
Note: I write this section in first person based on my own thoughts, rather than by attempting to summarize the views of the other judges.
Why I'm concerned about Giving Green
“Giving Green’s evaluation process involves substantial evidence collection and qualitative evaluation, but eschews quantitative modelling, in favour of a combination of metrics which do not have a simple relationship to cost-effectiveness [...] in every case where I investigated an original recommendation made by Giving Green, I was concerned by the analysis to the point where I could not agree with the recommendation.”
(I left feedback on several drafts of this post before it was published.)
This is one of my favorite Forum posts of the year, perhaps of all time. The author read an organization’s work, saw something that surprised him, and decided to write up his findings. This led to an extremely informative public conversation and substantial changes from the organization in question — and it’s unclear whether any of this would have happened without the post.
A few of the many things I like about the post:
- Alex calls out the potential of Giving Green before starting in on criticism. He makes it clear that he wants the organization to improve, rather than shut down — and frames his criticism as suggestions for improvement, rather than indictments.
- He justifies the force of his critique by pointing out cases in which Giving Green’s work has been favorably reviewed. The organization’s flaws seem more serious once we realize that they’ve already escaped the notice of reviewers at The Atlantic and Vox.
- He took the time, before publishing, to discuss his issues with Giving Green directly. This lets him present their side of the story (he also takes care to point out where he agrees with them). I’ve seen other critical posts get bogged down in confusion when the target of criticism disputes the original claim. If you want to write a post that criticizes someone, I recommend following Alex’s lead and talking to them first!
- That said, if adding that extra step would be a barrier to publishing, just publish! This is meant as a recommendation, not a rule.
AMA: Ajeya Cotra, researcher at Open Phil
I can’t really summarize this post, since the topics were (of course) highly varied. Instead, I’ll link to some of my favorite answers:
- Three types of disagreements that tend to crop up around AI timelines
- Reasons that someone might believe in shorter timelines than Ajeya
- Progress and bottlenecks in the field of AI forecasting
- Ajeya’s career trajectory, and what she likes and dislikes about her work
One common feature of Ajeya’s answers: when she encounters someone whose beliefs differ from hers, she always seems to ask how that person’s models, definitions, or values might differ from her own in ways that could explain the disagreement, rather than assuming they must be wrong about something. This isn’t uncommon in Forum posts, but I think Ajeya does it more explicitly than most people, and I found it really valuable to see her think out loud.
Yale EA's Fellowship Application Scores were not Predictive of Eventual Engagement
“After we noticed a couple instances of people who had barely been accepted to the fellowship becoming extremely engaged with the group, we decided to do an analysis of our scoring of applications and eventual engagement. We found no correlation.
“We think this shows the possibility that some of the people we have rejected in the past could have become extremely engaged members, which seems like a lot of missed value.”
When the leadership of Yale EA found that their fellowship selection criteria hadn’t achieved what they’d hoped for, they didn’t just change the criteria. They also took the time to rescind their previous advice in public (so that other groups wouldn’t make the same assumptions), and to explain their methods in enough detail that other groups could easily replicate them.
I think that posts which do this tend to be among the Forum’s most valuable — people are more likely to take action when you explain exactly what they’ll have to do.
I also like that the authors took the time to lay out their past rationale, and didn’t completely abandon it: “We still think these are good and important reasons for keeping the fellowship smaller. However, we are currently thinking that the possibility of rejecting an applicant who would have become really involved outweighs these concerns.”
Emphasis mine on that last point, because I think the best posts are generally those which:
- Acknowledge the positive aspects of multiple perspectives/theories/etc.
- Make it clear that a positive thing being outweighed doesn’t mean it doesn’t matter.
What does it mean to become an expert in AI Hardware?
“Recently, I have been trying to figure out what to do with my career, and came across this 80,000 Hours post that mentioned AI hardware. I figured I might be able to work in this area, so I’ve spent a little time (~100 hours) looking into this topic. This post is a summary of my initial takeaways from exploring this, as well as an open invitation to comment/critique/collaborate on my personal career plans.”
I really like the approach Christopher took for this post — he noticed that an organization had some speculative thoughts on a particular area, and decided to see what he could learn about that area, not only for the sake of his own career but to help others who might go down the same path.
I don’t have much to say past that — he took a good idea and executed well, creating a well-organized post that clearly emerged from a mountain of research.
...actually, one more thing I really liked was the fifth section, where he not only speculates on potential career paths in AI hardware but also seeks out “role models”, people who have actually taken those paths. In my experience, a lot of theory and supposition is often less valuable than having real-world examples of what something actually looks like, because reality has a surprising amount of detail.
Why "cause area" as the unit of analysis?
“I came away from the above investigation feeling pretty confused about the nature of cause areas. Given just a description of reality, it didn't seem obvious to me to carve things out into "cause areas" and to take "cause area" as the basic unit of analysis/prioritization.”
In general, Issa’s concerns about the many meanings of “cause area” seemed highly relevant to what I often see people do on the Forum and elsewhere — throwing around the term in a bunch of different ways that make it difficult to compare “cause areas”, and make questions like “is X a good cause area to work in?” hard to answer.
I also liked this section, in which Issa shares an additional concern about the widespread use of a hard-to-define concept:
“In terms of public discourse, people are actually using the concept of "cause area" to do further thinking. If the idea of a cause area is not a reliable one, then all of this further thinking is done on a shaky foundation, which seems worrying.”
Finally, I appreciated the way Issa approached his question with genuine curiosity (using the format of a question rather than a standard post, and asking for links to prior discussions). I didn’t get the sense that he was dedicated to some alternative definition — just that he was trying to explore the territory around a term.
The voting process
The current prize judges are:
- Aaron Gertler
- Larks
- Luisa Rodriguez
- Peter Hurford
- Rob Wiblin
- Vaidehi Agarwalla
All posts published in the titular month qualified for voting, save for those in the following categories:
- Procedural posts from CEA and EA Funds (for example, posts announcing a new application round for one of the Funds)
- Posts linking to others’ content with little or no additional commentary
- Posts which got fewer than five additional votes after being posted (not counting the author’s automatic vote)
Voters recused themselves from voting on posts written by themselves or their colleagues. Otherwise, they used their own individual criteria for choosing posts, though they broadly agreed with the goals outlined above.
Judges each had ten votes to distribute between the month’s posts. They also had a number of “extra” votes equal to [10 - the number of votes made last month]. For example, a judge who cast 7 votes last month would have 13 this month. No judge could cast more than three votes for any single post.
The winning comments were chosen by Aaron Gertler, though other judges had the chance to nominate and veto comments before this post was published.
If the Prize has changed the way you read or write on the Forum, or you have an idea for how we could improve it, please leave a comment or contact me.