I'm a full-time content writer at CEA. I started Yale's student EA group, and I've also volunteered for CFAR and MIRI. I spend a few hours a month advising a small, un-Googleable private foundation that makes EA-adjacent donations.
Before joining CEA, I was a tutor, a freelance writer, a tech support agent, and a music journalist. I blog, and keep a public list of my donations, at aarongertler.net.
Titles aren't my forte. I'd keep it simple. "Lessons learned from six months of forecasting" or "What I learned after X hours of forecasting" (where "X" is an estimate of how much time you spent over six months).
I also find utilitarian thinking to be more useful/practical than "longtermist thinking". That said, I haven't seen much advocacy for longtermism as a guide to personal action, rather than as a guide to research that attempts to map out long-term consequences much more intensively.
Maybe an apt comparison would be "utilitarianism is to decisions I make in my daily life as longtermism is to the decisions I'd make if I were in an influential position with access to many person-years of planning". But this is me trying to guess what another author was thinking; you could consider writing to them directly, too.
(I assume you've heard/considered points of this type before; I'm writing them out here mostly for my own benefit, as a way of thinking through the question.)
I don't think Will or any other serious scholar believes that it is "workable". It reads to me like a theoretical assumption that defines a particular abstract philosophy.
"Looking at every possible action, calculating the expected outcome, and then choosing the best one" is also a laughable proposition in the real world, but the notion of "utilitarianism" still makes intuitive sense and can help us weigh how we make decisions (at least, some people think so). Likewise, the notion of "longtermism" can do the same, even if looking 1000 years into the future is impossible.
On "large returns to reason": My favorite general-purpose example of this is to talk about looking for a good charity, and then realizing how much better the really good charities were than others I had supported. I bring up real examples of where I donated before and after discovering EA, with a few rough numbers to show how much better I think I'm now doing on the metric I care about ("amount that people are helped").
I like this approach because it frames EA as something that can help a person make a common decision -- "which charity to support?" or "should I support charity X?" -- but without painting them as ignorant or preferring less good (in these conversations, I acknowledge that most people don't think much about decisions like this, and that not thinking much is reasonable given that they don't know how huge the differences in effectiveness can be).
While I don't think we'd want to consider posts for a Forum Prize if they were never posted on the Forum (that would expand our remit to an enormous amount of content), I've also been an avid reader of Zvi's work and would happily contribute $100 to a joint prize.
If you end up putting something together, you can reach me through a Forum message or via email.
When you post a chart like this, I recommend linking to the source. Thomas linked to a blog post below, but this was also posted on the Forum. The initial comment touches on your concern, but I don't think it fully explains CE's beliefs.
I was sad to read this, but I hope it also gives current community-builders a chance to learn from someone who has been part of the community for a long time.
What's an example of a time the community produced something you found to be intellectually exciting? Do any old posts or discussions come to mind?
What felt different about the community that used to make you feel a stronger incentive to actually do good? Are there ways you used to share your progress which aren't available, or which don't feel valued anymore?
Would you mind expanding on how you see this as relating to effective altruism? I don't see a clear connection.
Comment by aarongertler on [deleted post]
One of the main uses of posting to the Forum is not for readership but for feedback. And some of the worst posts may be exactly those that could benefit the most from feedback.
It seems to me like this post got a reasonable amount of feedback. The top two upvoted comments took issue with different elements of the post, and I think those commenters explained their points well.
In my experience, heavily-downvoted posts often get a lot of feedback, at least relative to the number of people who vote on them at all. I looked up a bunch of recent posts with negative karma, and they all got comments explaining why people downvoted. Even this post (with only three total votes) at least had someone asking a reasonable question about its conclusion.
Do any counterexamples come to mind? Posts or comments with a lot of downvotes and little-to-no feedback in the form of critical replies?
Often when I see posts heavily downvoted / other comments upvoted, it's because they seem to hit a nerve that a large part of the community deeply cares about, but the comment responses don't make this clear (it is confusing!). For example, there have been a bunch of emotionally charged threads on transparency vs. censorship.
Again, I'd be interested to see examples. I've written at least two posts that touch on issues of transparency and/or censorship, and they both got plenty of critical attention that (to me) made it clear what people were concerned about. Other posts on controversial topics also seem to fit this description (when they get more than a couple of votes overall).
If you see the Forum as more of a professional thing, I would hope we could eventually have some other alternative for giving people feedback on their written-up thoughts and early blog posts.
I also think that feedback on the Forum tends to be more helpful (on average) than you'd get on almost any other free online platform. My main criticism of the Forum's commentariat is that they don't write enough comments (I'd love to see people get more feedback), but I don't know what alternative platform would be better in that regard.
A question: Do you think the Forum would be a better site, overall, if it had only upvotes and comments, but no downvotes? This would reduce the chance of people getting discouraged by downvotes, but it would also lead to an atmosphere where posts were (by default) ranked by how much attention they received, rather than by how good people thought they were. That seems worse to me.
Comment by aarongertler on [deleted post]
I assume one of the references you mentioned is in this comment. Do you happen to remember where else it was brought up?
If I could see that comment, I'd want to leave a reply to push against it, since I think the crybaby article is terrible and not a mindset the Forum should encourage at all.
Comment by aarongertler on [deleted post]
There's some nasty subtext in the voting patterns where things are heavily upvoted and downvoted with rather little explanation.
If I see that a post is heavily downvoted, and that several comments that criticize the post are heavily upvoted (as is the case here), I assume that people who downvoted the post generally agree with those criticisms.
In fact, I actively like the "upvoting critical comments" form of explanation if someone thinks that existing comments basically cover what they wanted to say. Otherwise, you get a Twitter-esque pile-on where a dozen people all make very similar critical comments.
(Is there some way you wish people would behave that you think avoids this scenario and the "low-explanation downvote" scenario?)
I may publish a separate comment on this post, but I thought Michelle's critique was good and upvoted it. And I downvoted this post based on my main voting criterion: "Would I want to see more content like this on the EA Forum?"
The answer is: "No, I don't want to see more content like this on the EA Forum." I think it generated much more heat than light, and there were many better ways to make the same point. I might not have downvoted the post had it been written by someone who was clearly new to the movement, for the reasons you outlined, but Sanjay isn't new. On the contrary, he's written many good posts that I upvoted because I wanted to encourage more content along those lines.
Ideally, downvotes discourage some types of posts and comments that aren't very useful to the Forum's goals, and upvotes encourage more posts and comments that are useful. There's always a risk that someone whose post gets downvoted will be discouraged from writing other posts that could be better, but critical comments seem like they would create the same risk.
As a data point, I downvoted the original comment but removed the downvote after reading the edited version, which I think is phrased a lot better.
In cases where a comment is edited after other comments critique it, I wonder if we should gently encourage a norm of having the removed words be crossed out, rather than deleted entirely? It is of course an author's right to remove anything they no longer endorse, but it can be confusing to see comments refer to material that no longer exists.
A stark conclusion of "you're going to lose" seems like it's updating too much on a small number of examples.
For every story we hear about someone being cancelled, how many times has such an attempt been unsuccessful (no story) or even led to mutual reconciliation and understanding between the parties (no story)? How many times have niceness, community, and civilization won out over opposing forces?
(I once talked to a professor of mine at Yale who was accused by a student of sharing racist material. It was a misunderstanding. She resolved it with a single brief email to the student, who was glad to have been heard and had no further concerns. No story.)
I'm also not sure what your recommendation is here. Is it "refuse to communicate with people who espouse beliefs of type X"? Is it "create a centralized set of rules for how EA groups invite speakers"?
I agree that the distinction is often relevant. In this case, I wanted to leave the question very open-ended to encourage more answers. (I also expected people to provide details that would allow me to see how much of their engagement was "instrumental" vs. "direct".)
I think there are benefits to operating independently -- I’m reading a different set of books than others are, avoiding stressful community drama, reducing the risk of groupthink, and of course saving time.
This seems very reasonable! With one caveat: If you read any exceptionally good books, consider stopping by to tell the rest of us about them :-)
The most similar organization to High Impact Athletes that I'm aware of is Raising for Effective Giving, which recruited a lot of top-tier poker players to donate some of their winnings to EA-aligned charities.
While there are a few high-profile actors and musicians prominently linked to EA, no professional athletes come to mind for me. However, given the size and prominence of some EA-aligned charities, I'm sure they've had brushes with pro athletes in the same way other charities do. For example, Michael Phelps participated in one of the first big fundraising events for the Against Malaria Foundation.
Also: I'd be glad to post something in the EA Polls group I created on Facebook.
Because answers are linked to Facebook accounts, some people might hide their views, but at least it's a decent barometer of what people are willing to say in public. I predict that if we ask people how concerned they are about cancel culture, a majority of respondents will express at least some concern. But I don't know what wording you'd want around such a question.
Why did it take someone like me to make the concern public?
I don't think it did.
On this thread and others, many people expressed similar concerns, before and after you left your own comments. It's not difficult to find Facebook discussions about similar concerns in a bunch of different EA groups. The first Forum post I remember seeing about this (having been hired by CEA in late 2018, and an infrequent Forum viewer before that) was "The Importance of Truth-Oriented Discussions in EA".
While you have no official EA affiliations, others who share and express similar views do (Oliver Habryka and Ben Pace come to mind; both are paid by CEA for work they do related to the Forum). Of course, they might worry about being cancelled, but I don't know either way.
I've also seen people freely air similar opinions in internal CEA discussions without (apparently) being worried about what their co-workers would think. If they were people who actually used the Forum in their spare time, I suspect they'd feel comfortable commenting about their views, though I can't be sure.
I also have direct evidence in the form of EAs contacting me privately to say that they're worried about EA developing/joining CC, and telling me what they've seen to make them worried, and saying that they can't talk publicly about it.
I've gotten similar messages from people with a range of views. Some were concerned about CC, others about anti-SJ views. Most of them, whatever their views, claimed that people with views opposed to theirs dominated online discussion in a way that made it hard to publicly disagree.
My conclusion: people on both sides are afraid to discuss their views because taking any side exposes you to angry people on the other side...
...and because writing for an EA audience about any topic can be intimidating. I've had people ask me whether writing about climate change as a serious risk might damage their reputations within EA. Same goes for career choice. And for criticism of EA orgs. And other topics, even if they were completely nonpolitical and people were just worried about looking foolish. Will MacAskill had "literal anxiety dreams" when he wrote a post about longtermism.
As far as I can tell, comments around this issue on the Forum fall all over the spectrum and get upvoted in rough proportion to the fraction of people who make similar comments. I'm not sure whether similar dynamics hold on Facebook/Twitter/Discord, though.
I have seen incidents in the community that worried me. But I haven't seen a pattern of such incidents; they've been scattered over the past few years, and they all seem like poor decisions from individuals or orgs that didn't cause major damage to the community. But I could have missed things, or been wrong about consequences; please take this as N=1.
Having read through them, I'm still not convinced that today's conditions are worse than those of other eras. It is very easy to find horrible stories of bad epistemics now, but is that because there are more such stories per capita, or because more information is being shared per capita than ever before?
(I should say, before I continue, that many of these stories horrify me — for example, the Yale Halloween incident, which happened the year after I graduated. I'm fighting against my own inclination to assume that things are worse than ever.)
Take John McWhorter's article. Had a professor in the 1950s written a similar piece, what fraction of the academic population (which is, I assume, much larger today than it was then) might have sent messages to them about e.g. being forced to hide their views on one of that era's many taboo subjects? What would answers to the survey in the article have looked like?
Or take the "Postcard from Pre-Totalitarian America" you referenced. It's a chilling anecdote... but also seems wildly exaggerated in many places. Do those young academics actually all believe that America is the most evil country, or that the hijab is liberating? Is he certain that none of his students are cynically repeating mantras the same way he did? Do other professors from a similar background also think the U.S. is worse than the USSR was? Because this is one letter from one person, it's impossible to tell.
Of course, it could be that things really were better then, but the lack of data from that period bothers me, given the natural human inclination to assume that one's own time period is worse than prior time periods in various ways. (You can see this on Left Twitter all the time when today's economic conditions are weighed against those of earlier eras.)
But whether this is the worst time in general isn't as relevant as:
If you're not sure whether EA can avoid sharing this fate, shouldn't figuring that out be like your top priority right now as someone specializing in dealing with the EA culture and community, instead of one out of "50 or 60 bullet points"?
Taking this question literally, there are a huge number of fates I'm not sure EA can avoid sharing, because nothing is certain. Among these fates, "devolving into cancel culture" seems less prominent than other failure conditions that I have also not made my top priority.
This is because my top priority at work is to write and edit things on behalf of other people. I sometimes think about EA cultural/community issues, but mostly because doing so might help me improve the projects I work on, as those are my primary responsibility. This Forum post happened in my free time and isn't connected to my job, save for that my job led me to read that Twitter thread in the first place and has informed some of my beliefs.
(For what it's worth, if I had to choose a top issue that might lead EA to "fail", I'd cite "low or stagnant growth," which is something I think about a lot, inside and outside of work.)
There are people whose job descriptions include "looking for threats to EA and trying to plan against them." Some of them are working on problems like the ones that concern you. For example, many aspects of 80K's anonymous interview series gets into questions about diversity and groupthink (among other relevant topics).
Of course, the interviews are scattered across many subjects, and many potentially great projects in this area haven't been done. I'd be interested to see someone take on the "cancel culture" question in a more dedicated way, but I'd also like to see someone do this for movement growth, and that seems even more underworked to me.
I know some of the aforementioned people have read this discussion, and I may send it to others if I see additional movement in the "cancel culture" direction. (The EA Munich thing seems like one of a few isolated incidents, and I don't see a cancel-y trend in EA right now.)
If it's only a tiny nudge, why are we talking about it?
I'm talking about something I considered a tiny nudge because I thought that a lot of people, including people who are pretty influential in communities I care about, either reacted uncharitably or treated the issue as a much larger deal than it was.
You have ceded authority to Slate by obeying their smear-piece on Hanson. Hanson is one of our people, you left him hanging in favour of what Slate thought.
To whom is "you" meant to refer? I don't work on CEA's community health team and I've never been in contact with EA Munich about any of this.
I also personally disagreed with their decision and (as I noted in the post) thought the Slate piece was really bad. But my disagreeing with them doesn't mean I can't try to think through different elements of the situation and see it through the eyes of the people who had to deal with it.
Just to clarify, are you arguing that the Hanson thread shouldn't have been posted, because it would qualify as casual discussion of bigotry? You reference the thread at the beginning, but I'm unclear whether it's an example of what you discuss later.
This isn't at all what I was trying to say. Let me try to restate my point:
"If you want an accurate view of what people say will help them flourish in the community, you're more likely to achieve that by talking to a lot of people in the community."
Of course, what people claim will help them flourish may not actually help them flourish, but barring strong evidence to the contrary, it seems reasonable to assume some correlation. If members of a community differ on what they say will help them flourish, it seems reasonable to try setting norms that help as many community members as possible (though you might also adjust for factors like members' expected impact, as when 80,000 Hours chooses a small group of people to advise closely).
Whether EA Munich decides to host a Robin Hanson talk hardly qualifies as "the EA community deciding to exclude Robin Hanson and being more inclusive towards Slate journalists," save in the sense that what eight people in one EA group do is a tiny nudge in some direction for the community overall. In general, the EA community tends to treat journalists as a dangerous element, to be managed carefully if they are interacted with at all.
For example, the response to Scott Alexander's near-doxxing (which drew much more attention than the Hanson incident) was swift, decisive, and near-unified in favor of protecting speech and unorthodox views from those who threatened them. To me, that feels much more representative of the spirit of EA than the actions of, again, a single group (who were widely criticized afterward, and didn't get much public support).
Specifically, I would be surprised if there was much evidence of EAs/CEA being more cautious about publicly discussing 'woke' views out of fear of offending liberals or conservatives.
I hear frequently from people who express fear of discussing "woke" views on the Forum or in other EA discussion spaces. They (reasonably) point out that anti-woke views are much more popular, and that woke-adjacent comments are frequently heavily downvoted. All I have is a series of anecdotal statements from different people, but maybe that qualifies as "evidence"?
Had EA Munich hosted Hanson and then been attacked by people using language similar to that of the critics in Hanson's Twitter thread, I may well have written a post excoriating those people for being uncharitable. I would prefer if we maintained a strong norm of not creating personal risks for people who have to handle difficult questions about speech norms (though I acknowledge that views on which questions are "difficult" will vary, since different people find different things obviously acceptable/unacceptable).
I find it interesting that you thought "diversity" is a good shorthand for "social justice", whereas other EAs naturally interpreted it as "intellectual diversity" or at least thought there's significant ambiguity in that direction. Seems to say a lot about the current moment in EA...
I don't think it says much about the current moment in EA. It says a few things about me:
That I generated the initial draft for this post in the middle of the night with no intention of publishing
That I decided to post it in a knowingly imperfect state rather than fiddling around with the language at the risk of never publishing, or publishing well after anyone stopped caring (hence the epistemic status)
That I spend too much time on Twitter, which has more discussion of demographic diversity than other kinds. Much of the English-speaking world also seems to be this way:
For example if there is a slippery slope towards full-scale cancel culture, then your only real choices are to slide to the bottom or avoid taking the first step onto the slope.
Is there such a slope? It seems to me as though cultures and institutions can swing back and forth on this point; Donald Trump's electoral success is a notable example. Throughout American history, different views have been cancel-worthy; is the Overton Window really narrower now than it was in the 1950s? (I'd be happy to read any arguments for this being a uniquely bad time; I don't think it's impossible that a slippery slope does exist, or that this is as bad as cancel culture has been in the modern era.)
It scares me though that someone responsible for a large and prominent part of the EA community (i.e., the EA Forum) can talk about "getting the right balance" without even mentioning the obvious possibility of a slippery slope.
If you have any concerns about specific moderation decisions or other elements of the way the Forum is managed, please let me know! I'd like to think that we've hosted a variety of threads on related topics while managing to maintain a better combination of civility and free speech than almost any other online space, but I'd be surprised if there weren't ways for us to improve.
As for not mentioning the possibility: had I written for a few more hours, there might have been 50 or 60 bullet points in this piece, and I might have bounced between perspectives a dozen more times, with the phrase "slippery slope" appearing somewhere. As I said above, I chose a relatively arbitrary time to stop, share what I had with others, and then publish.
I'm open to the possibility that a slippery slope is almost universal when institutions and communities tackle these issues, but I also think that attention tends to be drawn to anecdotes that feed the "slippery slope" narrative, so I remain uncertain.
What are some specific things that make you believe this, outside the single decision by EA Munich referenced in this post? Regarding the end of my reply to Wei Dai, I'd be interested to see your list of "elements of concern" on this point.
Of the scenarios you outline, (2) seems like a much more likely pattern than (1), but based on my knowledge of various leaders in EA and what they care about, I think it's very unlikely that "full-scale cancel culture" (I'll use "CC" from here) evolves within EA.
Some elements of my doubt:
Much of the EA population started out being involved in online rationalist culture, and those norms continue to hold strong influence within the community.
EA has at least some history of not taking opportunities to adopt popular opinions for the sake of growth:
Rather than leaning into political advocacy or media-friendly global development work, the movement has gone deeper into longtermism over the years.
80,000 Hours has mostly passed on opportunities to create career advice that would be more applicable to larger numbers of people.
Obviously, none of these are perfect analogies, but I think there's a noteworthy pattern here.
The most prominent EA leaders whose opinions I have any personal knowledge of tend to be quite anti-CC.
EA has a strong British influence (rather than being wholly rooted in the United States) and solid bases in other cultures; this makes us a bit less vulnerable to shifts in one nation's culture. Of course, the entire Western world is moving in a "cancel culture" direction to some degree, so this isn't complete protection, but it still seems like a protective factor.
I've also been impressed by recent EA work I've seen come out of Brazil, Singapore, and China, which seem much less likely to be swept by parallel movements than Germany or Britain.
I maybe should have said something like "concerns related to social justice" when I said "diversity." I wound up picking the shorter word, but at the price of ambiguity.
You'd expect having a wider range of speakers to increase intellectual diversity — but only as long as hosting Speaker A doesn't lead Speakers B and C to avoid talking to you, or people from backgrounds D and E to avoid joining your community. The people I referred to in the last section worry that some people would feel alienated and unwelcome given Robin's presence as a speaker; they raised concerns about both his writing and his personal behavior, though the latter points were vague enough that I wound up not including them in the post.
A simple example of the kind of thing I'm thinking of (which I'm aware is too simplistic to represent reality in full, but does draw from the experiences of people I've met):
A German survivor of sexual abuse is interested in EA Munich's events. They see a talk with Robin Hanson and Google him to see whether they want to attend. They stumble across his work on "gentle silent rape" and find it viscerally unpleasant. They've seen other discussion spaces where ideas like Hanson's were brought up and found them really unpleasant to spend time in. They leave the EA Munich Facebook group and decide not to engage with the EA community any more.
There are many reasons someone might want to engage more or less with EA based on the types of views discussed within the community. Different types of deplatforming probably lead to different types of diversity being more or less present, and send different kinds of signals about effective altruism into the wider world.
For example, I've heard one person (who was generally anti-deplatforming) argue against ever promoting events that blend EA and religious thought, because (as best I understood it) they saw religious views as antithetical to things they thought were important about EA. This brings up questions like:
Will the promotion of religion-aligned EA events increase or decrease the positive impact of EA, on net?
There are lots of trade-offs that make this hard to figure out.
Is it okay for an individual EA group to decline to host a speaker if they discover that the speaker is an evangelical Christian who wrote some grisly thought experiments about how Hell might work, if it were real? Even if the event was on an unrelated topic?
This seems okay to me. Again, there are trade-offs, but I leave it to the group to navigate them. I might advise them one way or another if they asked me, but whatever their decision was, I'd assume they did what made the most sense to them, based partly on private information I couldn't access.
As a movement, EA aims to have a lot of influence across a variety of fields, institutions, geographic regions, etc. This will probably work out better if we have a movement that is diverse in many ways. Entertaining many different ideas probably lets us make more intellectual progress. Being welcoming to people from many backgrounds gives us access to a wider range of ideas, while also making it easier for us to recruit more people*, spread our ideas to more places, etc. On the other hand, if we try to be welcoming by restricting discussion, we might lead the new people we reach to share their ideas less freely, slowing our intellectual progress. Getting the right balance seems difficult.
I could write much more along these themes, but I'll end here, because I already feel like I'm starting to lose coherence.
*And of course, recruiting more people overall means you get an even wider range of ideas. Even if there aren't ideas that only people from group X will have, every individual is a new mind with new thoughts.
I find that EA concerns often transcend politics, and so I would expect two EAs with very different political views to be able to have more productive discussions on controversial topics than two non-EAs.
I think this is true, but even if EA discussion might be more productive, I still think trade-offs exist in this domain. Given that the dominant culture in many intellectual spaces holds that public discussion of certain views is likely to cause harm to people, EA groups risk appearing very unwelcoming to people in those spaces if they support discussion of such views.
It may be worthwhile to have these discussions anyway, given all the benefits that come with more open discourse, but the signal will be sent all the same.
Yes, you could rephrase it that way. I've spoken directly to the people who think we should be more cautious/attentive, but only heard secondhand from them about the people who think this is a bad idea (and have talked to lots of community members about these topics -- I've met people with views all over the spectrum who haven't had as many such conversations).
I was referring mostly to the comments that popped up in the various Twitter threads surrounding the decision, one of which I linked at the top of the piece. A few quotes along these lines:
"Effective altruism has been shown to be little more than the same old successor-ideology wearing rationalism as a skin-suit."
"They believe they are in a war and the people like Hanson are the enemy."
"If EA starts worrying about PR and being inoffensive, what even is the point anymore? Make EA about EA, not about signaling."
"There always was something 'off' about so-called effective altruism."
Some of these types of comments probably come from people who never liked or cared about EA much and are just happy to have something to criticize. But I sometimes see similar remarks from people who are more invested in EA and seem to think it's become much more censorious over time. While there is some truth to that (as I mention in the piece), I think the overall picture is much more complicated than these kinds of claims make it out to be.
Regarding trade-offs, that would be a much longer post. You could check the "Diversity and Inclusion" tag, which includes some Forum posts along similar themes. Kelsey Piper's writing on "competing access needs" is also relevant.
Thanks for the feedback. I think the word "missteps" is too presumptive for the reasons you outlined, and I've changed it to "decisions." I also added a caveat noting that the controversies he's provoked may lead to his ideas becoming better-known generally (though it's really hard to determine the overall effect).
I'd help to fund that study (at least, the anti-donation study) if someone put it together. Have you or the anonymous commenter proposed it to Schwitzgebel? For all I know, he might react very enthusiastically.
(If no one's asked him yet, would you mind if I passed this comment along?)
I reposted this because I thought it was interesting, but I don't agree with everything Schwitzgebel says. I certainly don't do all of the good things that should be "easy" for me to do, morally or otherwise. (I've gained a lot of weight in quarantine, for one.)
If I had to say something I do believe, and which Schwitzgebel's post reminds me of, I'd go for "some kinds of behavior are more amenable to change than we might think." That doesn't make moral behavior change easy, but it does seem to exist in a different category than calculus or rock climbing.
You flake, you run late, you disappoint someone, you don't quite carry your load in some task, because it isn't convenient today.
There are many ways someone could try to get better at not doing these things, and many of those ways would probably work (unlike ways one might train for El Capitan, if one is an aging academic).
This distinction does seem relevant to me. And I'd guess that many people on this forum have changed their moral behavior for the better at multiple points in their lives; some became vegan, some began to donate more, some just became kinder and more charitable people.
What is the difference between people who did these things and people who haven't yet? Some of this may come down to circumstances outside of someone's control (e.g. not becoming vegan for health reasons, not donating because it really isn't affordable), but some of it seems to come down to "choosing not to be" in the Schwitzgebelian sense.
I don't think this piece reveals anything too surprising, and there's no single reaction I'd expect every reader to have. But I've found myself being more patient (choosing to be more patient?) since I read it, and I thought there was some truth in the piece.
Thanks for sharing this! I'm always excited by opportunities for EA-aligned researchers to get feedback from non-EAs in their fields, whether through conferences, journal submissions, or some other means. I sometimes worry that it's hard for us to judge the quality of our own work as a community, and these kinds of conferences can help.
Note from a moderator: The first paragraph of this response was within the Forum's norms, but the second paragraph was dicey. If you respond to posts you see as rude by echoing their language and tone, the discourse has nowhere to go but down. It's better to try improving the tenor of the conversation.
Additionally, the OP could be seen as rude, but it doesn't seem to be targeting anyone in particular. However, to me, this reply seems to be rude toward a particular person (Dony).
This does seem like a very good initiative, though I wonder whether grant funding might make more sense than small prizes. Jason Crawford got $75,000 from Open Phil (and also an EA Funds grant), though I don't know how many bloggers applied to EA Funds or were considered by Open Phil and then turned down.
With the Prize's funding, we could support roughly $22,000 in grants per year, which could provide solid incentives to... a dozen bloggers, maybe? Something to think on.
Are there any blogs/newsletters/etc. that you'd fund if you had the money to run a Tyler Cowen-like initiative for EA writing?
That post seems totally fine to me. I don't see these methods as implicitly benefiting certain candidates much more than approval voting does. And it's hard to talk about political reform without thinking about which groups might benefit. I'd just not want to frontpage people who argue that reform X is good only because it will elect the awesome Y party and kick out the evil Zs.
Many EA orgs probably do this kind of analysis to determine which audiences they aim to reach with their own work; CEA and 80K are two examples. While I'm not aware of any published work that goes into ITN analysis on particular audiences, you can probably infer what organizations believe about who to recruit by observing their behavior.
(That said, it would be nice if more explicit analyses were available!)