EA Organization Updates: October 2020 2020-11-22T20:37:13.225Z
New tags, by popular demand: "Motivation" and "Funding Request" 2020-11-17T07:17:04.307Z
EA Forum Prize: Winners for September 2020 2020-11-05T06:23:55.290Z
EA Forum Prize: Winners for August 2020 2020-11-05T06:22:16.737Z
Progress Open Thread: November 2020 2020-11-02T11:06:47.609Z
Open and Welcome Thread: November 2020 2020-11-02T11:04:41.907Z
EA Organization Updates: September 2020 2020-10-21T16:19:00.064Z
Progress Open Thread: October // Student Summit 2020 2020-10-21T13:10:08.403Z
Forum update: New features (October 2020) 2020-10-21T08:49:16.198Z
EA Forum Prize: Winners for July 2020 2020-10-08T09:16:05.801Z
EA Forum Prize: Winners for June 2020 2020-10-08T09:12:11.104Z
If you like a post, tell the author! 2020-10-06T17:26:59.369Z
Open and Welcome Thread: October 2020 2020-10-02T07:00:00.000Z
EA Organization Updates: August 2020 2020-09-20T18:18:08.452Z
The two-minute EA Forum feedback survey 2020-09-18T11:27:08.241Z
New report on how much computational power it takes to match the human brain (Open Philanthropy) 2020-09-15T01:06:19.994Z
Sign up for the Forum's email digest 2020-09-14T09:23:30.438Z
How have you become more (or less) engaged with EA in the last year? 2020-09-08T18:28:08.264Z
Open and Welcome Thread: September 2020 2020-09-01T16:38:54.714Z
Forum update: New features (August 2020) 2020-08-28T16:26:34.914Z
Some thoughts on the EA Munich // Robin Hanson incident 2020-08-28T11:27:55.716Z
EA Organization Updates: July 2020 2020-08-24T09:34:15.798Z
It's Not Hard to Be Morally Excellent; You Just Choose Not To Be 2020-08-24T06:23:49.072Z
Politics on the EA Forum 2020-08-07T07:37:29.321Z
Misconceptions about effective altruism (80,000 Hours) 2020-08-07T01:26:05.364Z
Toby Ord: Selected quotations on existential risk 2020-08-06T17:41:01.088Z
Open and Welcome Thread: August 2020 2020-08-03T08:06:50.759Z
EA Forum update: New editor! (And more) 2020-07-31T11:06:40.587Z
The one-minute EA Forum feedback survey 2020-07-30T09:37:20.448Z
Utility Cascades 2020-07-29T07:16:03.528Z
Why I Give (Zoe Savitsky) 2020-07-27T11:29:09.574Z
Utilitarianism with and without expected utility 2020-07-24T06:40:37.731Z
EA Forum Prize: Winners for May 2020 2020-07-21T07:53:22.654Z
A list of good heuristics that the case for AI X-risk fails 2020-07-16T09:56:21.188Z
Stripe's first negative emissions purchases 2020-07-16T01:13:53.049Z
EA Organization Updates: June 2020 2020-07-16T00:29:27.031Z
Study results: The most convincing argument for effective donations 2020-06-28T22:45:28.216Z
How should we run the EA Forum Prize? 2020-06-23T11:15:22.974Z
80K Podcast: Stuart Russell 2020-06-23T01:46:24.584Z
EA Organization Updates: May 2020 2020-06-22T10:44:05.660Z
EA Forum feature suggestion thread 2020-06-16T16:58:58.569Z
Modeling the Human Trajectory (Open Philanthropy) 2020-06-16T09:27:46.241Z
EA Forum Prize: Winners for April 2020 2020-06-08T09:07:31.471Z
Forum update: Tags are live! Go use them! 2020-06-01T16:26:12.900Z
EA Handbook, Third Edition: We want to hear your feedback! 2020-05-28T04:31:27.031Z
Helping those who need it 2020-05-27T11:08:48.393Z
Why effective altruism? 2020-05-27T10:40:20.305Z
Improving the world 2020-05-27T10:39:47.207Z
Evidence and reasoning 2020-05-27T10:37:59.045Z
Excitement, hope, and fulfillment 2020-05-27T10:29:02.817Z


Comment by aarongertler on Where are you donating in 2020 and why? · 2020-11-25T22:28:55.205Z · EA · GW

This is a totally reasonable answer. 

"Where are you donating in 2020?" = Nowhere

"And why?" = Because you're not sure about some relevant issues and are instead saving money to donate later

If only people who were donating answered this question, readers would get a skewed view of the community // how many people are saving to give later. I'm glad that isn't happening!

Comment by aarongertler on Where are you donating in 2020 and why? · 2020-11-24T22:16:39.053Z · EA · GW

When I discuss EA in public, I try to focus more on general principles ("some charities are better than others", "it's important to think about all the ways you could help") than specifically advocating for global dev work, though the latter does happen too.

And if someone sends me a private question about giving (which happens a lot now that I've made a big deal about it), I give similarly broad advice, and will often refer people to e.g. 80K's Key Ideas page.

However, I've found over the course of the year that people seem not to care as much about the specific work of the charities I support as about the idea of doing something altruistic at all. In 2021, my public advocacy is likely to lean more meta/longtermist, though I'm not sure about the specifics.

Comment by aarongertler on Where are you donating in 2020 and why? · 2020-11-24T22:05:51.505Z · EA · GW

As for replying more directly to the arguments you linked: my views combine a bit of Khorton, a bit of both Aidan responses...

...and also, a lot of credence in most of those arguments. That's why I work a meta job, spend some of my free time on meta projects, and advise people toward meta giving when I can -- including the foundation I work with, which recently made its first meta grant after decades of exclusively near-term giving.

(By the way, this was a good question! I didn't even hint at this stuff in my original answer, and I'm glad for the chance to clarify my beliefs.)

Comment by aarongertler on Where are you donating in 2020 and why? · 2020-11-24T22:00:30.155Z · EA · GW

I respect cluelessness arguments enough that I've removed "strongly" from "strongly believe" in my response; I was just in an enthusiastic mood.

My giving to charities focused on short-term impact (and GiveWell in particular) is motivated by a few things:

  1. I believe that my work currently generates much more value for CEA than the amount I donate to other charities, which means that almost all of my impact is likely of a meta/longtermist variety. But I am morally uncertain, and place enough credence on moral theories emphasizing short-term value that I want at least a fraction of my work to impact people who are alive today.
    1. Around the time I joined CEA, I had been rapidly becoming more focused on the long term; had I taken some other non-EA job, I think that all or almost all of my donations would be going to meta causes as a way of getting long-term leverage. Instead, I get to hedge a bit with my donations.
  2. Personal/emotional factors. I sleep a bit better at night knowing that I've used my unusually lucky circumstances to provide something good for people who have been unusually unlucky. (In theory, I should also sleep worse because I've deprived longtermist projects of funding, but that isn't how my brain works for some reason.)
  3. Support for an especially well-run organization. I think that the quality of work done at GiveWell (from their charity reports and shared spreadsheets to their Mistakes page) puts them in a class of their own within EA, and I think that having orgs like this is a good thing for EA as a whole; to paraphrase Thomas Callaghan, "quality has a quality all its own." To the extent that GiveWell is a flagship org within the broader EA movement, one which will be the first introduction to EA for many people, I think it's good on a meta level for them to have more resources even if the marginal impact of those resources is lower than it might be for newer/smaller orgs.
    1. I should clarify that my GiveWell donation will be going towards their operations, not the Maximum Impact Fund. If this helps them e.g. advertise more widely, I think that's a solid meta investment.
    2. This view is informed largely by my own experience; I would have taken much longer to enter EA (if I'd entered at all) had GiveWell not been around to show me that "yes, this movement can produce high-quality research in a way you can actually verify, and it's obvious that our charities crush much of the competition dollar-for-dollar."

I should also clarify that, by "excellent", I don't so much mean "extremely high impact" as "high standard of quality in how the organization is run". 

Of course, that makes me Charity Navigator, so perhaps I should choose a different word.

Anyway, I've more than filled my "short-term bucket" for this year and next; my future winnings will probably go to smaller projects (if I have time to evaluate them) or other potential "flagship" orgs like 80,000 Hours -- which now seems to be the most common entry point into EA for people, serving the role that GiveWell did back when I got into EA. But this will, as before, depend on how well I think I can pitch them to a non-EA audience.

Comment by aarongertler on Progress Open Thread: November 2020 · 2020-11-24T21:20:02.872Z · EA · GW


Comment by aarongertler on Progress Open Thread: November 2020 · 2020-11-23T18:04:21.405Z · EA · GW

Myanmar has eliminated trachoma

In 2005, trachoma was responsible for 4% of all cases of blindness in Myanmar. By 2018, its prevalence was down to a mere 0.008%, and trachoma is no longer considered a public health problem there.

Meanwhile, the Maldives and Sri Lanka have both eliminated rubella. This appears in the same article as the Myanmar news, because... 

...I guess eliminating diseases nationwide is so commonplace that we don't even need separate updates for each country/disease? That seems like a good thing.

Comment by aarongertler on Where are you donating in 2020 and why? · 2020-11-23T15:28:39.330Z · EA · GW

Thanks for posting this, Michael!

I'll be giving most of my donations ($25,000+) to GiveWell this year, with a smattering going to other global health charities (~$5000 split between AMF, GiveDirectly, Development Media International, and a few others). This amounts to roughly 22% of my income.

This plan isn't meant to be optimized for direct impact. Because I only give a modest amount, I expect most of my impact to come from influencing others, so I try to optimize for "giving in a way that I'm excited to share". 

Specifically, almost all of this year's giving comes from my success in tournament-level Magic: the Gathering, as well as revenues from streaming and exhibition events that followed from those tournaments. I publicized my choice to donate half of my tournament/streaming revenue, and I assumed that this would be most motivating/inspiring if I gave to charities with clear paths to impact which my viewers could easily understand. 

(Of course, I also believe that these charities are excellent, and I gave them heavy support before I ever became a streamer.)

So far, I've had modest success in building leverage through public donations. Someone claims to have matched my GiveWell donation (I haven't verified this myself, but James Snowden did thank them, which is something), and one EAGxAsia-Pacific attendee told me they discovered EA at least in part because they saw me discuss it on Magic streams.

I also give $100/month to the EA Infrastructure Fund, partly because I think meta work is the highest-leverage way to donate and partly so I can have the direct experience of being an EA Funds donor. Because I work for CEA, I'd like to "eat my own dog food" (use the products I work on) in a few different ways.

Comment by aarongertler on nickmatt's Shortform · 2020-11-20T21:30:37.314Z · EA · GW

I just misremembered the official name of the trilogy -- Remembrance of Earth's Past is correct.

Comment by aarongertler on The Case for Space: A Longtermist Alternative to Existential Threat Reduction · 2020-11-18T21:46:01.359Z · EA · GW

Just because something is difficult doesn't mean it isn't worth trying to do, or at least trying to learn more about so you have some sense of what to do. Calling something "unknowable" -- when the penalty for not knowing is "civilization might end with unknown probability" -- is a claim that should be challenged vociferously, because if it turns out to be wrong in any aspect, that's very important for us to know.

I cannot imagine future humans being so stupid as to have AI connected to the internet and a robot army able to be hijacked by said AI at the same time.

I'd recommend reading more about how people worried about AI conceive of the risk; I've heard zero people in all of EA say that this scenario is what worries them. There are many places you could start: Stuart Russell's "Human Compatible" is a good book, but there's also the free Wait But Why series on superintelligence (plus Luke Muehlhauser's blog post correcting some errors in that series).

There are many good reasons to think that AI risk may be fairly low (this is an ongoing debate in EA), but before you say one side is wrong, you have to understand what they really believe.

Comment by aarongertler on What are some quick, easy, repeatable ways to do good? · 2020-11-18T20:57:26.217Z · EA · GW

Going for a walk in an area with some litter, carrying a small trash bag with you and wearing gloves. Cleaning up litter is an extremely visual/sensory way of making an environment better, and while the payoff isn't too high (other people have nicer walks), the effort + sense of accomplishment make it a big mood-lifter for me.

I also second Denise's thoughts on calling elderly relatives, and would extend that to "getting in touch with someone who seems to need a friend" -- whether that's a parent, an out-of-touch sibling, or an old friend you haven't seen in a while and suspect might want some company.

Comment by aarongertler on nickmatt's Shortform · 2020-11-18T20:54:10.725Z · EA · GW

This is an interesting idea; thanks for writing it up!

I consider myself a "longtermist" and have read a lot of science fiction, but most examples I've seen in sci-fi of people affecting the future don't feel convincing to me. To comment on some of your examples:

  • "Psychohistory" (Foundation) is fine as "the premise you have to accept to read the book", but is very silly as an actual concept.
  • The Dark Forest trilogy (Three-Body Problem, etc.) has a few sections where we skip ahead into the future and see how things have developed, but the explanations for that development typically take up roughly a page and use a style of historical narrative that doesn't really match what longtermist work feels like (rather than a few people or groups achieving influence, IIRC the explanation tends to be "humans became more like X, ergo thing Y happened"). 

I feel like I've read a couple of books that really capture the process of changing the world over centuries, and the kinds of work that go into those changes, but no titles are coming to mind. 

What I've found more helpful: Books that capture what it's like to be at the mercy of some overwhelming, civilization-shattering force. These include On the Beach and Alas, Babylon (nuclear war) and books with realistically superintelligent or super-advanced aliens (the Dark Forest trilogy does pretty well here). I think people often struggle to understand how it would feel to contend with something fundamentally smarter than humans, and sci-fi gets that scary feeling across quite well.

Comment by aarongertler on What has EA Brazil been up to? · 2020-11-18T20:18:34.231Z · EA · GW

Regarding your ideas at the end:

  • A newsletter seems like a pretty good balance of work and impact. It's free to run as an experiment, and you can easily stop if people don't engage much. But I really like projects that regularly ping people with EA content; people will often start engaging quite suddenly (e.g. after they get a new job that leaves them with more money to donate), so it's nice to keep them aware until that happens.
  • A podcast seems similar to a newsletter; you get good data back, and it's easy to try a few experimental episodes without spending much (at least if the person running it already has the necessary equipment).
  • Streaming seems tougher to pull off; it's harder to be entertaining and informative in a live setting, and I don't think "idea" streaming has ever been very popular. 
  • Hiring a PR agency seems very risky; it's relatively expensive and hard to track impact for.
  • Networking with researchers: I don't have anything to add to Gavin's comment.

Comment by aarongertler on Archer's Shortform · 2020-11-18T09:55:02.405Z · EA · GW

Three cheers for long Shortform posts! Totally fine to spell out a half-baked idea here, at whatever length.

Anyway, one of the first questions I always want to ask when I hear a business idea: What's an example of this type of business succeeding?

Clearly, there are successful people writing newsletters on Substack. But did any of them:

a) Start with a fairly small audience, many/most of whom were already giving them money without expecting something in return?

b) Try to crowdsource content from many sources, instead of having a single author be the driving force/personality behind the newsletter?

There may be newsletters in this category, but I expect that they are quite rare.

Additionally, most EA orgs actively want their ideas to be free, because they want as many people as possible to hear them, and this is more valuable to them than whatever money they would get from a much smaller paid audience. For example, even if 80K could convert their 100,000+ newsletter subscribers into an audience of 10,000 people paying $5/month (no small task), I don't know if they would want to, given how many fewer people would get job leads from them under that scenario.


As for newsletters created by individual entrepreneurs: this is a reasonable business idea like many others. You can find lots of online guides to building an audience for your copywriting, coaches to help you get started, clubs where people share feedback on each other's writing, and so on. But like most reasonable businesses, this one is fairly competitive and tough to succeed in (no such thing as a free lunch!). It will be a reasonable thing to do for a few EAs, maybe, but doesn't stand out to me as more promising than other types of startups. 

This doesn't make it a bad idea -- just one of many, many things that people should consider if they want to build a business.

Comment by aarongertler on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T09:42:51.138Z · EA · GW

Here's a collection of quotes tagged by Goodreads users with "effective altruism", which I assume they often tagged after finding them inspirational.

Comment by aarongertler on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T09:41:45.349Z · EA · GW

George Bernard Shaw (on moral reflection/holding unusual values): 

"The fact that we can become accustomed to anything [...] makes it necessary to examine everything we are accustomed to."

Nick Bostrom (on seeing the better world we could have): 

“Utopia is the hope that the scattered fragments of good that we come across from time to time in our lives can be put together, one day, to reveal the shape of a new kind of life.”

Bryan Caplan (on epistemic modesty): 

"Look in the mirror. You don't know the best way to deal with Russia."

Arcade Fire, "Month of May":

I said some things are pure, and some things are right

But the kids are still standing with their arms folded tight


Well, I know it's heavy, I know it ain't light

But how you gonna lift it with your arms folded tight?

J.R.R. Tolkien:

“I wish it need not have happened in my time," said Frodo.

"So do I," said Gandalf, "and so do all who live to see such times. But that is not for them to decide. All we have to decide is what to do with the time that is given us.”

Edna St. Vincent Millay, "Dirge Without Music" (on defeating death):

Down, down, down into the darkness of the grave

Gently they go, the beautiful, the tender, the kind;

Quietly they go, the intelligent, the witty, the brave.

I know. But I do not approve. And I am not resigned.

Comment by aarongertler on It's OK to feed stray cats · 2020-11-16T09:30:06.921Z · EA · GW

I can vouch for the value of this approach. My apartment complex has an informal back path (paved by generations of feet) that people use to get to the nearby university. It also gets some occasional use from the local unhoused population. Over time, lots of litter had built up in a certain patch (you could hardly see the ground).

I noticed that I felt a sense of annoyance every time I walked through the tiny valley of trash (not annoyance at humans, but at the interruption in what was otherwise a nice miniature nature walk). So one day I bought some surgical gloves and trash bags, put on a podcast, and cleaned the path. It took less than two hours to remove 98% of the litter (the rest being things like bottle caps that would have been laborious to track down and collect). 

The result: I got a clean path, a few hundred other people got a clean path, and I got to think of myself as "the kind of person who cleans up the commons," which was more personally satisfying than any donation I made that year (because I am irrational).

Comment by aarongertler on What are some EA-aligned statements that almost everyone agrees with? · 2020-11-11T08:49:40.701Z · EA · GW

Some of the items about it being generally good/productive to help people felt redundant -- not sure whether that's an issue for your research.

  • I think everyone deserves equal access to opportunity
  • Fairness is very important to me
  • I think everyone should have access to basic necessities, like food and water

This cluster of items could be seen as somewhat political (especially "equal access to opportunity"). I think they may not be as universal as you'd think (though when presented in a non-political situation, they might not bother even people who nominally disagree with them in political contexts).

I'd consider adding items about more specific areas, like animals and the long-term future. For example:

  • It's wrong to torture animals for our own pleasure
  • We should try to make the world a good place to live for our descendants
  • People in future generations shouldn't be punished for our mistakes

Comment by aarongertler on [deleted post] 2020-11-10T23:05:44.901Z

While this seems like good advice, a combination of "not much opportunity for individual impact" and "about politics" leads me to keep it in Personal Blog.

But I do appreciate it being written, would be glad to see it shared widely, and so on!

Comment by aarongertler on Open and Welcome Thread: November 2020 · 2020-11-10T22:20:09.910Z · EA · GW

Welcome! I'm the lead moderator of the Forum and work on content for the Centre for Effective Altruism. I'm pretty familiar with EA literature in a lot of areas; if there's anything specific you'd like to read more about, feel free to ask.

Comment by aarongertler on Open and Welcome Thread: November 2020 · 2020-11-10T22:19:16.079Z · EA · GW

Hello, Jochem!

Don't be alarmed by resumes; the more people of all stripes participate in conversations here, the more we'll all learn.

Also, I'm a copyeditor and semi-professional Twitch streamer by trade; if I can participate on the site, research engineers should do just fine :-)

(Though of course, just hanging out and reading is also a totally valid way to use the Forum!)

Comment by aarongertler on Open and Welcome Thread: November 2020 · 2020-11-09T11:37:07.873Z · EA · GW

Hello, Ben! Welcome to the Forum, and thanks for working on things that (presumably) have a chance of helping us not die in the next pandemic.

Comment by aarongertler on Consider paying me (or another entrepreneur) to create services for effective altruism · 2020-11-09T09:56:00.369Z · EA · GW

The type of post you described in your second bullet point would also likely be marked as "Personal Blog" if it was mostly describing past actions of the incubator and soliciting grants. If the post was mostly about various types of services needed by the community, and general thoughts on how to fund such services, it would be marked as "Frontpage". 

In some cases, it might be hard to tell which topics take up "most" of a post, but this post leaned much more toward the "Personal Blog" side.


If you want to invite wider discussion about EA services/entrepreneurial funding, you could try splitting this post into two posts; leave the personal content here, and move the general content (ideally with more detail/specific examples of services the community needs, etc.) into a new post. 

I'd be happy to move the new post to Frontpage, and you'd be welcome to link to yours and Rupert's posts from the new post. That lets you advertise your services without bending Frontpage standards.

Ian David Moss did exactly this recently. He wrote a post on EA political activism against Trump's re-election, then split it into one post about EA political engagement in general and a second post about specific election recommendations (the latter isn't allowed on Frontpage due to our rules on political posts). He then linked the posts together. I thought the setup worked well.

Comment by aarongertler on Consider paying me (or another entrepreneur) to create services for effective altruism · 2020-11-09T09:46:55.208Z · EA · GW

I wasn't trying to say either of those things. What I meant was:

  • Job posts appeal to a specific audience (people looking for jobs)
  • Posts by potential grantees appeal to a specific audience (people looking to make grant-sized donations)

I believe that the first audience is larger than the second audience, which means that presenting the former type of post on the frontpage is somewhat more reasonable. However, it's possible that neither type of post should ever be frontpage/that we should amend our categorization system in various other ways.

Comment by aarongertler on Consider paying me (or another entrepreneur) to create services for effective altruism · 2020-11-09T09:44:46.842Z · EA · GW

Two paragraphs at the beginning briefly mention the general idea that donors should consider funding entrepreneurs, but that subject is left behind for the rest of the post (until the very last line, I suppose). The post didn't feel to me like it was really inviting much discussion of entrepreneurship in general.

I won't set a hard number for what "percentage" of a post's content has to be something other than a personal funding request for it to be on the front page (that would be impossible to measure), but it felt to me like the general content was a brief addendum to the detailed personal request, rather than vice-versa. In my view (from the inherently subjective position of "Forum moderator"), that balance equates to a post being a better fit for the "Personal Blog" category. 

It's absolutely within your rights to disagree, of course! The boundaries of these categories are fuzzy.

Comment by aarongertler on Consider paying me to do AI safety research work · 2020-11-09T09:35:14.998Z · EA · GW

The post is perfectly appropriate to publish on the Forum! It's just not something that quite fits what we're looking to have at the top of the homepage. I still hope the Forum can be a helpful place to host content like this, where people can find it if they look for it in "All Posts" and you can share the link in other places.

Comment by aarongertler on Why I think the EA Community should write more fiction · 2020-11-07T01:22:22.694Z · EA · GW

Finding and sharing existing fiction/passages that get at EA ideas seems higher-leverage than trying to produce new fiction (for the most part). People write a huge amount of fiction from many different moral perspectives, and some of it is bound to be EA-aligned in some ways.

There's also the rational fiction community, which is highly influenced by EA and shows how EA ideas could work in a variety of settings (e.g. Superman as an X-risk, improving institutional decision-making in the Pokemon universe).

Comment by aarongertler on What would a taskforce designed to minimise biorisk look like? · 2020-11-07T01:17:00.892Z · EA · GW

I assume there are many biorisk "taskforces" that already exist (between the UN and various national governments), or at least organizations that perform some of the functions you mention.

For example, the UN's Office for Disarmament Affairs has bioweapons as an "area of focus". I don't know what actual work they do, but they probably employ people who know a lot about bioweapons and at least monitor global bioweapons development to some extent. (I know exactly nothing about this area, though, so I could be wrong.)

Comment by aarongertler on davidwalker's Shortform · 2020-11-07T00:54:32.817Z · EA · GW

I feel like the main role of a bulldog is to fend off the fiery, polemical enemies of a movement. Atheism and veganism (and even AI safety, kind of) have clear opponents; I don't think the same is especially true of EA (as a collection of causes). 

There are people who argue for localism, or the impracticality of measuring impact, but I can't think of the last time I've seen one of those people have a bad influence on EA. The meat industry wants to kill animals; theists want to promote religion; ineffective charities want to... raise funds? Not as directly opposed to what we're doing.

I suppose we did have the Will MacAskill/Giles Fraser debate at one point, though. MacAskill also took on Peter Buffet in an op-ed column. I don't know how he feels about those efforts in retrospect. 

We could certainly use more eloquent/impassioned public speakers on EA topics (assuming they are scrupulous, as you say), but I wouldn't think of them as "bulldogs" -- just regular advocates.

Comment by aarongertler on EA Forum Prize: Winners for August 2020 · 2020-11-06T23:22:11.290Z · EA · GW

I think someone doing this in a post is likely to aid their chances of winning a prize, but that's not an official thing -- just based on how I'd expect judges to react (and how I might react, depending on the context). The "changed my mind" post/comment is one of several really good post/comment genres.

Comment by aarongertler on Consider paying me (or another entrepreneur) to create services for effective altruism · 2020-11-05T13:30:49.935Z · EA · GW

Note from the lead moderator:

I've moved this post back to "Personal Blog" (another moderator approved it for Frontpage originally) and will do the same for other posts of this type. This allows authors to publish and advertise them in other places without the posts taking up space on the homepage.

The Forum's moderators have had some discussion in the past on whether job listings should ever appear on Frontpage; it was a close call, but we think a few such posts once in a while is okay. However, I expect that there are many more potential job applicants than potential grantmakers on the Forum, so posts like this are less likely to be relevant to a random reader than a job listing. Hence, posts like this should be "Personal Blog" unless they involve discussion of other topics as well.

(That said, this was a nice idea, and I hope the author gets some contacts out of it!)

Comment by aarongertler on Consider paying me to do AI safety research work · 2020-11-05T13:24:43.929Z · EA · GW

Note from the lead moderator:

While posts like this are fine, I'll be moving them to "Personal Blog" from now on (including this post and Remmelt's) unless they also discuss topics outside of the author's personal request for funding.

Comment by aarongertler on Instability risks of the upcoming U.S. election and recommendations for EAs · 2020-11-03T08:14:08.461Z · EA · GW

Because today is Election Day, I'm leaving this post on Frontpage; it contains advice that could be relevant to the safety and stability of readers' communities, and is only partisan to the extent that it acknowledges one candidate's being much more likely to cause instability than the rest. (Here's our policy on political posts; this one toes the line, so I'm using discretion as a moderator.)

I may move the post to Personal Blog after election season is over; here's hoping that happens in a week rather than on January 20th.

Comment by aarongertler on Progress Open Thread: November 2020 · 2020-11-02T22:29:43.738Z · EA · GW

Togo just became the first African country to have officially (according to the World Health Organization) eliminated sleeping sickness!

Comment by aarongertler on Making More Sequences · 2020-10-28T19:13:00.723Z · EA · GW

At the moment, we still plan to add more material to the initial sequence (I'm glad you liked it). 

I'm currently working with someone who is writing an "EA encyclopedia" with detailed articles on a number of key terms and concepts, which will eventually replace many of the tags on the Forum (turning this site into something like an EA Wiki). I expect the existence of those articles to make producing further introductory material much easier, and I've deprioritized further sequence-writing until that content starts to be available.

That said, a good intro sequence is something I won't be able to produce quickly given all my other work. If someone wants to try producing part of it themselves (particularly if they're introducing a certain cause area), I'd love to see that! I can review that person's work, and what they've done could become part of an "official" introduction.

Comment by aarongertler on Why Research into Wild Animal Suffering Concerns me · 2020-10-28T19:08:00.880Z · EA · GW

While I think the future you suggest is unlikely for reasons others have articulated, I will say that I really appreciate you making this post in the first place! Challenging a popular idea can be difficult, and I appreciate the work you did to hedge your concerns:

  • Using "concerns me" in the title (rather than e.g. "Why WAS research is a bad idea")
  • Noting that you assume people interested in the field don't actually want the bad outcome you're worried about (but that despite this assumption, you still want to be sure)

I'm guessing plenty of other people who have read or will read the Forum shared this concern to some degree, and I'm glad this post gives our site's researchers a chance to explain their positions directly.

Comment by aarongertler on 4 Years Later: President Trump and Global Catastrophic Risk · 2020-10-26T16:44:34.082Z · EA · GW

In accordance with the Forum's policy on political posts, I'm keeping this one in "Personal Blog." 

However, I agree with much of this analysis, and I always appreciate it when someone takes the time to look in on a past prediction and evaluate the outcome.

Comment by aarongertler on So-Low Growth's Shortform · 2020-10-21T13:34:45.135Z · EA · GW

People have tried to estimate similar figures before. See Jeff Kaufman on dairy offsets or Gregory Lewis on meat-eating (searching the term "moral offset" will help you find other examples I haven't linked).

Some people also think this idea is conceptually bad or antithetical to EA.

Comment by aarongertler on Progress Open Thread: October // Student Summit 2020 · 2020-10-21T13:19:36.724Z · EA · GW

I'll kick things off!

This month, I finished in second place at the Magic: the Gathering Grand Finals (sort of like the world championship). I earned $20,000 in prize money and declared that I would donate half of it to GiveWell, which gave me an excuse to talk up EA on camera for thousands of live viewers and post about it on Twitter.

This has been a whirlwind journey for me; I did unexpectedly well in a series of qualifying tournaments. Lots of luck was involved. But I think I played well, and I've been thrilled to see how interested my non-EA followers are in hearing about charity stuff (especially when I use Magic-related metaphors to explain cause prioritization).

Comment by aarongertler on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-21T08:40:57.519Z · EA · GW

Sure -- that's a good thing to clarify. When I say "opposed to," I mean that the things he presently cares about don't seem connected to a cause-neutral, welfare-maximizing perspective (though I can't say I know what his motivations are, so perhaps that is what he's aiming for). 

Most notably, his PAC explicitly supports an "America First immigration policy," which seems difficult to square with his espoused libertarianism and his complaints about technological slowdown, in addition to being directly opposed to work from Open Phil and others. I don't understand exactly what his aims are at this point, but he feels far enough from the EA baseline that I wouldn't want to assume a motivation of "do the most good in a cause-neutral way" anymore.

Comment by aarongertler on Forecasting Newsletter: September 2020. · 2020-10-20T19:45:15.821Z · EA · GW

The newsletter itself; alas, the reality of time forces me to be selective about what I choose to read more closely. But I'm glad to have the opportunity to select things at all!

Comment by aarongertler on Some history topics it might be very valuable to investigate · 2020-10-19T09:27:55.723Z · EA · GW

Comment that came in from the EA Newsletter:

"I’m writing a PhD on alumni engagement with effective altruism as the philosophical background. I’m comparing six top 100 ranked universities in the world and their alumni engagement. The universities are Harvard, Penn State, Cambridge, Vienna, Uppsala, Helsinki. I am interested in any seminar or discussions about implementing effective altruism and historical research as I have been doing that myself for the past four years. I’m writing the PhD for the University of Helsinki for professor Laura Kolbe, I myself live on the Åland Islands." 

Pia Widén (pia.widen at

Comment by aarongertler on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-19T09:16:34.542Z · EA · GW

Was the "at least one EA" someone in a position of influence?

My understanding is that Thiel stopped being especially interested in EA around the time he got into politics, but he might still be making AI-related donations here and there. I'd be surprised if he had wanted to speak at any recent EA Global conference, as most of his current work seems either opposed to or orthogonal to common EA positions. But I don't have any special knowledge here. (Certainly he was never Glebbed.)

Comment by aarongertler on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-19T08:43:07.503Z · EA · GW

If you're still skeptical that people are reluctant or afraid to speak positively about Trump or Republicans in general...

I never said I was skeptical that people felt this way. I'm quite certain people do feel this way, because you've said you feel it and so have others. I just wanted to hear more details about that feeling of reluctance/fear, and to express doubt that no Trump supporter would ever be willing to express that support in a public EA discussion.

It's certainly possible, even likely, that "some people" in the community would react negatively to hearing that someone was a Trump supporter, in a way that made future interactions a bit less collaborative or more fraught. But I think that's the nature of expressing strong opinions generally, in almost any community. Someone who came out as a communist would likely face similar challenges. Same for someone who was very religious, or a supporter of PETA, or a fan of Antifa. Probably not for a moderate liberal, even an outspoken one, but that's because EAs are overwhelmingly moderately liberal.

This phenomenon makes it hard to have totally open discussions on many topics, politics among them. And I agree with you that any public discussion about politics within EA could be skewed* -- but I just don't think it would be skewed to the point that an idea many people held wouldn't show up at all.

People write controversial Forum comments all the time. People have bitter online arguments in various EA spaces. There are plenty of loud and opinionated people in the community who aren't concerned about how others will react to them (heck, anyone who wants to can make an anonymous account on the Forum -- where are the anonymous Trump supporters?).

*This is one reason I'd prefer we not have much partisan political discussion here. And if a group of people were to look for "political donation opportunities," I'd hope that they would start by looking carefully at the object-level virtues of each important candidate in a given election, without partisanship.


Can it really be that out of thousands of forum users and FB friends/followers, there is not one Trump or Republican supporter who might object to voting for Democrats on object-level grounds,

I've seen political posts from EAs I know that drew in Trump supporters who happened to be in their social networks (though I'm not sure how many of said supporters would consider themselves interested in EA). But I don't spend much time on Facebook in general, and EA Twitter doesn't have especially active political conversation in my experience (most of Rob Wiblin's recent posts have ~1 comment, and he's one of the most popular EA Twitter users). So I'm interested in your experiences (and those of other people who spend more time than I do in the relevant spaces). Are these FB/Twitter posts getting 5 comments? 10? 50?

When people respond to partisan political posts from friends they know personally, I'd expect agreeable responses to dominate. When my socialist Facebook friends post about socialism, they get a lot of responses from other socialists and very few from capitalists, even though I expect they have lots of capitalists in their social networks, and I wouldn't expect capitalists who respond to them to be worried about shunning given that capitalism is a normal position in elite spaces. I think people just don't like starting arguments with their friends over touchy subjects.

Of course, this assumes that the dynamic in play is "responding to a friend." If these are posts in discussion-oriented spaces and there are lots of responses, and the responses are all one-sided, that's stronger evidence that people don't want to speak out in support of Trump. (However, it also seems plausible that EA is so anti-Trump generally that there just aren't people around who disagree and care enough to comment, especially given how much of the community is non-American.)


As for this Forum: On the post we're now discussing, the opinionated comments are (as I type this) as follows:

  • Our back-and-forth (with Ian's contribution)
  • Your comment which links to other comments where you push back on the post
  • xccf's comment pushing back on the post and making what I see as a good-faith attempt to steelman Trump supporters
  • Ryan Carey's comment pushing back on the post
  • Linch's comment pushing back on the post (and related discussion)
  • Abraham Rowe's generally supportive comment
  • My comment pushing back on the post (though my tone was supportive)
  • Ben's comment pushing back on the post (but supporting Ian for taking the time to discuss things)
  • MarcSerna's comment pushing back on the post
  • MichaelStJules presenting some neutral thoughts/feedback
  • JTM endorsing the concept of the post and pushing for more discussion
  • Jordan Warner's comment pushing back on the post

Almost unanimously, people seem to want EA to stay out of partisan political stuff. No one aside from Ian and maybe JTM actually argued against Trump on the object level. I'm not surprised that there were no pro-Trump arguments on the object level.

Comments on the "recommendations for donating to beat Trump post" are:

  • Me noting that we won't frontpage it (and expressing support for the cause)
  • A discussion between Peter and Ian about the general case for donating vs. volunteering
  • Other comments by Peter where he mentions he'd consider donating

And... that's it. Only three unique respondents, hardly a landslide even if they all express a desire for Trump to lose the election.

On which other Forum posts would it make sense for a pro-Trump EA to discuss their support for Trump? The subject is only now coming up with the election season almost over (kbog had his "Candidate Scoring System" posts a while back, but those didn't lead to much or any partisan discussion IIRC). If it took until now for someone to write the post "supporting Democrats might be a good EA cause" and 90% of EA leans left, I'm not surprised that the post "supporting Republicans might be a good EA cause" hasn't come up.

In some posts made around the time of the 2016 election, there were a few comments pointing out potential benefits of a President Trump (see HenryMaine and Larks here). There were more anti-Trump comments, but nothing surprising given the underlying demographics of EA. I just don't think there's enough overall activity on the Forum for "no recent object-level pro-Trump comments" to mean much.

Comment by aarongertler on Charity Navigator acquired ImpactMatters and is starting to mention "cost-effectiveness" as important · 2020-10-16T09:56:30.888Z · EA · GW

This was an excellent comment and saved me a lot of time I'd otherwise have spent reading the methodology in full. Thank you for posting it!

Comment by aarongertler on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-16T09:55:03.296Z · EA · GW

Strongly upvoted. You expressed what I disliked about that paragraph much more succinctly than I was able to in my comment. (Also, I wanted to upvote some of your comments in the deleted thread, so this strong upvote is partly meant to deliver that extra karma.)

Comment by aarongertler on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-16T09:53:34.230Z · EA · GW

Is your fear of saying positive things about Trump based on how you would expect people in the EA community to react? Or is it more about people elsewhere on the internet who might happen to see your views, or track them down because of an unrelated grudge they hold against you?

I can easily imagine someone in the EA community taking a pro-Trump position for EA-related reasons (e.g. a belief that abortion is a hugely neglected cause area, or that gains from Trump's economic/war-avoiding policies overwhelm losses from his other policies). What do you predict would happen to someone like that? Would you expect them to be fired if they held a position at an EA org? Barred from attending EA Global? Shunned by people in their local group?

I'll also note that the positions in a discussion like the one Ian proposes aren't really "pro-Trump" and "anti-Trump": they are "Trump is so bad that preventing his election is a competitive EA cause area" and "no, this doesn't really measure up to other cause areas, or is otherwise a bad idea." Someone could easily argue for the latter point even if they would never vote for Trump.

(That said, if this discussion really would exclude everyone who could possibly be taken as a Trump supporter, that seems very unhealthy to me. I just don't think that's what would happen.)

Comment by aarongertler on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T06:26:42.947Z · EA · GW

Any discussion of the Munich cancellation as a potential indicator of "norms" should probably note that there are hundreds of talks by interesting thinkers each year at EA conferences/meetups around the world. At least, people I'd consider interesting, even if they don't come into conflict with social norms as regularly as Robin.

On a graph of "controversial x connection to EA," Robin is in the top corner (that is, I can't think of anyone who is both at least as controversial and at least as connected to EA, other than maybe Peter Singer). So all these other talks may not say much about our "norm" for handling controversial speakers. But based on the organizers I know, I'd be surprised if most other EA groups (especially the bigger/more experienced ones) would have disinvited Robin.

In terms of your own feelings about contributing/collaborating in EA, do you think sentiments like those of the Munich group are common? It seems like their decision was widely criticized by lots of people in EA (even those who, like me, defended their right to make the decision/empathized with their plight while saying it was the wrong move), and supported by very few. If anything, I updated from this incident in the direction of "wow, EA people are even more opposed to 'cancel culture' than I expected."

Comment by aarongertler on jackmalde's Shortform · 2020-10-14T23:35:07.808Z · EA · GW

Therefore I think we need to give people a more realistic conception of a life that is barely worth living to wrap their heads around.

My personal mental image of the Repugnant Conclusion always involved people living more realistic/full lives, with reasonable amounts of boredom being compensated for with just enough good feelings to make the whole thing worthwhile. When I read "muzak and potatoes", my mind conjured a society of people living together as they consumed those things, rather than people in isolation chambers. But I could be unusual, and I think someone could write up a better example than Parfit's if they tried.

Comment by aarongertler on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-14T23:22:14.510Z · EA · GW

Personal context: I'm writing this as an individual rather than a moderator. I was happy to see your recommendations for political donations in the other threads, and I agree with you that the current president is much more dangerous than a more generic leader might be (though I'm not sure about "orders of magnitude"; the Iraq War was also very bad, and we missed a lot of potential climate change benefits when Al Gore didn't become president).

I generally agree with you that EA should treat politics mostly as it does other issues, though I think it's beneficial to take steps to avoid what frequently happens to other movements that become political (that is, politics sucks up more and more of their time and they lose focus on other things). This was the impetus behind the Forum's policy on political posts.

However, with the background of that general agreement, I have some questions/concerns about aspects of this post.


Many of those who push back against political engagement in EA see the left, and particularly the social justice movement, as some kind of scary juggernaut seeking world domination by mind control.

This seems like an uncharitable representation of how the median concerned Forum user expresses those concerns. I've interacted with a few people who seem to view the left as more culturally powerful than they are, but I've also had better discussions with people who had more reasonable concerns.* Using a phrase like "mind control" to describe the beliefs of one's opponents (even those on the fringes of community opinion) isn't great.

I thought the section other than this paragraph was very good, especially the point that supporting the Democratic Party doesn't equate to supporting the far left. But reading this nearly soured me on your argument in general (even though I don't think of myself as someone who "pushes back").

* For example, that while the far left isn't very numerous or popular, their support overlaps a lot with EA's core growth demographics, such that we might end up absorbing many of their norms or ideas without meaning to.


It will reduce EA's long-term impact

I have to confess I've never really understood this argument. 

Can you provide an example of someone saying this? I struggle to think of any arguments I've seen that resemble this. Maybe the idea that EA doesn't want to cut off conservative people from feeling like they can belong in the movement, because that makes it harder to grow the movement?


A better way, in my opinion, would be to have some kind of formal system to weigh the risks against the opportunities on a case-by-case basis, as well as the risks of not engaging.

In principle, a system like this could be useful, but EA doesn't have a system like this for anything right now. Even an organization like EA Funds, which evaluates opportunities to do good and recommends that some of them get support, is just a few experienced people trying to convince other people to trust their judgment. If someone wanted to, they could start doing this for political opportunities right now (and indeed, we've had people try to compare candidates before).

Within EA, individuals typically vote with their wallets, job applications, and Forum posts. There's not much in the way of coordination around "official" EA recommendations, even if some orgs are trusted by much of the community to recommend things. The equivalent of voting against community approval for candidate X (as a cause worth supporting over various other causes) is to not donate to candidate X.

So while I like the idea of an organization that reviews political campaigns to seek out opportunities for impact — as I would almost any org that reviews opportunities for impact — I don't think we'd need that organization to be uniquely "official" within EA. They'd just have to present their arguments clearly and convincingly. 

(And, I suppose, not be actively opposed by other orgs in the movement, though I don't think that would be likely to happen as long as the org in question avoided representing itself as "the official EA political charity" or something.)


Were this organization to exist, one factor that would help me personally trust it would be if their most common recommendation were "no action, this campaign doesn't seem important/tractable enough to support either candidate rather than giving to (insert nonpartisan charity)."