Racial Demographics at Longtermist Organizations 2020-05-01T16:02:16.202Z
Which Community Building Projects Get Funded? 2019-11-13T20:31:17.209Z


Comment by AnonymousEAForumAccount on How well did EA-funded biorisk organisations do on Covid? · 2021-06-04T19:36:27.269Z · EA · GW

Great question, and I look forward to following this discussion!

A tangential (but important in my opinion) comment… You write that “EA funders have funded various organisations working on biosecurity and pandemic preparedness”, but I haven’t seen any evidence that EA funders aside from Open Phil have funded biosecurity in any meaningful way. While Open Phil has funded all the organizations you listed, none of them have been funded by the LTFF, Survival and Flourishing Fund, the Centre on Long-Term Risk Fund, or BERI, and nobody in the EA Survey reported giving to any of the organizations.

The LTFF has admittedly made some small biosecurity grants (though as a reference it has granted ~19x more to AI), and FHI (which has relatively broad support from EA and/or longtermist funders) does some biosecurity work. But broadly speaking, I think it’s a (widely held) misconception that EA donors besides Open Phil were materially prioritizing biosecurity grantmaking prior to the pandemic.

Comment by AnonymousEAForumAccount on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-22T15:56:17.876Z · EA · GW

I’m glad you’ve been discouraging people from working at Leverage, and haven’t been involved with them for a long time.

In our back and forth, I noticed a pattern of behavior that I so strongly associate with Leverage (acting as if one’s position is the only “rational” one, ignoring counterevidence that’s been provided and valid questions that have been asked, making strong claims with little evidence, accusing the other party of bad faith) that I googled your name plus Leverage out of curiosity. That’s not a theory, that’s a fact (and as I said originally, perhaps a meaningless one).

But you're right: it was a mistake to mention that fact, and I’m sorry for doing so. 

Comment by AnonymousEAForumAccount on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-22T14:31:14.052Z · EA · GW

If it’s any consolation, this was weird as hell for me too; we presumably both feel pretty gaslit at this point. 

FWIW (and maybe it’s worth nothing), this was such a familiar and very specific type of weird experience for me that I googled “Ryan Carey + Leverage Research” to test a theory. Having learned that your daily routine used to involve talking and collaborating with Leverage researchers “regarding how we can be smarter, more emotionally stable, and less biased”, I now have a better understanding of why I found it hard to engage in a productive debate with you.

If you want to respond to the open questions Brian or KHorton have asked of you, I’ll stay out of those conversations to make it easier for you to speak your mind.

Thanks for everything you’ve done for the EA community! Good luck!

Comment by AnonymousEAForumAccount on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-21T23:10:35.812Z · EA · GW

This is a really insightful comment.

The dynamic you describe is a big part of why I think we should defer to people like Peter Singer even if he doesn’t work on cause prioritization full time. I assume (perhaps incorrectly) that he’s read stuff like Superintelligence, The Precipice, etc. (and probably discussed the ideas with the authors) and just doesn’t find their arguments as compelling as Ryan does.

Comment by AnonymousEAForumAccount on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-21T22:39:53.838Z · EA · GW

A: I didn't say we should defer only to longtermist experts, and I don't see how this could come from any good-faith interpretation of my comment. Singer and Gates should get some weight, to the extent that they think about cause prio and issues with short and longtermism; I'd just want to see the literature.


You cited the views of the leaders forum as evidence that leaders are longtermist, and completely ignored me when I pointed out that the “attendees [were] disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.” I also think it’s unreasonable to simply declare that “Christiano, Macaskill, Greaves, Shulman, Bostrom, etc” are “the most accomplished experts” but require “literature” to prove that Singer and Gates have thought a sufficient amount about cause prioritization. 

I’m pretty sure in 20 years of running the Gates Foundation, Bill has thought a bit about cause prioritization, and talked to some pretty smart people about it. And he definitely cares about the long-term future; he just happens to prioritize climate change over AI. Personally, I trust his philanthropic and technical credentials enough to take notice when Gates says stuff like:

[EAs] like working on AI. Working on AI is fun. If they think what they’re doing is reducing the risk of AI, I haven’t seen that proof of that. They have a model. Some people want to go to Mars. Some people want to live forever. Philanthropy has got a lot of heterogeneity in it. If people bring their intelligence, some passion, overall, it tends to work out. There’s some dead ends, but every once in a while, we get the Green Revolution or new vaccines or models for how education can be done better. It’s not something where the philanthropists all homogenize what they’re doing.

Sounds to me like he's thought about this stuff.

I agree that some longtermists would favour shorttermist or mixed content. If they have good arguments, or if they're experts in content selection, then great! But I think authenticity is a strong default.

You’ve asked us to defer to a narrow set of experts, but (as I previously noted) you’ve provided no evidence that any of the experts you named would actually object to mixed content. You also haven’t acknowledged evidence that they’d prefer mixed content (e.g. Open Phil’s actual giving history or KHorton’s observation that “Will MacAskill [and] Ajeya Cotra [have] both spoken in favour of worldview diversification and moral uncertainty.”) I don’t see how that’s “authentic.”

In my ideal universe, the podcast would be called an "Introduction to prioritization", but also, online conversation would happen on a "priorities forum", and so on. 

I agree that naming would be preferable. But you didn’t propose that name in this thread; you argued that an “Intro to EA” playlist,, and the EA Handbook (i.e. 3 things with “EA” in the name) should have narrow longtermist focuses. If you want to create prioritization handbooks, forums, etc., why not just go create new things with the appropriate names instead of co-opting and changing the existing EA brand?

Comment by AnonymousEAForumAccount on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-21T15:27:38.900Z · EA · GW

I definitely think (1) is important. I think (2-3) should carry some weight, and agree the amount of weight should depend on the credibility of the people involved rather than raw popularity. But we’re clearly in disagreement about how deference to experts should work in practice.

There are two related questions I keep coming back to (which others have also raised), and I don’t think you’ve really addressed them yet.

A: Why should we defer only to longtermist experts? I don’t dispute the expertise of the people you listed. But what about the “thoughtful people” who still think neartermism warrants inclusion? Like the experts at Open Phil, which splits its giving roughly evenly between longtermist and neartermist causes? Or Peter Singer (a utilitarian philosopher, like 2/3 of the people you named), who has said (here, at 5:22): “I do think that the EA movement has moved too far and, arguably, there is now too much resources going into rather speculative long-termism.” Or Bill Gates? (“If you said there was a philanthropist 500 years ago that said, “I’m not gonna feed the poor, I’m gonna worry about existential risk,” I doubt their prediction would have made any difference in terms of what came later. You got to have a certain modesty. Even understanding what’s gonna go on in a 50-year time frame I would say is very, very difficult.”)

I place negligible weight on the fact that “the EA leaders forum is very long-termist” because (in CEA’s words): “in recent years we have invited attendees disproportionately from organizations focused on global catastrophic risks or with a longtermist worldview.”

I agree there’s been a shift toward longtermism in EA, but I’m not convinced that’s because everyone was convinced by “the force of the arguments” like you were. At the same time people were making longtermist arguments, the views of a longtermist forum were represented as the views of EA leaders, ~90% of EA grant funding went to longtermist projects, CBG grants were assessed primarily on the number of people taking “priority” (read: longtermist) jobs, the landing page didn’t include global poverty and had animals near the bottom of an introductory list, EA Globals highlighted longtermist content, etc. Did the community become more longtermist because they found the arguments compelling, because the incentive structure shifted, or (most likely in my opinion) some combination of these factors?


B: Many (I firmly believe most) knowledgeable longtermists would want to include animal welfare and global poverty in an “intro to EA” playlist (see Greg’s comment for example). Can you name specific experts who’d want to exclude this content (aside from the original curators of this list)? When people want to include this content and you object by arguing that a bunch of experts are longtermist, the implication is that generally speaking those longtermist experts wouldn’t want animal and poverty content in introductory material. I don’t think that’s the case, but feel free to cite specific evidence if I’m wrong.

Also: if you’re introducing people to “X” with a 10 part playlist of content highlighting longtermism that doesn’t include animals or poverty, what’s the harm in calling “X” longtermism rather than EA?

Comment by AnonymousEAForumAccount on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-19T22:26:40.033Z · EA · GW

It's frustrating that I need to explain the difference between the “argument that would cause us to donate to a charity for guide dogs” and the arguments being made for why introductory EA materials should include content on Global Health and Animal Welfare, but here goes…

People who argue for giving to guide dogs aren’t doing so because they’ve assessed their options logically and believe guide dogs offer the best combination of evidence and impact per dollar. They’re essentially arguing for prioritizing things other than maximizing utility (like helping our local communities, honoring a family member’s memory, etc.) And the people making these arguments are not connected to the EA community (they'd probably find it off-putting).

In contrast, the people objecting to non-representative content branded as an “intro to EA” (like this playlist or the EA Handbook 2.0) are people who agree with the EA premise of trying to use reason to do the most good. We’re using frameworks like ITN, we’re just plugging in different assumptions and therefore getting different answers out. We’ve heard longtermist arguments for why their assumptions are right. Many of us find those longtermist arguments convincing and/or identify as longtermists, just not to such an extreme degree that we want to exclude content like Global Health and Animal Welfare from intro materials (especially since part of the popularity of those causes is due to their perceived longterm benefits). We run EA groups and organizations, attend and present at EA Global, are active on the Forum, etc. The vast majority of the EA experts and leaders we know (and we know many) would look at you like you’re crazy if you told them intro to EA content shouldn’t include global health or animal welfare, so asking us to defer to expertise doesn’t really change anything. 

Regarding the narrow issue of “Crucial Considerations” being removed from, this change was made because it makes no sense to have an extremely technical piece as the second recommended reading for people new to EA. If you want to argue that point, go ahead, but I don’t think you’re being fair by portraying it as some sort of descent into populism.

Comment by AnonymousEAForumAccount on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-30T14:26:41.977Z · EA · GW

I certainly wouldn't subject our random Googlers to eight weeks' worth of material! To clarify, by "this content" I mean "some of this content, probably a similar amount to the amount of content we now feature on", rather than "all ~80 articles".


Ah, thanks for clarifying :) The devil is always in the details, but "brief and approachable content" following the same rough structure as the fellowship sounds very promising. I look forward to seeing the new site!

Comment by AnonymousEAForumAccount on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-29T18:50:23.316Z · EA · GW

Thank you for making these changes Aaron, and for your openness to this discussion and feedback!

You’re correct, I was referring to the reading list on the homepage. The changes you made there, to the key ideas series, and to the resources page (especially when you complete the planned reordering) all seem like substantial improvements. I really appreciate that you've updated the site!

I took a quick look at the Fellowship content, and it generally looks like you’ve chosen good content and done a reasonable job of providing a balanced overview of EA (thanks for getting input from the perspectives you mentioned). Ironically, my main quibble with the content (and it’s not a huge one) is that it’s too EA-centric. For example, if I were trying to convince someone that pandemics are important, I’d show them Bill Gates’ TED Talk on pandemics rather than an EA podcast, as the former approach leverages Gates’ and TED’s credibility.

While I generally think the Fellowship content appears good (at least after a brief review), I still think it’d be a very big mistake to “adapt to refer to this content as our default introduction.” The Fellowship is for people who opt into participating in an 8-week program with an estimated 2-3 hours of preparation for each weekly session; is for people who google “effective altruism”. There’s an enormous difference between those two audiences, and the content they see should reflect that difference.

As an example, the first piece of core content in the Fellowship is a 30-minute intro to EA video, whereas I’d imagine should try to communicate key ideas in just a few minutes and then quickly try to get people to e.g. sign up for the EA Newsletter. That said, we shouldn’t have to guess what content works best on the homepage; we should be able to figure it out experimentally through A/B testing.

Comment by AnonymousEAForumAccount on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-28T13:43:20.395Z · EA · GW

Thanks for this response Max!

1.  I’m torn. On one hand (as I mentioned to Aaron) I appreciate that CEA is making efforts to offer realistic estimates instead of overpromising or telling people what they want to hear. If CEA is going to prioritize the EA Wiki and would rather not outsource management of, I’m legitimately grateful that you’re just coming out and saying that. I may not agree with these prioritization decisions (I see it as continuing a problematic pattern of taking on new responsibilities before fulfilling existing ones), but at the end of the day those decisions are yours to make and not mine.

On the other hand, I feel like substantial improvements could be made with negligible effort. For instance, I think you’d make enormous progress if you simply added the introductory article on Global Health and Development to the reading list on the homepage, replacing “Crucial Considerations and Wise Philanthropy”.

Global Health is currently a glaring omission, since it is the most popular cause in the EA community and it is highly accessible to an introductory audience. And I think nearly everyone (near- or long-termist) would agree that “Crucial Considerations” (currently second on the reading list after a brief introduction to EA) is quite obviously not meant for an introductory audience. It assumes a working understanding of x-risk (in general and specific x-risks), has numerous slides with complex equations, and uses highly technical language that will be inscrutable to most people who have only read a brief intro to EA (e.g. “we should oppose extra funding for nanotechnology even though superintelligence and ubiquitous surveillance might be very dangerous on their own, including posing existential risk, given certain background assumptions about the technological completion conjecture.”)

You’ve written (in the same comment you quoted): “I think that CEA has a history of pushing longtermism in somewhat underhand ways… given this background of pushing longtermism, I think it’s reasonable to be skeptical of CEA’s approach on this sort of thing.” You don’t need to hire a contractor or prioritize an overhaul of the site to address my skepticism. But it would go a long way if Aaron were to spend a day looking for low hanging fruit like my suggested change, or even if you just took the tiny step of adding Global Health to the list of (mostly longtermist) causes on the homepage. I assume the omission of Global Health was an oversight. But now that it’s been called to your attention, if you still don’t think Global Health should be added to the homepage, I doubt there’s anything you can say or do to resolve my skepticism.


2.  Running is just one example of work that CEA undertakes on behalf of the broader community (EAG, groups work, and community health are other examples). Generally speaking, how (if at all) do you think CEA should be accountable to the broader community when conducting this work? To use an absurd example, if CEA announced that the theme for EAG 2022 is going to be “Factory farmed beef… it’s what’s for dinner”, what would you see as the ideal process for resolving the inevitable objections?

Now may not be the right time for you to explain how you think about this, and this comment thread almost certainly isn’t the right place. But I think it’s important for you to address these issues at some point in the not too distant future. And before you make up your mind, I hope you’ll gather input from as broad a cross section of the community as possible.

Comment by AnonymousEAForumAccount on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-25T19:21:36.871Z · EA · GW

FYI, I'm still seeing an error message, albeit a different one than earlier. Here's what I get now:

Your connection is not private

Attackers might be trying to steal your information from (for example, passwords, messages, or credit cards). Learn more


That said, I didn't mean to imply the site has historically had abnormal downtime, sorry for not making that clear.

Comment by AnonymousEAForumAccount on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-25T17:11:31.499Z · EA · GW
  1. Change up the introductory material a lot.

I’m glad there are some changes planned to the introductory materials and resources page. As you update this material, what reference class will you be using? Do you want to reflect the views of the EA community? Engaged EAs? CEA? EA “leaders”?

I’m also curious if/how that reference class will be communicated on the site, as I think that’s been a problem in the past. For the past few years (until the modest changes you made recently) the resources page has been virtually identical to the EA Handbook 2.0, which (for better or worse) “emphasized [CEA’s] longtermist view of cause prioritization, contained little information about why many EAs prioritize global health and animal advocacy, and focused on risks from AI to a much greater extent than any other cause.” If it was a problem that the handbook “ostensibly represented EA thinking as a whole, but actually represented the views of some of CEA’s staff”, I'd think that problem is magnified immensely when that content is on

2. Change the "Get Involved" section… This project won't necessarily be owned by CEA

Would CEA ever consider temporarily or permanently transferring the broader ownership of to another person/organization? It seems like the site could easily be a full time job for one or more people. Beyond updating the content, someone could be A/B testing different types of content and sharing those lessons with the community, optimizing conversions, running marketing tests, doing SEO, publishing regular updates on traffic and engagement, etc.

CEA hasn’t really prioritized over the last couple of years and doesn’t want to commit to prioritizing it going forward (and I commend you for trying to give realistic expectations about your future priorities). But it really feels like a missed opportunity that the landing page for people who google “effective altruism” has been deprioritized for so long. With so many EAs looking for jobs and/or volunteer opportunities and $1.8 million in the EA Infrastructure Fund (which is now considering active grantmaking), it seems like CEA might be able to delegate this work to someone who could make substantial progress (even if CEA wants to "use the Forum as a portal to a wider range of EA content/opportunities" in parallel.)

(And ironically, is down at the time of writing. Just submitted a ticket via the EA Funds page…)

Comment by AnonymousEAForumAccount on Some quick notes on "effective altruism" · 2021-03-25T14:44:32.704Z · EA · GW

This is great! Can you summarize your findings across these tests?

Comment by AnonymousEAForumAccount on Responses and Testimonies on EA Growth · 2021-03-24T16:11:58.538Z · EA · GW

Thanks Jonas!

Comment by AnonymousEAForumAccount on Responses and Testimonies on EA Growth · 2021-03-24T14:16:46.963Z · EA · GW

For this to be the explanation presumably intra-EA conflict would not merely need to be driving people away, but driving people away at higher rates than it used to. It's not clear to me why this would be the case.

My mental model is that in the early years, a disproportionately large portion of the EA community consisted of the community’s founders and their friends (and friends of friends, etc.). This cohort is likely to be very tolerant of the early members’ idiosyncrasies; it’s even possible some of those friendships were built around those idiosyncrasies. As time went on, more people found EA through other means (reading DGB in a university class, hearing about EA on a podcast, etc.). This new cohort is much less likely to tolerate those early idiosyncrasies. (The Pareto Fellowship application process could be a good example.)

It's also worth noting that highly engaged EAs are quite close socially. It's possible that many of those 178 people might be thinking of the same people!

Good point about double counting as a possible issue. That means we shouldn’t try to infer the number of people driven away. However, we should still be able to say that bad experiences with other EAs are causing more engaged EAs to leave than other factors, since those other factors are also subject to double counting.

Comment by AnonymousEAForumAccount on Responses and Testimonies on EA Growth · 2021-03-23T21:39:29.947Z · EA · GW

Another factor that has slowed EA’s growth over the years: people are leaving EA because of bad experiences with other EAs. 

That’s not some pet theory of mine; that’s simply what EAs reported in the 2019 EA survey. There were 178 respondents who reported knowing a highly engaged EA who left the community, and by far the most cited factor (37% of respondents) was “bad experiences with other EAs.” I think it’s safe to say these bad experiences are also likely driving away less engaged EAs who could have become more engaged.

One could argue that this factor is minor in the scheme of things, and maybe they’d be right. But this is another clear example where a) there’s something negatively impacting EA’s growth, b) the problem is obviously caused by EAs, and c) the problem didn’t even make the list of possible explanations. I think that supports my argument about EA’s blind spots.

Comment by AnonymousEAForumAccount on Responses and Testimonies on EA Growth · 2021-03-23T21:30:04.831Z · EA · GW

I largely agree with your categorizations, and how you classify the mistakes. But I agree with Max that I’d expect 1 and especially 2 to impact growth directly.

FWIW, I don’t think it was a mistake to make longtermism a greater priority than it had been (#3), but I do think mistakes were made in pushing this way too far (e.g. having AI/longtermist content dominate the EA Handbook 2.0 at the expense of other cause areas) and I’m concerned this is still going on (see for example the recent announcement that the EA Infrastructure Fund’s new managers are all longtermists.)

Comment by AnonymousEAForumAccount on Responses and Testimonies on EA Growth · 2021-03-23T20:02:19.166Z · EA · GW

Thanks AGB!

But it's true that in neither case would I expect the typical reader to come away with the impression that a mistake was made, which I think is your main point and a good one. This is tricky because I think there's significant disagreement about whether this was a mistake or a correct strategic call, and in some cases I think what is going on is that the writer thinks the call was correct (in spite of CEA now thinking otherwise), rather than simply refusing to acknowledge past errors.

I do think it was a mistake to deprioritize GWWC, though I agree this is open to interpretation. But I want to clarify that my main point is that the EA community seems to have strong and worrisome cultural biases toward self-congratulation and away from critical introspection.

Comment by AnonymousEAForumAccount on Responses and Testimonies on EA Growth · 2021-03-23T16:14:14.241Z · EA · GW

Thanks Max! It’s incredibly valuable for leaders like yourself to acknowledge the importance of identifying and learning from mistakes that have been made over the years.

Comment by AnonymousEAForumAccount on Responses and Testimonies on EA Growth · 2021-03-22T16:55:47.625Z · EA · GW

Thanks for raising this question about EA's growth, though I fully agree it would have been better to frame that question more like: “Given that we're pouring a substantial amount of money into EA community growth, why doesn't it show up in some of these metrics?" To that end, while I may refer to “growing” or “not growing” below for brevity I mean those terms relative to expectations rather than in an absolute sense. With that caveat out of the way… 

There’s a very telling commonality about almost all the possible explanations that have been offered so far. Aside from a fraction of one comment, none of the explanations in the OP or this followup post even entertain the possibility that any mistakes by people/organizations in the EA community inhibited growth. That seems worthy of a closer look. We expect an influx of new funding (ca. ~2016-7) to translate into growth (with some degree of lag), but only if it is deployed in effective strategies that are executed well. If we see funding but not growth, why not look at which strategies were funded and how well they were executed?

CEA is probably the most straightforward example to look at, as an organization that has run a lot of community building projects and that received much of Open Phil’s initial “EA focus area” funding. Let’s look only at projects from 2015-2019, as more recent projects might not be reflected in growth statistics yet. In rough chronological order, here are some projects where mistakes could have plausibly impacted EA’s growth trajectory.

Shouldn’t we consider the possibility that one or more of these issues contributed to EA’s underwhelming growth trajectory? The GWWC case seems pretty clear cut. CEA realized they had made a “mistake” (CEA’s word) by under-prioritizing GWWC, and spun the project off. Now that GWWC is getting proper attention, growth has very rapidly picked back up. If GWWC had been prioritized all along, wouldn’t we be seeing better growth metrics than we see now? To his credit, Aaron Gertler (who works at CEA) flagged GWWC’s growth rate as “solid evidence of weaknesses in community building strategy”, though nobody else engaged with this observation.

If we use Occam’s Razor to try and understand a lack of growth, isn’t “we deprioritized a major growth vehicle” a simpler explanation than “there was a shift in which rationalist blogs were popular”, “Google Trends data is wrong”, or “EA is innate, you can’t convert people”? And couldn’t you say the same thing about EA Grants/CBGs granting ~$4 million less than planned (~$2m granted vs. ~$6m planned) in 2018-19? Or the Pareto Fellowship putting “nearly 500” applicants (all people so eager to deepen their engagement with EA that they applied for an intensive fellowship) and “several hundred” semi-finalists through what one interviewee described as: “one of the strangest, most uncomfortable experiences I've had over several years of being involved in EA. I'm posting this from notes I took right after the call, so I am confident that I remember this accurately… It seemed like unscientific, crackpot psychology. It was the sort of thing you'd expect from a New Age group or Scientology… The experience left me feeling humiliated and manipulated.”

Let me pause here to say: There’s plenty of merit in some of the other explanations for lower than expected growth that people have raised, like a pivot in emphasis from donations to careers. My list above is also clearly cherry-picked to illustrate a point. CEA has obviously also done a lot of things to help the EA community. And CEA has recognized and taken steps to address many of the problems I mentioned, like spinning off GWWC and EA Funds, changing management teams and strategies, shutting down problematic projects like EA Ventures, the Pareto Fellowship, and EA Grants, etc. And it should be obvious that any organization will make mistakes and that plenty of people beyond CEA have made mistakes since 2015 (I know I have!). Indeed, it could be useful to invite people and organizations to submit mistakes they’ve made over the years so that we can collectively learn from them. Even if these mistakes were reasonable decisions at the time, hindsight is a wonderful teacher if you use it.

But here’s the critical point: even if you don’t think “mistakes were made” is the main explanation for why growth has been slow, it should scare the hell out of you that only one person thought to mention it in passing because clearly some mistakes were made. Recognizing these problems is how you begin to fix them. CEA recognized that GWWC wasn’t getting enough attention and spun it off to remedy that. Lo and behold, in the new Executive Director’s “first three months, the rate of new pledges tripled.” 

This refusal to look inward is a blind spot, and a particularly troubling one for a community that prides itself on unbiased thinking, openness to uncomfortable ideas, and strong epistemics. Here’s a (crucial?) consideration: if GWWC had been reasonably staffed and compounding at a higher growth rate for the last five years, if the Pareto Fellowship put ~500 extremely enthusiastic EAs through a normal rather than alienating interview process in 2016, if EA Grants/CBGs had granted an additional ~$4 million into the community as planned (tripling the amount actually granted), if the time and money invested in projects that didn’t get off the ground had gone toward sustainable projects paying ongoing dividends, and if other people and organizations hadn’t made countless other mistakes over the years, what would our community growth metrics look like today? Are we confident all the necessary lessons have been learned from these mistakes, and that we’re not at risk of repeating those mistakes? 

Comment by AnonymousEAForumAccount on AMA: Holden Karnofsky @ EA Global: Reconnect · 2021-03-16T15:31:29.832Z · EA · GW

In addition to funding AI work, Open Phil’s longtermist grantmaking includes sizeable grants toward areas like biosecurity and climate change/engineering, while other major longtermist funders (such as the Long Term Future Fund, BERI, and the Survival and Flourishing Fund) have overwhelmingly supported AI with their grantmaking. As an example, I estimate that “for every dollar Open Phil has spent on biosecurity, it’s spent ~$1.50 on AI… but for every dollar LTFF has spent on biosecurity, it’s spent ~$19 on AI.” 

Do you agree this distinction exists, and if so, are you concerned by it? Are there longtermist funding opportunities outside of AI that you are particularly excited about? 

Comment by AnonymousEAForumAccount on Our plans for hosting an EA wiki on the Forum · 2021-03-09T20:08:59.078Z · EA · GW


Comment by AnonymousEAForumAccount on Our plans for hosting an EA wiki on the Forum · 2021-03-09T18:07:26.482Z · EA · GW

> I think CEA would have to tread carefully to support this work without violating Wikipedia's rules about paid editing. I may think about this more in future months (right now, I'm juggling a lot of projects). If you have suggestions for what CEA could do in this area, I'd be happy to hear them.

The paid editing restrictions are a bigger issue than I’d originally realized. But I do think it would be helpful for an experienced Wikipedia editor like Pablo to write up some brief advice on how volunteers can add EA content to Wikipedia while adhering to all their rules. Sounds like Pablo has some other experiences to share as well. That plus a list of EA content that would be good to get on Wikipedia (which I believe already exists) would probably be enough to make some good progress.

Comment by AnonymousEAForumAccount on Our plans for hosting an EA wiki on the Forum · 2021-03-09T18:06:01.189Z · EA · GW

Thank you Aaron for this detailed engagement! 

Sounds like we’re agreed that Wikipedia editing would be beneficial, and that working on Wikipedia vs. a dedicated wiki isn’t necessarily in direct conflict.

> I mostly set my own priorities at CEA; even if I came to believe that doing a lot of dedicated wiki work wasn't a good use of my time, and we decided to stop paying for work from Pablo or others like him, I can't imagine not wanting to spend some of my time coordinating other people to do this work…
>
> The reason I haven't spent much time thinking about the "volunteer-only" version of the wiki is that Pablo has a grant to work on this project for many months to come, and the project is also one of my highest current priorities at CEA. If it starts to seem like one or both of those things will stop being true in the foreseeable future, I expect to put a lot more time into preparing for the "volunteer-only" era.

As I wrote to Pablo, my biggest concern about this project is that CEA won’t sustain a commitment to it. Pablo has a grant “for many months to come”, but what happens after that? How likely do you think it is that CEA/EA Funds will pay for Pablo or someone else to work full time on content creation for years to come? If you think that’s unlikely, then you need a realistic “volunteer-only” plan that accounts for the necessary staff, incentives, etc. to implement (and if there’s not a realistic version of the “volunteer-only” plan, that’s a good thing to learn ahead of time). In the same vein, I’d suggest giving serious thought to the likelihood that an EA Wiki will remain “one of your highest priorities” (and/or a top priority for one of your colleagues) over a timeframe of years, not months.

Honestly, a significant part of the reason I’m concerned is because I feel like accurately estimating the cost of projects (and especially the costs to keep them up and running after an initial push, including the opportunity costs of not being able to pursue new projects) has been a historical weakness of CEA’s and likely the root cause of CEA’s historical “underlying problem” of “running too many projects.”  

Comment by AnonymousEAForumAccount on Our plans for hosting an EA wiki on the Forum · 2021-03-08T23:40:17.518Z · EA · GW

Thanks Pablo… I appreciate your thoughtful engagement with my comments, and all the hard work you’ve put into this project.

> How sensitive are your worries to scenarios in which the main paid content-writer fails to stay motivated, relative to scenarios in which the project fails because of insufficient volunteer effort? I'm inclined to believe that as long as there is someone whose full-time job is to write content for the Wiki (whether it's me or someone else), in combination with all the additional work that Aaron and the technical team are devoting to it, enough progress will probably occur to sustain growth over time and attract volunteer contributors.

I’m not at all concerned that “the main paid content-writer fails to stay motivated” since that can easily be solved by finding a suitable replacement. I worry a bit about “insufficient volunteer effort”, but mostly see that as a symptom of my main concern: whether organizational commitment can be sustained. 

If CEA has a good understanding of what it will cost to create and maintain the necessary content, technical platform, and volunteer structure and commits to (indefinitely) paying those costs, I’d feel pretty optimistic about the project. I’ve expressed some concerns that CEA is underestimating those costs, but would like to let Aaron respond to those concerns as I may be underestimating the paid staff time CEA is planning or otherwise missing something.

Comment by AnonymousEAForumAccount on Our plans for hosting an EA wiki on the Forum · 2021-03-07T17:30:42.041Z · EA · GW

Glad to hear the EA Concepts content was leveraged! 

I've consolidated my responses to a few comments, including the rest of yours, here.

Comment by AnonymousEAForumAccount on Our plans for hosting an EA wiki on the Forum · 2021-03-07T17:29:44.501Z · EA · GW

Thanks for raising these issues, Max! I've consolidated my responses to a few comments, including yours, here.

Comment by AnonymousEAForumAccount on Our plans for hosting an EA wiki on the Forum · 2021-03-07T17:29:07.066Z · EA · GW

Thanks for this thoughtful and informative response, Pablo! I've consolidated my responses to a few comments, including yours, here.

Comment by AnonymousEAForumAccount on Our plans for hosting an EA wiki on the Forum · 2021-03-07T17:26:59.724Z · EA · GW

Thanks, Aaron! Consolidating my replies to a few different comments here.

I think the notability concerns are real, and greater than I’d originally thought. Pablo has lots of experience as a Wikipedia editor and I don’t, so I’ll defer to him. And it does seem quite telling that Pablo originally tried the Wikipedia approach for a “few months and felt pretty disappointed with the outcome.” 

That said, I’m still pretty sure there’s a lot of low-hanging fruit in terms of content that’s notable, EA-relevant, and not on Wikipedia. “Longtermism” is a good example. I also suspect (though more experienced Wikipedia people should weigh in) that if we made good progress on the low-hanging fruit, terms like “hinge of history” and “patient philanthropy” would be perceived as considerably more notable than they are now. (FWIW, I also think that having tags for things like “hinge of history” is a perfectly reasonable Minimum Viable Product alternative to a dedicated Wiki.)

To clarify my position, I do think a dedicated EA Wiki would be extremely valuable. But I think there’s a significant chance that a dedicated EA Wiki won’t be completed and/or maintained. That’s what happened to multiple previous efforts to build an EA Wiki, so that’s my baseline assumption unless I see a plan that has obviously thought very long and hard about sustainability. I certainly don’t get that impression about this plan, given Aaron’s comment about what it would take to keep the site running:

> I don't have a great estimate for how much volunteer time we'd need to keep things running, but I'd expect the bare minimum to be less time than Pablo and I are putting in, such that further volunteer contributions are a nice addition rather than an existential necessity. If we were volunteer-only... maybe 15-20 hours per month? That's enough time for a few dozen minor edits plus a couple of substantive new articles.

I’m very confident that estimate falls well short of the time required to maintain a dedicated Wiki. While I don’t have experience as a Wikipedia editor, I have quite a bit of experience with a previous employer’s internal Wiki. It was immensely valuable. It was also immensely difficult to develop and maintain. There were countless emails and meetings on the theme “we need to clean up the Wiki (and this time we really mean it!)”, and in my experience it takes a combination of that and making the Wiki part of people’s job responsibilities/evaluations to maintain something useful. It’s amazing how quickly information gets stale, and it gets harder to keep things updated as you add more entries to your Wiki. If you build an EA Wiki that gets to the level of having a page for “Russia” (to use Aaron’s example), you’re going to need a lot more volunteer (and/or paid staff) time than a few days of someone’s time a month.

I also get the sense that you’re underestimating the financial and opportunity costs of having CEA and LW developers responsible for maintaining/adding functionality, based on this comment:

> CEA and LessWrong both have developers available to add features to the Forum's Wiki; we can be more flexible in adapting to things that are useful for our readers than Wikipedia is (on the other hand, Wikipedia has a much bigger team, and may add features we can't replicate at some point, so this feels like a toss-up).

Wikipedia already has more features (including features that would be valuable for an EA Wiki like translation) and a much bigger team (that doesn’t have competing priorities and doesn’t cost CEA anything), so it seems to me like CEA/LW will be constantly playing catchup. And any time they spend adding new features or even just maintaining a dedicated Wiki is time they won’t be able to work on other valuable projects.

My overarching concern is that you’re seriously underestimating the ongoing costs of this project, which will basically continue in perpetuity and increase over time. This has been the issue that sank previous attempts at an EA Wiki, and honestly it’s a pretty big red flag that you “don't have a great estimate for how much volunteer time we'd need to keep things running.” 

I’d urge you to do some more research into what the costs will look like over time (i.e. talk to people involved in previous EA Wiki attempts and people who have lots of experience with dedicated Wikis) and to think about “all in” costs as much as possible (for example, you’ll want to include the ongoing cost of finding, training, and overseeing volunteers and account for volunteer turnover). I would really love to see a dedicated EA Wiki get built and maintained, I just think that if you undertake this project you need to have a realistic picture of the ongoing costs you’ll be committing to.

Comment by AnonymousEAForumAccount on Our plans for hosting an EA wiki on the Forum · 2021-03-06T00:15:33.544Z · EA · GW

My guess is that’s ~90% feature, 10% bug. I think most of the value of an EA Wiki would come from content that’s both EA relevant and notable by Wikipedia’s standards, and that the EA Wiki content most likely to go stale would be that which didn’t meet external notability standards.

More importantly, if you want to get EA ideas into the mainstream, at some point you're going to have to convince people outside EA that those ideas are notable.  Wikipedia seems as good a place to do so as any, since it has established procedures for assessing new content and the payoff for success is getting EA ideas included in the world's most accessible repository of knowledge.

Comment by AnonymousEAForumAccount on Apply to EA Funds now · 2021-03-05T23:00:16.066Z · EA · GW

Thanks for the update, Harri. I'd suggest putting this info on the main CBG page so applicants have an up-to-date picture.

Comment by AnonymousEAForumAccount on Apply to EA Funds now · 2021-03-05T22:58:40.907Z · EA · GW

I didn’t downvote your comment, though I am disappointed you won’t be considering applications this cycle. I hope that if CEA does choose to restrict CBG applications going forward (which seems to be under consideration, per Harri), the EAIF will fill the gap. FWIW, I’d like to see EAIF funding this space even if CEA does open up applications, as I’d value diversifying funder perspectives more than any comparative advantage CEA might have.

Comment by AnonymousEAForumAccount on Our plans for hosting an EA wiki on the Forum · 2021-03-05T18:02:51.464Z · EA · GW
  1. Why can’t the existing content from EA Concepts be used to seed the new Wiki?
  2. Are you planning to use prizes or other incentives after the “festival” is over? If not, how do you plan to handle the ongoing (and presumably increasing) maintenance burden? Do you have a (very rough) estimate for how much volunteer time will be required once the Wiki is up and running?
  3. Why is a dedicated EA Wiki better than adding EA content/perspectives to Wikipedia? I think using the main Wikipedia would have numerous advantages: easier to make incremental progress, seen by more people, more contextualized, many more languages, forces EA to interact with opposing ideas, larger volunteer pool, etc. 
Comment by AnonymousEAForumAccount on A ranked list of all EA-relevant (audio)books I've read · 2021-02-22T20:47:36.851Z · EA · GW

Thanks, Michael! And I should note that, FWIW, I think my observation is more of a commentary on the "EA canon" than on your list per se.

Comment by AnonymousEAForumAccount on Apply to EA Funds now · 2021-02-19T14:33:47.249Z · EA · GW

Thanks for investigating Jonas!

Comment by AnonymousEAForumAccount on Apply to EA Funds now · 2021-02-18T17:03:56.291Z · EA · GW

> Please note that if your project relates to community building for local, regional, or national groups, you should apply to CEA’s Community Building Grants (CBG) programme.


CEA’s Community Building Grants page currently says applications are closed. When the closing was announced (August 2020), applications were expected to reopen in January. Do you know when applications are now expected to reopen? If it will be a long time and/or if CEA will only fund a narrow set of groups through CBG (which sounds like it may be the case), would the fund managers reconsider accepting applications from groups that don't have access to CBG?

Comment by AnonymousEAForumAccount on A ranked list of all EA-relevant (audio)books I've read · 2021-02-18T16:35:20.135Z · EA · GW

Not only are all the authors male and WEIRD, they're also all white-presenting. 

Comment by AnonymousEAForumAccount on CEA's Plans for 2021 · 2020-12-15T15:54:26.686Z · EA · GW

Thanks for the explanations Max!

Comment by AnonymousEAForumAccount on CEA's Plans for 2021 · 2020-12-11T20:12:50.484Z · EA · GW

This is super helpful, thank you! I feel like I’ve got a much better understanding of your goals now. It really cleared things up to learn which of your multiple goals you're prioritizing most, as well as the precise targets you have for them (since you have a specific recruitment goal, it might be worth editing the OP to add that).

I have two followup questions about the recruitment goal.

  1. How did you set your target of recruiting 125 people? That’s much lower than I would have guessed based on other recruitment efforts (GWWC has run a two-month pledge drive that produced three times as many pledges, plus a bunch of people signing up for Try Giving). And with $2.5 million budgeted for recruitment, the implied $20,000 per recruit seems quite high. I feel like I might be misunderstanding what you mean by "following a cohort of students who attended an introductory fellowship, our introductory event, or the EA Student Summit in 2020" (discussed in the second bullet point).
  2. The recruitment section discusses a “plan to put additional effort into providing mentorship and opportunities for 1:1 connections for group members from demographic groups underrepresented in EA.” Do you have any specific goals for these efforts? For example, I could imagine having a goal that the cohort you recruit be more diverse than the current EA population along certain dimensions. If you don’t have specific goals, what do you plan to look at to know whether your efforts are having the desired effect?
Comment by AnonymousEAForumAccount on CEA's Plans for 2021 · 2020-12-11T14:39:39.434Z · EA · GW

CEA’s Values document (thank you for sharing this) emphasizes the importance of “specific, focused goals.” It’s helpful to see the specific goals that specific teams have, but what do you see as the most important specific goals for CEA as an organization in 2021? I feel like this writeup gives me a sense of your plans for the year, but not the well-defined criteria you currently expect to use at the end of 2021 to judge whether the year was a success.

Comment by AnonymousEAForumAccount on Long-Term Future Fund: Ask Us Anything! · 2020-12-11T14:10:46.979Z · EA · GW

Thanks, Jonas, glad to hear there are some related improvements in the works. For whatever it’s worth, here’s an example of messaging that I think accurately captures what the fund has done, what it’s likely to do in the near term, and what it would ideally like to do:

> The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks or promote the adoption of longtermist thinking. While many grants so far have prioritized projects addressing risks posed by artificial intelligence (and the grantmakers expect to continue this at least in the short term), the Fund is open to funding, and welcomes applications from, a broader range of activities related to the long-term future.

Comment by AnonymousEAForumAccount on Long-Term Future Fund: Ask Us Anything! · 2020-12-10T15:41:41.607Z · EA · GW

Which of these two sentences, both from the fund page, do you think describes the fund more accurately?

  1. The Long-Term Future Fund aims to positively influence the long-term trajectory of civilization by making grants that address global catastrophic risks, especially potential risks from advanced artificial intelligence and pandemics. (First sentence of fund page.)
  2. Grants so far have prioritized projects addressing risks posed by artificial intelligence, and the grantmakers expect to continue this at least in the short term. (Located 1500 words into fund page.)

I'd say 2 is clearly more accurate, and I think the feedback you've received about donors being surprised at how many AI grants were made suggests I'm not alone.

Comment by AnonymousEAForumAccount on Long-Term Future Fund: Ask Us Anything! · 2020-12-10T15:35:53.836Z · EA · GW

Good point! I'd say ideally the messaging should describe both forward- and backward-looking donations and, if they differ, why. I don't think this needs to be particularly lengthy; a few sentences could do it. 

Comment by AnonymousEAForumAccount on Long-Term Future Fund: Ask Us Anything! · 2020-12-10T15:30:12.999Z · EA · GW

> I personally think that's quite explicit about the focus of the LTFF, and am not sure how to improve it further. Perhaps you think we shouldn't mention pandemics in that sentence? Perhaps you think "especially" is not strong enough?

I don’t think it’s appropriate to discuss pandemics in that first sentence. You’re saying the fund makes grants that “especially” address pandemics, and that doesn’t seem accurate. I looked at your spreadsheet (thank you!) and tried to do a quick classification. As best I can tell, AI has gotten over half the money the LTFF has granted, ~19x the amount granted to pandemics (5 grants for $114,000). Forecasting projects have received 2.5x as much money as pandemics, and rationality training has received >4x as much money. So historically, pandemics aren’t even that high among non-AI priorities. 

If pandemics will be on equal footing with AI going forward, then that first sentence would be okay. But if that’s the plan, why is the management team skillset so heavily tilted toward AI?

> An important reason why we don't make more grants to prevent pandemics is that we only get few applications in that area. The page serves a dual purpose: it informs both applicants and donors. Emphasizing pandemics less could be good for donor transparency, but might further reduce the number of biorisk-related applications we receive. As Adam mentions here, he’s equally excited about AI safety and biosecurity at the margins, and I personally mostly agree with him on this.

I’m glad there’s interest in funding more biosecurity work going forward. I’m pretty skeptical that relying on applications is an effective way to source biosecurity proposals though, since relatively few EAs work in that area (at least compared to AI) and big biosecurity funding opportunities (like Open Phil grantees Johns Hopkins Center for Health Security and Blue Ribbon Study Panel on Biodefense) probably aren’t going to be applying for LTFF grants. 

Regarding the page’s dual purpose, I’d say informing donors is much more important than informing applicants: it’s a bad look to misinform people who are investing money based on your information. 

> We prioritize AI roughly for the reasons that have been elaborated on at length by others in the EA community (see, e.g., Open Phil's report), plus additional considerations regarding our comparative advantage. I agree it would be good to provide more transparency regarding high-level prioritization decisions; I personally would find it a good idea if each Fund communicated its overall strategy for the next two years, though this takes a lot of time. I hope we will have the resources to do this sometime soon.

There’s been plenty of discussion (including that Open Phil report) on why AI is a priority, but there’s been very little explicit discussion of why AI should be prioritized relative to other causes like biosecurity. 

Open Phil prioritizes both AI and biosecurity. For every dollar Open Phil has spent on biosecurity, it’s spent ~$1.50 on AI. If the LTFF had a similar proportion, I’d say the fund page’s messaging would be fine. But for every dollar LTFF has spent on biosecurity, it’s spent ~$19 on AI. That degree of concentration warrants an explicit explanation, and shouldn’t be obscured by the fund’s messaging.

Comment by AnonymousEAForumAccount on Long-Term Future Fund: Ask Us Anything! · 2020-12-09T13:25:46.660Z · EA · GW

> Historically I think the LTFF's biggest issue has been insufficiently clear messaging, especially for new donors. For example, we received feedback from numerous donors in our recent survey that they were disappointed we weren't funding interventions on climate change. We've received similar feedback from donors surprised by the number of AI-related grants we make. Regardless of whether or not the fund should change the balance of cause areas we fund, it's important that donors have clear expectations regarding how their money will be used.
>
> We've edited the fund page to make our focus areas more explicit


I agree unclear messaging has been a big problem for the LTFF, and I’m glad to see the EA Funds team being responsive to feedback around this. However, the updated messaging on the fund page still looks extremely unclear and I’m surprised you think it will clear up the misunderstandings donors have.

It would probably clear up most of the confusion if donors saw the clear articulation of the LTFF’s historical and forward-looking priorities that is already on the fund page (emphasis added): 

“While the Long-Term Future Fund is open to funding organizations that seek to reduce any type of global catastrophic risk — including risks from extreme climate change, nuclear war, and pandemics — grants so far have prioritized projects addressing risks posed by artificial intelligence, and the grantmakers expect to continue this at least in the short term.” 

The problem is that this text is buried in the 6th subsection of the 6th section of the page. So people have to read through ~1,500 words, the equivalent of three single-spaced typed pages, to get an accurate description of how the fund is managed. This information should be in the first paragraph (and I believe that was the case at one point).

Compounding this problem, aside from that one sentence the fund page (even after it has been edited for clarity) makes it sound like AI and pandemics are prioritized similarly, and not that far above other LT cause areas. I believe the LTFF has only made a few grants related to pandemics, and would guess that AI has received at least 10 times as much funding. (Aside: it’s frustrating that there’s not an easy way to see all grants categorized in a spreadsheet so that I could pull the actual numbers without going through each grant report and hand entering and classifying each grant.)

In addition to clearly communicating that the fund prioritizes AI, I would like to see the fund page (and other communications) explain why that’s the case. What are the main arguments informing the decision? Did the fund managers decide this? Did whoever selected the fund managers (almost all of whom have AI backgrounds) decide this? Under what conditions would the LTFF team expect this prioritization to change? The LTFF has done a fantastic job providing transparency into the rationale behind specific grants, and I hope going forward there will be similar transparency around higher-level prioritization decisions.

Comment by AnonymousEAForumAccount on Announcing Effective Altruism Ventures · 2020-07-06T16:38:24.387Z · EA · GW

I’m glad someone is asking what happened with EA Ventures (EAV): it’s an important question that hasn’t yet received a satisfactory answer.

When EAV was discontinued, numerous people asked for a post-mortem of some type (e.g. here, here, and here) to help capture learning opportunities. But nothing formal was ever published. The “Celebrating Failed Projects” panel eventually shared a few lessons, but someone would need to watch an almost hour-long video (much of which does not relate to EAV) to see them all. And the lessons seem trivial (“if you’re doing a project which gives money to people, you need to have that money in your bank account first”) about as often as they seem insightful (“Finding excellent entrepreneurs is much, much harder than I thought it was going to be”).

If a proper post-mortem with community input had been conducted, I’m confident many other lessons would have emerged*, including one prominent one: “Don’t over-promise and under-deliver.” This has obvious relevance to a grantmaking project that launched before it had lined up funds to grant (as far as I know, EAV only made two grants: the one Jamie mentioned and a $19k grant to EA Policy Analytics). But it also relates to more mundane aspects of EAV: my understanding is that applicants were routinely given overly optimistic expectations about how quickly the process would move.

The missed opportunity to learn these lessons went on to impact other projects. As just one example, EA Grants was described as “the spiritual successor to EA Ventures”. And it did reflect the narrow lesson from that project, as it lined up money before soliciting grant applications. However, the big lesson wasn’t learned, and EA Grants consistently overpromised and under-delivered throughout its entire history. EA Grants announced plans to distribute millions of dollars more than it actually granted, repeatedly announced unrealistic and unmet plans to accept open applications, explicitly described educational grants as eligible when they were not, granted money to a very narrow set of projects, and (despite its public portrayal as a project capable of distributing millions of dollars annually) did not maintain an “appropriate operational infrastructure and processes [resulting] in some grant payments taking longer than expected [which in some cases] contributed to difficult financial or career situations for recipients.”

EAV and EA Grants have both been shuttered, and there’s a new management team in place at CEA. So if I had a sense that the new management had internalized the lessons from these projects, I wouldn’t bring any of this up. But CEA’s recently updated “Mistakes” page doesn’t mention over-promising/under-delivering, which makes me worry that’s not the case. That’s especially troubling because the community has repeatedly highlighted this issue: when CEA synthesized community feedback it had received, the top problem reported was “respondents mentioned several times that CEA ‘overpromised and under delivered’”. The most upvoted comment on that post? It was Peter Hurford describing that specific dynamic as “my key frustration with CEA over the past many years.”

To be fair, the “Mistakes” page discusses problems that are related to over-promising/under-delivering, such as acknowledging that “running too many projects from 2016-present” has been an “underlying problem.” But it’s possible to run too many projects without overpromising, and it’s possible to be narrowly focused on one or a few projects while still overpromising and under-delivering. “Running too many projects” explains why EA Grants had little or no dedicated staff in early 2018; it doesn’t explain why CEA repeatedly committed to scaling the project during that period despite not having the staff in place to execute. I agree CEA has had a problem of running too many projects, but I see the consistent over-promising/under-delivering dynamic as far more problematic. I hope that CEA will increasingly recognize and incorporate this recurring feedback from the EA community. And I hope that going forward, CEA will prioritize thorough post-mortems (that include stakeholder input) on completed projects, so that the entire community can learn as much as possible from them.

* Simple example: with the benefit of hindsight, it seems likely that EAV significantly overinvested in developing a complex evaluation model before the project launched, and that EAV’s staff may have had an inflated sense of their own expertise. From the EAV website at its launch:

“We merge expert judgment with statistical models of project success. We used our expertise and the expertise of our advisers to determine a set of variables that is likely to be positively correlated with project success. We then utilize a multi-criteria decision analysis framework which provides context-sensitive weightings to several predictive variables. Our framework adjusts the weighting of variables to fit the context of the projects and adjusts the importance of feedback from different evaluators to fit their expertise.”
Comment by AnonymousEAForumAccount on Racial Demographics at Longtermist Organizations · 2020-05-15T19:20:19.017Z · EA · GW

As another resource on effective D&I practices, HBR just published a new piece on “Diversity and Inclusion Efforts that Really Work.” It summarizes a detailed report on this topic, which “offers concrete, research-based evidence about strategies that are effective for reducing discrimination and bias and increasing diversity within workplace organizations [and] is intended to provide practical strategies for managers, human resources professionals, and employees who are interested in making their workplaces more inclusive and equitable.”

Comment by AnonymousEAForumAccount on 2019 Ethnic Diversity Community Survey · 2020-05-13T21:16:57.178Z · EA · GW

Very interesting to see this data, thanks so much for collecting it and writing it up! I hope future versions of the EA Survey will adopt some of your questions, to get a broader perspective.

Comment by AnonymousEAForumAccount on Racial Demographics at Longtermist Organizations · 2020-05-07T21:28:00.127Z · EA · GW

Thanks Ben! That’s an interesting reference point. I don’t think there are any perfect reference points, so it’s helpful to see a variety of them.

By way of comparison, 1.8% of my sample was black (0.7%) or Hispanic (1.1%).

Comment by AnonymousEAForumAccount on Racial Demographics at Longtermist Organizations · 2020-05-07T21:26:32.553Z · EA · GW

I don’t think placing no value on diversity is a PR risk simply because it’s a view held by an ideological minority. Few people, either in the general population or the EA community, think mental health is the top global priority. But I don’t think EA incurs any PR risk from community members who prioritize this cause. And I also believe there are numerous ways EA could add different academic backgrounds, worldviews, etc. that wouldn’t entail any material PR risk.

I want to be very explicit that I don’t think EA should seek to suppress ideas simply because they are an extreme view and/or carry PR risks (which is not to say those risks don’t exist, or that EAs should pretend they don’t exist). That’s one of the reasons why I haven’t been downvoting any comments in this thread even if I strongly disagree with them: I think it’s valuable for people to be able to express a wide range of views without discouragement.