How many EAs failed in high risk, high reward projects?

post by mariushobbhahn · 2022-04-26T12:31:26.047Z · EA · GW · 15 comments

This is a question post.

The ideas of high risk, high reward projects, value in the tails, etc. are quite common EA positions now. People are usually reminded that they have a low probability of success and that they should expect to fail most of the time. However, most people I know/have heard of who started ambitious EA projects are doing quite well. Examples would be SBF, Anthropic, Alvea, and many more. 

My question, therefore, is: Is the risk of failure lower than we expected, or do I just not know the failures? Do I just know the selection of people who succeeded? Is it too early to tell if a project truly succeeded? If so, what are concrete examples of EAs or EA orgs not meeting high expectations despite trying really hard? Is it possible that we just underestimate how successful someone with an EA mindset and the right support can be when they try really hard?


answer by rgb · 2022-04-26T14:07:19.045Z · EA(p) · GW(p)

Some past examples that come to mind. Kudos to all of the people mentioned for trying ambitious things and writing up the retrospectives:

  1. Not strictly speaking "EA", but an early effort from folks in the rationality community started [LW · GW] an evidence-based medicine organization called MetaMed. There are two post-mortems:

     • Zvi Mowshowitz's post-mortem:

     • Sarah Constantin's post-mortem:

  2. Michael Plant has a post-mortem of his mental health app, Hippo [EA · GW]

  3. Looking around, I also found this list [EA(p) · GW(p)]

Some other posts are the Good Technology Project's postmortem [EA · GW] and a postmortem of a mental health app [EA · GW] by Michael Plant. Organisations also discuss their learnings in retrospectives, like Fish Welfare Initiative [EA · GW], or in posts announcing decisions to shut down, like Students for High Impact Charities [EA · GW]. In the Rationalist community, there was the Arbital Postmortem [LW · GW]. You can see more examples under the Forum's postmortems and retrospectives [? · GW] tag, and examples from the LessWrong community under their analogous [? · GW] tag.

answer by sergia · 2022-04-26T18:15:59.161Z · EA(p) · GW(p)

I have failed to do any meaningful work on recommender systems alignment. We launched an association, and YouTube acknowledged the problem with disinformation when we talked to them privately (for example, COVID disinformation coming from Russia), but said they would not do anything, with or without us. We worked alone, and I was the single developer. I burned out to the point of being angry and alienating people around me (I understand what Timnit Gebru went through, because Russia, my home country, is an aggressor country, and there is likewise a war in Tigray, in Ethiopia, her home country). I sent many angry/confusing emails that made perfect sense to me at the time... I went through homelessness and unemployment after internships at CHAI Berkeley and Google and a degree from a prestigious European university. I felt really bad for not being able to explain the importance of the problem and stop Putin before it was too late... Our colleagues' papers on the topic were silenced by their employers. Now I'm slowly recovering and feel I want to write about all that: some sort of guide / personal experience on aligning real systems and organizations, and on how real change comes really, really hard.

comment by Charles He · 2022-04-26T21:02:16.832Z · EA(p) · GW(p)

Thank you for sharing. This seems like an incredibly important and valuable effort and story.

Another issue is the unemployment and homelessness. This outcome doesn't seem acceptable for people with the motivations, efforts, and experiences described in your account.

comment by mariushobbhahn · 2022-04-26T18:28:39.448Z · EA(p) · GW(p)

Thanks for sharing.
I think writing up some of these experiences might be really valuable, both for your own closure and for others to learn from. I can understand, though, that this is a very tough ask in your current position.

answer by Justis · 2022-04-26T14:45:26.806Z · EA(p) · GW(p)

I've failed a few times. My social instincts tried to get me not to post this comment, in case it makes it more likely that I fail again, and failing hurts. I suspect there's really strong survivorship bias here.

answer by Jonas Vollmer · 2022-05-14T14:05:59.468Z · EA(p) · GW(p)

I think some of the worst failures are mediocre projects that go sort-of okay and therefore continue to eat up talent for a much longer time than needed; cases where ambitious projects fail to "fail fast". It takes a lot of judgment ability and self-honesty to tell that it's a failure relative to what one could have worked on otherwise.

One example is Raising for Effective Giving, a poker fundraising project that I helped found and run. It showed a lot of promise in terms of $ raised per $ spent over the years it was operating, and raised $25m for EA charities. But it looks a lot less high-impact once you draw comparisons to GWWC and Longview, or once you account for the small market size of the poker industry, the lack of scalability, the expected future funding inflows into EA, and compensation from top earning-to-give opportunities. $25 million is really not much compared to the billions others raised through billionaire fundraising and entrepreneurship.

I personally failed to admit to myself that the project was showing mediocre rather than amazing results, and only my successor (Stefan) discontinued the project, which in hindsight seems like the correct judgment call.

answer by Randomized, Controlled (RandomizeControlled) · 2022-05-01T18:46:16.191Z · EA(p) · GW(p)

In 2017 I quit my job and spent a significant amount of time self-studying ML, roughly following a curriculum that Dario Amodei laid out in an 80k podcast. I ran this plan past a few different people, including in an 80k career advising session, but after a year I didn't get a job offer from any of the AI safety orgs I'd applied to (Ought, OpenAI, maybe a couple of others) and was quite burned out and demotivated. I didn't even feel up to interviewing for an ML-focused job. Instead I went back to web development (with a startup that did suggest I'd eventually be able to do some ML work, but that job ultimately wasn't a great fit, and I moved on to my current role... as a senior web dev).

I think there are a bunch of lessons I learned from this exercise, but overall I consider it one of my failures. 

answer by atlasunshrugged · 2022-04-26T13:12:19.302Z · EA(p) · GW(p)

I can say that I failed at what I would consider a high risk, high reward project. I was a member of a Charity Entrepreneurship cohort and worked on a nonprofit idea focused on advocacy for a Pigouvian tax, but unfortunately couldn't really get things off the ground for a few reasons. That said, I still highly recommend trying something ambitious. That failure taught me a lot and got me more into the policy realm, which helped pave the way for my current work doing policy advisory in Congress, which I think is relatively high impact.

answer by ChanaMessinger · 2022-04-26T14:03:31.197Z · EA(p) · GW(p)

And presumably, if no one has failed, then people aren't trying things that are ambitious enough. Kat Woods, I think, can speak to some incubated charities from Charity Entrepreneurship that didn't take off.

comment by KarolinaSarek · 2022-04-26T18:26:45.131Z · EA(p) · GW(p)

CE has been incubating around 5 charities per year (with plans to scale in the future). So far the success rate is as follows:

  • 2/5 are estimated to reach or exceed the cost-effectiveness of the strongest charities in their fields
  • 2/5 make progress, but remain small-scale or have unclear cost-effectiveness
  • 1/5 shut down in their first 24 months without having a significant impact

I spoke about it briefly in this post [EA · GW] and would love to find the time to elaborate more. 


comment by Vilfredo's Ghost (Bluefalcon) · 2022-05-15T21:10:14.458Z · EA(p) · GW(p)

How many orgs has Charity Entrepreneurship incubated and what's the success rate? 


Comments sorted by top scores.

comment by James Ozden (JamesOz) · 2022-04-26T13:33:47.588Z · EA(p) · GW(p)

I think it would be great to have some directory of attempted but failed projects. Often I've thought "Oh, I think X is a cool idea, but I bet someone more qualified has already tried it, and if it doesn't exist publicly then it must have failed" — but I don't think this is often true (also see this shortform [EA(p) · GW(p)] about the failure of the efficient market hypothesis for EA projects). Having a list of projects that were attempted but shut down (for whatever reason) might encourage people to start more projects, as we could really see how little of the idea space has been explored in practice.


There are a few helpful write-ups (e.g. shutting down the longtermist incubator [EA · GW]), but in addition to detailed post-mortems, I would be keen to see a low-effort directory (Airtable or even Google Sheets?) of attempted projects: who tried, contact details (with permission), why it stopped, etc. If people are interested in this, I can make a preliminary spreadsheet that we can start populating, but other suggestions are of course welcome.

comment by Vilfredo's Ghost (Bluefalcon) · 2022-05-15T21:12:33.040Z · EA(p) · GW(p)

I would love to see this!

comment by Guy Raveh · 2022-04-26T20:04:51.806Z · EA(p) · GW(p)

Beyond asking about projects in a vague, general sense, it could also be interesting to compare the probabilities of success that grantmakers in EA assign to their grantees' projects with the fraction of those projects that actually succeed.