Rationality as an EA Cause Area

post by casebash · 2018-11-13T14:48:25.011Z · score: 22 (26 votes) · EA · GW · 27 comments

The Rationality and Effective Altruism communities have experienced wildly different trajectories in recent years. While EA has meetups at most major universities, is backed by a multi-billion dollar foundation, has proliferated organisations to the point of confusion and now even has its own media outlet; the rationality community had to struggle just to resurrect Less Wrong. LW finally seems to be on a positive trajectory, but the rationality community is still much less than what it could have been. Its ideas have barely penetrated academia, there isn't a rationality conference and there isn't even an organisation dedicated to growing the movement.

A large part of the reason is that the kinds of people who would actually do something and run these kinds of projects have been drawn into either EA or AI risk. While these communities benefit from this talent, I suspect that this effect has occurred to the point of killing the goose that lays the golden eggs (this is analogous to concerns about immigration to the Bay Area hollowing out local communities [LW · GW]).

I find this concerning for the following reasons:

The failure of LW to fulfil its potential has made these gains much less than what they could have been. I suspect that, as per the Pareto Principle, a small organisation promoting rationality might be far better than no organisation trying to promote it (CFAR focuses on individuals, not broader groups within society or society as a whole). At the very least, a small-scale experiment seems worthwhile. Even though there is a high chance that the intervention would have no discernible effect, as per Owen's Prospecting for Gold [? · GW] talk, the impacts in the tail could be extremely large, so the gamble seems worthwhile. I don't know exactly what such an organisation should do, but I imagine that there are a number of different approaches it could experiment with, at least some of which might plausibly be effective.

I do see a few potential risks with this project:

Nonetheless, funding wouldn't have to be committed until it could be confirmed that suitable parties were interested and the potential gains seem like they could justify the opportunity cost. In terms of the second point, I suspect that far more good actors will be created than bad actors, such that the net effect is positive.

This post was written with the support of the EA Hotel


Comments sorted by top scores.

comment by Ben Pace · 2018-11-13T23:17:07.890Z · score: 16 (8 votes) · EA(p) · GW(p)
there isn't even an organisation dedicated to growing the movement

Things that are not movements:

  • Academic physics
  • Successful startups
  • The rationality community

They all need to grow to some extent, but they have a particular goal that is not generic 'growth'. Most 'movements' are primarily looking for something like political power, and I think that's a pretty bad goal to optimise for. It's the perennial offer to all communities that scale: "try to grab political power". I'm quite happy to continue being for something other than that.

Regarding the size of the rationality and EA communities right now, this doesn't really seem to me like a key metric? A more important variable is whether you have infrastructure that sustains quality at the scale the community is at.

  • The standard YC advice says the best companies stay small for a long time. An example of Paul Graham saying it is here, search "I may be an extremist, but I think hiring people is the worst thing a company can do."
  • There are many startups that have 500 million dollars and 100 employees more than your startup, but don't actually have product-market fit, and are going to crash next year. Whereas you might work for 5-10 years and then have a product that can scale to several billions of dollars of value. Again, scaling right now will seem shiny and appealing, but it's something you often should fight against.
  • Regarding growth in the rationality community, I think a scientific field is a useful analogue. And if I told you I'd started some new field and in the first 20 years I'd gotten a research group in every university, is this necessarily good? Am I machine learning? Am I bioethics? I bet all the fields that hit the worst of the replication crisis have experienced fast growth at some point in the past 50 years. Regardless of intentions, the infrastructure matters, and it's not hard to simply make the world worse.

Other thoughts: I agree that the rationality project has resulted in a number of top people working on AI x-risk, effective altruism, and related projects, and that the ideas produced a lot of the epistemic bedrock for the community to be successful at noticing important and new ideas. I am also sad there hasn't been better internal infrastructure built in the past few years. As Oli Habryka said downthread (amongst some other important points), the org I work at that built the new LessWrong (and AI Alignment Forum and EA Forum, which is evidence for your 'rationalists work on AI and EA claim' ;) ) is primarily trying to build community infrastructure.

Meta thoughts: I really liked the OP, it concisely brought up a relevant proposal and placed it clearly in the EA frame (pareto principle, heavy tailed outcomes, etc).

comment by casebash · 2018-11-14T13:17:58.564Z · score: 1 (1 votes) · EA(p) · GW(p)

The size of the rationality community hasn't been limited so much by quality concerns, as by lack of effort expended in growth.

comment by Ben Pace · 2018-11-14T14:23:09.618Z · score: 4 (3 votes) · EA(p) · GW(p)

I think it is easy to grow too early, and I think that many of the naive ways of putting effort into growth would be net negative compared to the counterfactual (somewhat analogous to a company that quickly makes 1 million when it might've made 1 billion).

Focusing on actually making more progress with the existing people, by building more tools for them to coordinate and collaborate, seems to me the current marginal best use of resources for the community.

(I agree that effort should be spent improving the community, I just think 'size' isn't the right dimension to improve.)

Added: I suppose I should link back to my own post on the costs of coordinating at scale [LW · GW].

comment by Stefan_Schubert · 2018-11-13T19:03:26.838Z · score: 14 (7 votes) · EA(p) · GW(p)

How would you define a rationality project? I am working on psychological impediments to effective giving and how they can be overcome with Lucius Caviola at Oxford. I guess that can also be seen as a rationality project, though I am not quite sure how you would define that notion.

Previously, I ran several other projects which could be seen as rationality projects - I started a network for evidence-based policy [EA · GW], created a political bias test [EA · GW], and did work on argument-checking [LW · GW].

I am generally interested in doing more work in this space. In particular, I would be interested in doing work that relates to academic psychology and philosophy, which is rigorous, and which has a reasonably clear path to impact.

I think one sort of diffuse "project" that one can work on alongside one's main project is maintaining and improving the EA community's epistemics, e.g., by arguing well and in good faith oneself, and by rewarding others who do that as well. I do agree that good epistemics are vital for the EA community.

comment by Aaron Gertler (aarongertler) · 2018-11-13T20:03:13.623Z · score: 12 (6 votes) · EA(p) · GW(p)

Stefan linked to a Forum piece about a tool built by Clearer Thinking, but I wanted to use this post to link that organization specifically. They demonstrate one model for what a "rationality advocacy" organization could do. Julia Galef's Update Project is another, very different model (working closely with a few groups of high-impact people, rather than building tools for a public audience).

Given that the Update Project is semi-sponsored by the Open Philanthropy Project, and that the Open Philanthropy Project has also made grants to rationality-aligned orgs like CFAR, SPARC, and even the Center for Election Science (which I'd classify as an organization working to improve institutional decision-making, if the institution is "democracy"), it seems like EA already has quite an investment in this area.

casebash (and other commenters): What kind of rationality organization would you like to see funded which either does not exist or exists but has little to no EA funding? Alternatively, what would a world look like that was slightly more rational in the ways you think are most important?

comment by casebash · 2018-11-14T13:16:09.050Z · score: 0 (2 votes) · EA(p) · GW(p)

I was referring specifically to growing the rationality community as a cause area.

comment by Pablo_Stafforini · 2018-11-14T16:55:58.718Z · score: 8 (7 votes) · EA(p) · GW(p)

Then I would suggest changing the title of the post. 'Rationality as a cause area' can mean many things besides 'growing the rationality community'.

Furthermore, some of the considerations you list in support of the claim that rationality is a promising cause area do not clearly support, and may even undermine, the claim that one should grow the rationality community. Your remarks about epistemic standards, in particular, suggest that one should approach growth very carefully, and that one may want to deprioritize growth in favour of other forms of community building.

comment by casebash · 2018-11-14T23:43:27.555Z · score: 3 (3 votes) · EA(p) · GW(p)

Replace "growing" the rationality community with "developing" the rationality community. But that's a good point. It is worthwhile keeping in mind that the two are separate. I imagine one of the first tasks of such a group would be figuring out what this actually means.

comment by Mati_Roy · 2018-11-13T21:05:38.542Z · score: 12 (5 votes) · EA(p) · GW(p)

I also feel similarly. Thanks for writing this.

Points I would add:

-This organisation could focus on supporting local LessWrong groups (which CFAR isn't doing).

-This organisation could focus on biases whose reduction would shift people in a better direction, rather than making them go in the same direction faster. For example, reducing scope insensitivity seems like a robust way to make people more altruistic, whereas improving people's ability to make Trigger-Action-Plans might simply accelerate the economy as a whole (which could be bad if you think that crunches are more likely than bangs and shrieks, as per Bostrom's terminology).

-The organisation might want to focus on theories with more evidence (ie. be less experimental than CFAR) to avoid spreading false memes that could be difficult to correct, as well as being careful about idea inoculations.

comment by marcus_gabler · 2019-02-23T23:00:29.048Z · score: 1 (1 votes) · EA(p) · GW(p)

I think the whole thing has to go way beyond biases and the like.

You have to know how to pick up folks and make them stick.

All that LW stuff, as true as it may be, is perfect to actually chase folks away.

Even the word "rationalism" (just like any other term ending in 'ism') has to be largely avoided, even if you are only aiming at innovators, let alone early adopters.

This marketing strategy is probably more critical than the content itself...

comment by Habryka · 2018-11-13T21:56:28.996Z · score: 6 (4 votes) · EA(p) · GW(p)
Its ideas have barely penetrated academia, there isn't a rationality conference and there isn't even an organisation dedicated to growing the movement.

I think you can think of the new LessWrong organization as doing roughly that (though I don't think the top priority should be growth, but rather building infrastructure to make sure the community can productively grow and be productive). We are currently focusing on the online community, but we also did some things to improve the meetup system, are starting to run more in-person events, and might run a conference in the next year (right now we have the Secular Solstice, which I actually think complements existing conferences like EA Global quite well, and does a good job at a lot of the things you would want a conference to achieve).

I agree that it's sad that there hasn't been an org focusing on this for the last few years.

On the note of whether the ideas of the rationality community have failed to penetrate academia, I think that's mostly false. I think the ideas have probably penetrated academia more than the basics of Effective Altruism have. In terms of web traffic and general intellectual influence among the intellectual elite, the sequences as well as HPMOR and Scott Alexander's writing have attracted significant attention and readership, and mostly continue doing so (as a Fermi estimate, I expect about 10x more people have read the LW sequences/Rationality: A-Z than have read Doing Good Better, and about 100x have read HPMOR). Obviously, I think we can do better, and do think there is a lot of value in distilling/developing core ideas in rationality further and helping them penetrate into academia and other intellectual hubs.

I do think that in terms of community-building, there has been a bunch of neglect, though I think overall in terms of active meetups and local communities, the rationality community is still pretty healthy. I do agree that on some dimensions there has been a decline, and would be excited about more people trying to put more resources into building the rationality community, and would be excited about collaborating and coordinating with them.

comment by Habryka · 2018-11-13T21:58:57.337Z · score: 4 (3 votes) · EA(p) · GW(p)

To give a bit of background in terms of funding, the new LessWrong org was initially funded by an EA-Grant, and is currently being funded by a grant from BERI, Nick Beckstead and Matt Wage. In general EA funders have been supportive for the project and I am glad for their support.

comment by casebash · 2018-11-14T13:15:05.633Z · score: 1 (1 votes) · EA(p) · GW(p)

"In terms of web-traffic and general intellectual influence among the intellectual elite, the sequences as well as HPMOR and Scott Alexander's writing have attracted significant attention and readership, and mostly continue doing so" - I was talking more about academia than the blogosphere. Here, only AI safety has had reasonable penetration. EA has had several heavyweights in philosophy, plus FHI for a while and also now GPI.

comment by Habryka · 2018-11-14T19:03:49.187Z · score: 5 (4 votes) · EA(p) · GW(p)

Whether you count FHI as rationality or EA is pretty ambiguous. I think memetically FHI is closer to the transhumanist community, and a lot of the ideas that FHI publishes about are ideas that were discussed on SL4 and LessWrong before FHI published them in a more proper format.

comment by Ikaxas · 2018-11-14T13:38:48.049Z · score: 5 (4 votes) · EA(p) · GW(p)

Scott Alexander has actually gotten academic citations, e.g. in Paul Bloom's book Against Empathy (sadly I don't remember which article of his Bloom cites), and I get the impression a fair few academics read him.

comment by Ben Pace · 2018-11-14T14:19:06.557Z · score: 2 (2 votes) · EA(p) · GW(p)

Bostrom has also cited him in his papers.

comment by G Gordon Worley III (gworley3) · 2018-11-13T19:18:33.981Z · score: 5 (3 votes) · EA(p) · GW(p)

Maybe an alternative way to look at this is, why is rationality not more a part of EA community building? Rationality as a project likely can't stand on its own because it's not trying to do anything; it's just a bunch of like-minded folks with a similar interest in improving their ability to apply epistemology. The cases where the rationality "project" has done well, like building up resources to address AI risk, were more like cases where the project needed rationality for an instrumental purpose and then built up LW and CFAR in the service of that project. Perhaps EA can more strongly include rationality in that role as part of what it considers essential for training/recruiting in EA and building a strong community that is able to do the things it wants to do. This wouldn't really mean rationality is a cause area, more an aspect of effective EA community building.

comment by marcus_gabler · 2019-02-23T16:54:49.565Z · score: 3 (3 votes) · EA(p) · GW(p)
the rationality community is still much less than what it could have been

I couldn't agree more.

I believe that rationality (incl. emotional intelligence etc.) is the key to a better version of mankind.

I expressed this in several LW posts / comments, eg.:



I am looking for people to assist me in creating an online step by step guide to

  • rationality
  • self reflection / empathy
  • emotional intelligence
  • brain debugging
  • reason vs. emotion
  • (low) self esteem

Such a guide should start from zero and should be somewhat easier to access than LW.

More details in above LW posts.

I have many ideas / concepts around such project and want to discuss them in some kind of workgroup or forum, whatever works best.

I will start a thread of my own about this here later, depending on feedback on this comment.

Thanks, Marcus.

comment by casebash · 2019-02-23T19:06:38.085Z · score: 1 (1 votes) · EA(p) · GW(p)

I would be surprised to see much activity on a comment on a three month old thread. If you want to pursue this, I'd suggest writing a new post. Good luck, I'd love to see someone pursuing this project!

comment by marcus_gabler · 2019-02-23T22:50:41.685Z · score: 1 (1 votes) · EA(p) · GW(p)

You can bet I will be pursuing this vision.

I only heard about LW / EA etc a few months ago.

I was VERY surprised no one has done it before. I basically only asked around to

(Now that I got a taste of the LW community I am a little less surprised, though... :-) )

The closest NGO I could find so far is InIn, but they still have a different focus.

And even this forum here was rather hidden...


Your response is the first/only ENCOURAGING one I got so far.

If you happen to remember anyone who was even only writing about this somewhere, let me know.

comment by casebash · 2019-02-24T09:10:18.961Z · score: 1 (1 votes) · EA(p) · GW(p)

Yeah, InIn was the main attempt at this. Gleb was able to get a large number of articles published in news sources, but at the cost of quality. And some people felt that this would make people perceive rationality negatively, as well as drawing in people from the wrong demographic. I think he was improving over time, but perhaps too slowly?

PS. Have you seen this? https://www.clearerthinking.org

comment by marcus_gabler · 2019-02-24T09:25:27.453Z · score: 1 (1 votes) · EA(p) · GW(p)

Haha! Bulls Eye!

It actually was around October that I found clearerthinking.org by googling reason vs. emotion. I friended Spencer Greenberg on FB and asked him if there was some movement/community around this.

He advised me to check out RATIONALISM, LW and EA.

Just check my above posts if you please, I hope I find the time to post a new version of RATIONALITY FOR THE MASSES here soon...

What is your background? (ie. why are you not like those LW folks?)

I mean: I am so relieved to get some positive feedback here, while LW only gave me ignorance and disqualification...

comment by Khorton · 2018-11-13T23:39:58.492Z · score: 3 (3 votes) · EA(p) · GW(p)

I think it's really easy to make a case that funding the rationality community is a good thing to do. It's much harder to make a case that it's better to fund the rationality community than competing organizations. I'm sympathetic to your concerns, but I'm surprised that the reaction to this post is so much less critical than other "new cause area" posts. What have I missed?

comment by Aaron Gertler (aarongertler) · 2018-11-14T02:48:17.245Z · score: 3 (3 votes) · EA(p) · GW(p)

I don't think you've missed anything in particular. But there is a difference between reaction to a post being "not critical" and being "enthusiastic".

My read on the comments so far is that people are generally "not critical" because the post makes few specific claims that could be proven wrong, but that people aren't very "enthusiastic" about the post itself; instead, people are using the comments to make their own suggestions on the original topic.

That said, it seems perfectly reasonable if the main result of a post is to kick off a discussion between people who have more information/more concrete suggestions!

comment by Aidan O'Gara · 2019-02-23T23:45:36.488Z · score: 2 (2 votes) · EA(p) · GW(p)

I agree that LW has been a big part of keeping EA epistemically strong, but I think most of that is selection rather than education. It's not that reading LW makes you much clearer-thinking or focused on truth, it's that only people who are that way to begin with decide to read LW, and they then get channeled to EA.

If that's true, it doesn't necessarily discredit rationality as an EA cause area, it just changes the mechanism and the focus: maybe the goal shouldn't be making everybody LW-rational, it should be finding the people that already fit the mold, hopefully teaching them some LW-rationality, and then channeling them to EA.

comment by marcus_gabler · 2019-02-23T23:35:01.682Z · score: 1 (1 votes) · EA(p) · GW(p)
perhaps this would make it easier for someone unaligned to develop an AGI which turns out poorly

Not the way I have figured out.

Again you seem to be too focussed on LW.

Of course, because there hardly is anything else out there.

But I started unbiasing in 1983 when most of those folks weren't even born yet.

It took me 30 years, but living rationality is a totally different thing than reading and writing about it!

Jeeez, can't wait to make this post...

comment by marcus_gabler · 2019-02-23T23:29:45.023Z · score: 1 (1 votes) · EA(p) · GW(p)
This project wouldn't succeed without buy-in from the LW community.

I don't think such LW will even be directly involved or of much support.

I want to buy-in / talk-in these guys:


I guess you have heard of Simon Sinek, Denzel Washington... :-)

This video has 14M views and it is neither well produced nor really streamlined or dedicated!

But it dwarfs LW or anything around it.