Rationality, EA and being a movement

post by casebash · 2019-06-22T05:22:42.623Z · score: 30 (21 votes) · EA · GW · 11 comments

This post is a quick write-up of a discussion I recently had with two members of the rationality community. For simplicity, I'll present them as holding a single viewpoint that merges both of their arguments. All parties seemed to agree that the long-term future is an overwhelming consideration, so apologies in advance to anyone with a different opinion. This is cross-posted to Less Wrong under a different title here.

In a recent discussion, I noted that the rationality community didn't have an organisation like CEA engaging in movement building, and suggested this might be at least part of why EA seemed to be much more successful than the rationality community. While the rationality community founded MIRI and CFAR, I pointed out that there were now so many EA-aligned organisations that it's impossible to keep track of them all. EA runs conferences that hundreds of people attend, with more on the waitlist, while LW doesn't even have a conference in its hometown. EA has groups at the most prominent universities, while LW has almost none. Further, EA now has its own university department at Oxford and the support of Open Phil, a multi-billion-dollar organisation. Admittedly, Scott Alexander grew out of the rationality community, but EA has 80,000 Hours. I also noted that EA had created a large number of people who wanted to become AI safety researchers; indeed, at some EA conferences it felt like half the attendees were interested in pursuing that path.

Based on this comparison, EA seems to have been far more successful. However, the other two suggested that appearances could be misleading and that it therefore wasn't so obvious that rationality should be a movement at all. In particular, they argued that most of the progress made so far in terms of AI safety didn't come from anything "mass-movement-y".

Part of their argument was that quality matters more than quantity for research problems like AI safety. In particular, they asked whether a small team of the most elite researchers would be more likely to succeed at revolutionising science or building a nuclear bomb than a much larger group of science enthusiasts.

My (partially articulated) position was that it was too early to expect too much. I argued that even though most EAs interested in AI were just enthusiasts, some percentage of this very large number would go on to become successful researchers. Further, I argued that we should expect this impact to be significantly positive unless there was good reason to believe that a large proportion of EAs would act in strongly net-negative ways.

The counterargument given was that I had underestimated the difficulty of usefully contributing to AI safety research, and that the percentage who could contribute would be much smaller than I anticipated. If this were the case, then engaging in more targeted outreach would be more useful than building up a mass movement.

I argued that more EAs had a chance of becoming highly skilled researchers than they thought: not just because EAs tended to be reasonably intelligent, but also because they tended to be much better than average at engaging in good-faith discussion, were more exposed to content around strategy and prioritisation, and benefited from network effects.

The first part of their response was to argue that by being a movement EA had ended up compromising on their commitment to truth, as follows:

i) EA's focus on having an impact entails growing the movement, which in turn entails protecting the reputation of EA and attempting to gain social status.

ii) This causes EA to prioritise building relationships with high-status people, such as offering them major speaking slots at EA conferences, even when they aren't particularly rigorous thinkers.

iii) It also causes EA to want to dissociate from low-status people who produce ideas worth paying attention to. In particular, they argued that this dynamic had a chilling effect on EA and caused people to speak in a much more guarded way.

iv) By acquiring resources and status EA had drawn the attention of people who were interested in these resources, instead of the mission of EA. These people would damage the epistemic norms by attempting to shift the outcomes of truth-finding processes towards outcomes that would benefit them.

They then argued that, despite the reasons I gave for believing a significant number of EAs could become successful AI safety researchers, most were lacking a crucial component: a deep commitment to actually fixing the issue, as opposed to merely seeming like they are attempting to fix it. They believed that EA wasn't the right kind of environment for developing people like this, and that without this attribute most of the work people engaged in would end up being essentially pointless.

Originally I listed another point here, but I've removed it since it belonged not to this debate but to a second, simultaneous debate about whether CEA is an effective organisation. I believe the discussion of this topic ended here. I hope that I have represented the position of the people I was talking to fairly, and I apologise in advance if I've made any mistakes.

11 comments

Comments sorted by top scores.

comment by aarongertler · 2019-07-11T02:43:15.041Z · score: 23 (7 votes) · EA · GW

I work for CEA, but these views are my own -- though they are, naturally, informed by my work experience.

----

First, and most important: Thank you for taking the time to write this up. It's not easy to summarize conversations like this, especially when they touch on controversial topics, but it's good to have this kind of thing out in public (even anonymized).

----

I found the concrete point about Open Phil research hires to be interesting, though the claimed numbers for CFAR seem higher than I'd expect, and I strongly expect that some of the most recent research hires came to Open Phil through the EA movement:

  • Open Phil recruited for these roles by directly contacting many people (I'd estimate well over a hundred, perhaps 300-400) using a variety of EA networks. For example, I received an email with the following statement: "I don't know you personally, but from your technical experience and your experience as an EA student group founder and leader, I wonder if you might be a fit for an RA position at Open Philanthropy."
  • Luke Muehlhauser’s writeup of the hiring round noted that there were a lot of very strong applicants, including multiple candidates who weren’t hired but might excel in a research role in the future. I can’t guarantee that many of the strong applicants applied because of their EA involvement, but it seems likely.
  • While I wasn't hired as an RA, I was a finalist for the role. Bastian Stern, one of the new researchers mentioned in this post, founded a chapter of Giving What We Can in college, and new researcher Jacob Trefethen was also a member of that chapter. If there hadn't been an EA movement for them to join, would they have heard about the role? Several other Open Phil researchers (whose work includes the long-term future) also have backgrounds in EA community-building.

I'll be curious to see whether, if Open Phil makes another grant to CFAR, they will note CFAR's usefulness as a recruiting pipeline (they didn't in January 2018, but this was before their major 2018 hiring round happened).

Also, regarding claims about 80,000 Hours specifically:

  • Getting good ops hires is still very important, and I don’t think it makes sense to downplay that.
  • Even assuming that none of the research hires were coached by 80K (I assume it’s true, but I don’t have independent knowledge of that):
    • We don’t know how many of the very close candidates came through 80,000 Hours…
    • ...or how many actual hires were helped by 80K’s other resources…
    • ...or how many researchers at other organizations received career coaching.
  • Open Phil’s enormous follow-on grant to 80K in early 2019 seems to indicate their continued belief that 80K’s work is valuable in at least some of the ways Open Phil cares about.

----

As for the statements about "compromising on a commitment to truth"... there aren't enough examples or detailed arguments to say much.

I've attended a CFAR workshop, a mini-workshop, and a reunion, and I've also run operations for two separate CFAR workshops (over a span of four years, alongside people from multiple "eras" of CFAR/rationality). I've also spent nearly a year working at CEA, before which I founded two EA groups and worked loosely with various direct and meta organizations in the movement.

Some beliefs I've come to have, as a result of this experience (corresponding to each point):

1. "Protecting reputation" and "gaining social status" are not limited to EA or rationality. Both movements care about this to varying degrees -- sometimes too much (in my view), and sometimes not enough. Sometimes, it is good to have a good reputation and high status, because these things both make your work easier and signify actual virtues of your movement/organization.

2. I've met some of the most rigorous thinkers I've ever known in the rationality movement -- and in the EA movement, including EA-aligned people who aren't involved with the rationality side very much or at all. On the other hand, I've seen bad arguments and intellectual confusion pop up in both movements from time to time (usually quashed after a while). On the whole, I've been impressed by the rigor of the people who run various major EA orgs, and I don't think that the less-rigorous people who speak at conferences have much of an influence over what the major orgs do. (I'd be really interested to hear counterarguments to this, of course!)

3. There are certainly people from whom various EA orgs have wanted to dissociate (sometimes successfully, sometimes not). My impression is that high-profile dissociation generally happens for good reasons (the highest-profile case I can think of is Gleb Tsipursky, who had some interesting ideas but on the whole exemplified what the rationalists quoted in your post were afraid of -- and was publicly criticized in exacting detail [EA · GW]).

I'd love to hear specific examples of "low-status" people whose ideas have been ignored to the detriment of EA, but no one comes to mind; Forum posts attacking mainstream EA orgs are some of the most popular on the entire site, and typically produce lots of discussion/heat (though perhaps less light).

I've heard from many people who are reluctant to voice their views in public around EA topics -- but as often as not, these are high-profile members of the community, or at least people whose ideas aren't very controversial.

They aren't reluctant to speak because they don’t have status — it’s often the opposite, because having status gives you something to lose, and being popular and widely-read often means getting more criticism over even minor points than an unknown person would. I’ve heard similar complaints about LessWrong from both well-known and “unknown” writers; many responses in EA/rationalist spaces take a lot of time to address and aren’t especially helpful. (This isn’t unique to us, of course — it’s a symptom of the internet — but it’s not something that necessarily indicates the suppression of unpopular ideas.)

That said, I am an employee of CEA, so people with controversial views may not want to speak to me at all -- but I can't comment on what I haven't heard.

4. Again, I'd be happy to hear specific cases, but otherwise it's hard to figure out which people are "interested in EA's resources, instead of the mission", or which "truth-finding processes" have been corrupted. I don't agree with every grant EA orgs have ever made, but on the whole, I don't see evidence of systemic epistemic damage.

----

The same difficulties apply to much of the rest of the conversation -- there's not enough content to allow for a thorough counterargument. Part of the difficulty is that the question "who is doing the best AI safety research?" is controversial, not especially objective, and tinged by one's perspective on the best "direction" for safety research (some directions are more associated with the rationality community than others). I can point to people in the EA community whose longtermist work has been impressive to me, but I'm not an AI expert, so my opinion means very little here.

As a final thought: I wonder what the most prominent thinkers/public faces of the rationality movement would think about the claims here? My impression from working in both movements is that there’s a lot of mutual respect between the people most involved in each one, but it’s possible that respect for EA’s leaders wouldn’t extend to respect for its growth strategy/overall epistemics.

comment by RomeoStevens · 2019-07-12T03:48:30.726Z · score: 19 (6 votes) · EA · GW

It sounds like one crux might be what counts as rigorous. I find the 'be specific' feedback to be a dodge. What is the other party expected to do in a case like this? Point out people they think are either low-status or not rigorous enough?

The damage, IMO, comes from EA sucking up a bunch of intelligent contrarian people and then having them put their effort behind status quo projects. I guess I have more sympathy for the systemic change criticisms than I used to.

comment by aarongertler · 2019-07-12T08:39:24.955Z · score: 4 (2 votes) · EA · GW

I didn't intend it as a dodge, though I understand why this information is difficult to provide. I just think that talking about problems in a case where one party is anonymous may be inherently difficult when examples can't easily come into play.

I could try harder to come up with my own examples for the claims, but that seems like an odd way to handle discussion; it allows almost any criticism to be levied in hopes that the interlocutor will find some fitting anecdote. (Again, this isn't the fault of the critics; it's just a difficult feature of the situation.)

What are some EA projects you consider "status quo", and how is following the status quo relevant to the worthiness of the projects? (Maybe your concern comes from the idea that projects which could be handled by non-contrarians are instead taking up time/energy that could be spent on something more creative/novel?)

comment by RomeoStevens · 2019-07-12T17:48:11.069Z · score: 12 (4 votes) · EA · GW

Yes, that's the concern. Asking me what projects I consider status quo is the exact same move as before. Being status quo is low status, so the conversation seems unlikely to evolve in a fruitful direction if we take that tack. I think institutions tend to slide towards attractors where the surrounding discourse norms are 'reasonable and defensible' from within a certain frame while undermining criticisms of the frame in ways that make people who point it out seem like they are being unreasonable. This is how larger, older foundations calcify and stop getting things done, as the natural tendency of an org is to insulate itself from the sharp changes that being in close feedback with the world necessitates.

comment by casebash · 2019-07-11T14:53:08.991Z · score: 4 (2 votes) · EA · GW

Sorry, I can't respond to this in detail, because the conversation was a while back. Further, I don't have independent confirmation on any of the factual claims.

I could PM you one name they mentioned for point three, but out of respect for their privacy I don't want to post this publicly. Regarding point four, they mentioned an article describing the dynamic they were worried about.

In terms of resources being directed to something that is not the mission, I can't remember what was said by these particular people, but I can list the complaints I've heard in general: circling, felon voting rights, the dispute over meat at EAG, copies of HPMoR. Since this is quite a wide spread of topics, this probably doesn't help at all.

comment by aarongertler · 2019-07-12T01:58:04.612Z · score: 2 (1 votes) · EA · GW

Not a problem -- I posted the reply long after the post went up, so I wouldn't expect you to recall too many details. No need to send a PM, though I would love to read the article for point four (your link is currently broken). Thanks for coming back to reply!

comment by casebash · 2019-07-12T02:24:27.320Z · score: 2 (1 votes) · EA · GW

Here's the link: https://meaningness.com/geeks-mops-sociopaths

comment by meerpirat · 2019-06-22T15:45:44.080Z · score: 5 (6 votes) · EA · GW

Thanks for the summary! I don't know if that came up during your discussion, but I would have found concrete examples useful for judging the arguments.

"ii) This causes EA to prioritise building relationships with high-status people, such as offering them major speaking slots at EA conferences, even when they aren't particularly rigorous thinkers."

I'd hope that bad arguments from high-status people will be pointed out and the discussion moves forward (e.g. Steven Pinker strawmanning worries about x-risks).

"iii) It also causes EA to want to dissociate from low-status people who produce ideas worth paying attention to."

For example, I find it unlikely that an anonymous writer with good ideas and comments won't be read and discussed on the forum. Maybe it's different at conferences and behind the scenes at EA orgs, though?

"iv) By acquiring resources and status EA had drawn the attention of people who were interested in these resources, instead of the mission of EA. These people would damage the epistemic norms by attempting to shift the outcomes of truth-finding processes towards outcomes that would benefit them."

EAs seem to mostly interact with research groups (part of the institution with the best track record in truth-finding) and non-profits. I'm not worried that research groups pose a significant threat to EA's epistemic standards; rather, I expect researchers to 1) enrich them and 2) be a good match for altruistic/ethical motivations and rigour about them. An example that comes to mind is Open Phil causing/convincing biorisk researchers to shift their research in the direction of existential threats.

Does anyone know of examples of, or mechanisms by which, non-profits might manipulate or have manipulated discussions? Maybe they find very consequential and self-serving arguments that are very difficult to evaluate? I believe some people think about AI safety in this way, but my impression is that this issue has enjoyed a lot of scrutiny.

comment by Khorton · 2019-06-23T00:02:37.623Z · score: 7 (4 votes) · EA · GW

I agree that we don't ignore good ideas from anonymous posts. I do think it's true that we distance ourselves from controversial figures, which might be what OP means by low status?

comment by casebash · 2019-06-23T06:54:47.044Z · score: 2 (1 votes) · EA · GW

I can't say exactly what the people I was talking to meant, since I don't want to put words in their mouths, but controversial figures were likely at least part of it.

comment by casebash · 2019-07-11T14:22:25.506Z · score: 2 (1 votes) · EA · GW

"EAs seem to mostly interact with research groups and non-profits" - They were talking more about the kinds of people who are joining effective altruism than about the groups we interact with.