I worked at FHI as a research scholar from 2018-2020. At that time I didn't hear anyone saying that Bostrom should step down (and I definitely didn't think he should).
To be clear, it has been obvious to everyone that FHI has had severe operations/logistical issues. However, it's much less clear if or how FHI would function without Nick Bostrom.
I'm pretty nervous about rumors in situations like this.
If I were in charge of making any decision here, I'd send out surveys and have a bunch of conversations.
I think good startups often do this, but lots of startups have trouble at this stage. Many do have their own cultures that are difficult to retain as they grow.
I think EA is more intense as there's more required material to understand, but it's a similar idea.
I agree that management doesn't get much benefit by giving valuable public negative feedback to people. However, I'd push back on the idea that management can "just fire" people they don't like.
Many managers are middle managers. They likely have a lot of gripes with their teams, but they need to work with someone, and often, it would be incredibly awkward or controversial to fire a lot of people.
Thanks! I tried going a bit more into detail in point 2 on the previous post.
Thanks for the point. I also had someone else make a similar comment in the draft, I should have expected others to raise it as well.
Good point. I was trying to keep this post focused on one specific bottleneck of criticism, I definitely agree there are others too.
I added the following text, to clarify:
To be clear, there are many bottlenecks between "someone is in a place to come up with a valuable critique" and "different decisions actually get made." This process is costly and precarious at each step. For instance, decision makers think in very different ways than critics realize, so it's easy for critics to waste a lot of time writing to them.
This post just focuses on the challenges that come from things being uncomfortable to say. Going through the entire pipeline would require far more words.
I appreciate the thought, but personally really don't see this as a mistake on ConcernedEAs.
I actually pushed that post back a few days so that it wouldn't conflict with Owen's, trying to catch some narrow window when there aren't any new scandals. (I'm trying not to overlap closely with scandals, mainly so it doesn't seem like I'm directly addressing any scandal, and to not seem disrespectful.)
I think if we all tried timing posts to be after both scandals and related posts, then we'd develop a long backlog of posts that would be annoying to manage.
I'm happy promoting norms where it's totally fine to post things sooner rather than later.
It's a bit frustrating that the Community frontpage section just shows 3 items, but that's not any of our faults. (And I get why the EA Forum team did this)
I defer a lot to experts / well respected managers.
To me, EA has a bunch of young people optimized a lot for some specific non-management talents. It seems a lot like a startup in that way.
Many startups go through "growing up" periods. Some totally fail at this, but when it works well, the outcome can be very successful.
I imagine as we get good consultants here, they will make some fairly straightforward and correlated recommendations that I'd agree with.
I found the Personal MBA reading list to be interesting. There are really a lot of "serious organization" skills that are hard to get good at.
This seems a lot like satire to me; the title definitely implies that.
Happy to see this, thanks for putting it together.
For what it's worth, I roughly agree with a lot of this. I personally see EA challenges now very much as "maturing in management, generally" as opposed to anything very specific, like, "stopping a few bad actors".
I expect that many senior people roughly would agree.
I believe anyone who pitches people to participate in circling with others who are pretty much strangers to them (and not super-carefully-vetted) and applies implicit peer pressure and doesn't warn them that this sort of thing can be psychologically risky and unsafe, is either dangerously clueless or a bad actor.
For what it's worth, I live in the Bay Area, where there are large spirituality communities and surprisingly related "professional development" communities. These practices seem surprisingly normal in these communities.
I think that the leaders of these groups are typically very overconfident in their approaches, are a bit desperate to sell them, and not very epistemically sophisticated, so very rarely give adequate warnings and help.
This seems really unrelated to Owen, but because I saw this, I'd flag I also went to a circling retreat in Oxford around that time, it might have been the same one.
I found it to be personally fairly uninteresting, and got weird vibes from the instructor. In a discussion that Friday (the first day), he mentioned a lot of metamodernism stuff, including a lot of material by Ken Wilber. The spirituality vibes were similar to what I know of some communities in the Bay.
I did some online searching that evening, and found some reports of sexual harassment and similar around the upper parts of Circling Europe.
My general impression is something like, "Issues of sexual harassment and similar are just endemic in alternative communities."
I know lots of other people I respect have gotten valuable things from circling and similar retreats. I've also done a bit of circling without the official mediators and found it to be mostly fine.
I only attended the first day, and decided not to join for the next two. (That said, in fairness, I find incredibly few activities better than my best non-retreat activities, so this itself isn't saying much.)
At the one I was at, maybe 20% of the group seemed to be EAs, though I don't remember specifically.
After thinking about it more, I decided that I was wrong, and changed it accordingly.
Thanks for the comment!
Related to using the Virtue of Discernment:
Instead of asking, "Is it net good or net bad", I think it's much more interesting to catalogue and understand all the ways it's both good and bad.
Some negative takeaways:
- OpenAI & Microsoft are bullish on releasing risky technologies quickly.
- The market seems to encourage this behavior.
- Google seems like it's been encouraged to do similar work, faster.
- Likely to inspire more people to invest in this sort of thing and make companies in the space.
Good things (as you mention):
- Really good for failures to happen publicly
- Might be indicative of a slow takeoff. My hunch is that we generally want as much AI progress to happen as possible before any hard takeoff, though I'd prefer it all to happen more slowly than quickly.
Maybe, I'm not sure. I think the "good communicator" line is decent, it seems very possible the "bad communicator" should be the other way.
+1 for clarification. It could be neat if you could use a standard diagram to pinpoint what sort of criticism each one is.
For example, see this one from Astral Codex Ten.
After listening to the rest of that post with James, I'll flag that while I agree that "EA is a lot like what many would call an ideology", I disagree with some of the content in the second half.
I think using tools like ethnography, agent-based modeling, and phenomenology could be neat, but to me, they're pretty low-priority as improvements to EA right now. I'd imagine it could take a serious investment ($200k? $300k? Someone strong would have to come along with a proposal first) to produce something that really changes decision making, and I can think of other things I'd prefer that money be spent on.
There seems to be some assumption that the reason why such actions weren't taken by EA was because EAs weren't at all familiar and didn't read James' post. I think that often a more likely reason is just because it's a lot of work to do things, we have limited resources, and we have a lot of other really important initiatives to do. Often decision makers have a decent sense of a lot of potential actions, and have decided against them for decent reasons.
Similarly, I don't feel like the argument brought forth against the use of the word "aligned" when discussing a person was very useful. In that case I would have liked for you to have tried to really pin down what a good solution would look like. I think it's really easy to err on the side of "overfit on specific background beliefs" or "underfit on specific background beliefs", and tricky to strike a balance.
My impression is that critics of "EA Orthodoxy" basically always have some orthodoxy of their own. For example, I imagine few would say we should openly welcome Nazi sympathizers, as an extreme example. If they really have no orthodoxy, and are okay with absolutely any position, I'd find this itself an extreme and unusual position that almost all listeners would disagree with.
Some quick thoughts, poorly structured:
- I like seeing more attempts at understanding “EA Critiques” / ways of improving EA.
- I think the timing of this release is inconvenient, but I don’t blame you.
- Personally I feel exhausted by the last few months of what felt like a firestorm of angry criticism. Much of it, mainly from the media and Twitter, feels like it was very antagonistic and in poor taste. At the same time, I think our movement has a whole lot of improvement to do.
- As with all critiques, I am emotionally nervous about it being used as “cheap ammunition” for groups that just want to hate on EA.
- Personally, I very much side with James already on the Ideology question. I think Helen’s post was pretty bad. I’m not sure how much Helen’s post represents “core EA understanding”, and as such, the attack on it feels a bit less like “EA criticism” than “regular forum content”. However, this might well be nitpicking. I listened to around half of this so far and found it reasonable (as expected, as I also agreed with the blog post).
- I think issues around critique can still be really valuable. But I also think they (unfortunately) need to be handled more carefully than some other stuff we do. I’ll see about writing more about this later.
- My guess is that 70%+ of critiques are pretty bad (as is the case for most fields). I’d likewise be curious about your ability to push back on the bad stuff, or maybe better, to draw out information to highlight potential issues. Frustratingly though, I imagine people will join your podcast and share things in inverse proportion to how much you call them out. (This is a big challenge podcasts have)
- I suggest monitoring Twitter. If people do take parts of your podcast out of context and do bad things with them, keep an eye out and try to clarify things.
- Good luck!
I agree with the thrust of this. To me, much of the issue has to do with the coarseness of the ontology of interventions that we use.
Things like Discernment can help break this down.
By chance have you considered/investigated AI friends?
My impression is that they could be a really big deal. Possibly really net-bad, possibly really net-good.
There are audio versions on the Substack. I can see about adding them to the EA Forum more directly in the future.
Similar. I think I'm happy for QURI to be listed if it's deemed useful.
Also though, I think that sharing information is generally a good thing, this type included.
More transparency here seems pretty good to me. That said, I get that some people really hate public rankings, especially in the early stages of them.
This is very interesting, really happy to see this. As normal, I think it's good to take these with a big grain of salt - but I'm happy to get any halfway-reasonable attempt at a starting point.
One big issue here is that the boundaries are for the 25th/50th/75th percentiles. I would have expected many of these extrapolations to get much wilder (either doom or utopia), but maybe much of that is outside these percentiles.
Even then though, I imagine many readers around here might give >25% odds to at least one of "discontinuous benefit or catastrophic harm", by 2122. 2122 is a really long time.
Many of the confidence bands seem to grow linearly over time, instead of exponentially or similar. This is surprising to me.
One point: I would be pretty enthusiastic about people making "meta-predictions", treating these as baselines. For instance, "In 5 years, these estimates will have been revised, and the difference will be less than 20%."
That way, onlookers could make quick forecasts on "how correct this set of forecasts is", using simpler (not time-series) methods.
It seems like a bunch of care/preparation went into having good questions, so I think here I'd have a lot of trust in the interviewer's brief.
Just to flag - in this case, we spent some time in the beginning making a very rough outline of what would be good to talk about. Much of this is stuff Eli put forward. I've also known Eli for a while, so had a lot of context going in.
Same for QURI (Assuming OP ever evaluates/funds QURI)
For those who go through this, I'm really curious how important the transcript was.
In terms of (marginal) work, this was something like:
- In person prep+setup: 3 hours
- Recording: 1.5 hours
- Editing: ~$300, plus 4 hours of my time
- Transcription: $140, plus around ~5 hours of our team's time.
(There was also a lot of time in me sort of messing around and learning the various pieces, but much of that could be later improved. Also, I was really aggressive on removing filler words and pauses. I think this is unusual, in part because it's resource-intensive to do well. )
I'd like to do something like, "Only do transcripts for videos that get 50 upvotes, or we are pretty sure will get 50 upvotes", but I'm not sure. (My guess is that poor transcripts, which means almost anything that takes less than ~$200/3 hours time, will barely be good enough to be useful)
Glad you liked it!
I'll see about future videos with him.
I'll flag that if others viewing this have more suggestions, or would like to just talk about your takes on things like this, publicly, do message me.
The transcripts are pretty annoying to do (they're the labor-intensive part that's hardest to outsource), but the rest isn't that bad.
Yea, I assume the full version is impossible. But maybe there are at least some simpler statements that can be inferred? Like, "<10% of transformative AI by 2030."
I'd be really curious to get a better read on what market specialists around this area (maybe select hedge fund teams around tech disruption?) would think.
This seems pretty neat, kudos for organizing all of this!
I haven't read through the entire report. Is there any extrapolation based on market data or outreach? I've seen the argument that market actors don't seem to have short timelines used as the main argument that timelines are at least 30+ years out.
I earlier gave some feedback on this, but more recently spent more time with it. I sent these comments to Nuno, and thought they could also be interesting to people here.
- I think it’s pretty strong and important (as in, an important topic).
- The first half in particular seems pretty dense. I could imagine some rewriting making it more understandable.
- Many of the key points seem more encompassing than just AI: “selection effects”, “being in the Bay Area” / “community epistemic problems”. I’d wish these could be presented as separate posts and linked to here (and other places), but I get this isn’t super possible.
- I think some of the main ideas in the point above aren’t named too well. If it were me, I’d probably use the word “convenience” a lot, but I realize that’s niche now.
- I really would like more work really figuring out what we should expect of AI in the next 20 years or so. I feel like your post was more like, “a lot of this extremist thinking seems fishy”, more than it was “here’s a model of what will happen and why”. This is fine for this post, but I’m interested in the latter.
- I think I mentioned this earlier, but I think CFAR was pretty useful to me and a bunch of others. I think there was definitely a faction that wanted them to be much more aggressive on AI, and didn’t really see the point of donating to them besides that. My take is that the team was pretty amateur at a lot of key organizational/management things, so did some sloppy work/strategy. That said, there was much less money then, and there wasn’t a whole lot of great talent for such things. I think they were pretty overvalued at the time by rationalists, but I would consider them undervalued relative to what EAs tend to think of them now.
- The diagrams could be improved. At least, bold/highlight the words “for” and “against”. I’m also not sure if the different-sized blocks are really important.
There was some discussion of this here: https://forum.effectivealtruism.org/posts/jRJyjdqqtpwydcieK/ea-could-use-better-internal-communications-infrastructure
I'd recommend splitting these up into different answers, for scoring. I imagine this community is much more interested in some of these groups than others.
> Finally, we ask that people upvote or downvote this post on the basis of whether they believe it to have made a useful contribution to the conversation, rather than whether they agree with all of our critiques.
I think this presents a false dilemma, and recommends what seems like an unusual standard that other posts probably don't have.
"believe it to have made a useful contribution to the conversation" -> This seems like arguably a really low bar to me. I think that many posts, even bad ones, did something useful to the conversation.
"whether they agree with all of our critiques." -> I never agree with all of basically any post.
I think that more fair standards of voting would be things more like:
"Do I generally agree with these arguments?"
"Do I think that this post, as a whole, is something I want community members to pay attention to, relative to other posts?"
Sadly we don't yet have separate "vote vs. agreement" markers for posts, but I think those would be really useful here.
Portions of this reform package sound to my ears like the dismantling of EA and its replacement with a new movement, Democratic Altruism ("DA").
I like the choice to distill this into a specific cluster.
I think this full post definitely portrays a very different vision of EA than what we have, and than what I think many current EAs want. It seems like some particular cluster of this community might be in one camp, in favor of this vision.
If that were the case, I would also be interested in this being experimented with, by some cluster. Maybe even make a distinct tag, "Democratic Altruism" to help organize conversation on it. People in this camp might be most encouraged to directly try some of these proposals themselves.
I imagine there would be a lot of work to really put forward a strong idea of what a larger "Democratic Altruism" would look like, and also, there would be a lengthy debate on its strengths and weaknesses.
Right now I feel like I keep on seeing similar ideas here being argued again and again, without much organization.
(That said, I imagine any name should come from the group advocating this vision)
I imagine you'd also likely agree that these proposals trade off against everything else that the EA orgs could be doing, and it's not super clear any are the best option to pursue relative to other goals right now.
Of course. Very few proposals I come up with are a good idea for myself, let alone others, to really pursue.
I think there's probably a bunch of different ways to incorporate voting. Many would be bad, some good.
Some types of things I could see being interesting:
- Many EAs vote on "Community delegates" that have certain privileges around EA community decisions.
- There could be certain funding groups that incorporate voting, roughly in proportion to the amounts donated. This would probably need some inside group to clear funding targets (making sure they don't have any confidential baggage/risks) before getting proposed.
- EAs vote directly on new potential EA Forum features / changes.
- We focus more on community polling, and EA leaders pay attention to these. This is very soft, but could still be useful.
- EAs vote on questions for EA leaders to answer, in yearly/regular events.
Thanks! I definitely agree that improvement would be really great.
If others reading this have suggestions of other community examples, that would also be appreciated!
First, I want to say that I really like seeing criticism that's well organized and presented like this. It's often not fun to be criticized, but the much scarier thing is for no one to care in the first place.
This post was clearly a great deal of work, and I'm happy to see so many points organized and cited.
I obviously feel pretty bad about this situation, where several people all felt like they had to do this in secret in order to feel safe. I think tensions around these issues feel much more heated than I'd like them to. Most of the specific points and proposals seem like things that, in a slightly different world, all sides could feel much more chill discussing.
I'm personally in a weird position, where I don't feel like one of the main EAs who make decisions (outside of maybe RP), but I've been around for a while and know some of them. I did some grantmaking, and now am working on an org that tries to help figure out how to improve community epistemics (QURI).
Some Quick Impressions
I think one big division I see in discussions like this, is that between:
- What's in the best interest of EA leadership/funding, conditional on them not dramatically changing their beliefs about key things (this might be very unlikely).
- What's an ~irreconcilable difference of opinion (after a reasonable time of debate/investigation, say, a few days of solid reading).
Bucket 1 is more about convincing and informing each other. The way to make progress there is by deeply understanding those with power, and explaining how it helps their goals.
Bucket 2 is more about relative power. No two people are perfectly aligned, even after years of deliberation. Frustratingly, the main ways to make progress here are to either move power from some players to others, or to just make power moves (taking actions that help your interests, in comparison to other stakeholders).
Right now, in EA, the vast majority of funding (and thus control) ultimately comes from one source. This is a really uncomfortable position, in many ways.
However, other members of the community clearly have some power. They could do some nice things like write friendly posts, or some not so nice things (think of strikes) like leaking information or complaining/ranting to antagonistic journalists.
I imagine that eventually we could find better ways to do group bargaining, like some sort of voting system (similar to what you recommend).
Back to this post: some of the way it's written reminds me of the "lists of demands" that I'm used to seeing in fairly antagonistic negotiations, in the style of Bucket 2.
My guess is that this wasn't your intention. Given that it's so long (and must have involved a lot of coordination to write), I could definitely sympathize with "let's just get it out there" instead of making sure its style is optimized for Bucket 1 (if that was your intention). That said, if I were a grantmaker now, I could easily see myself putting this in my "some PR fire to deal with" bucket rather than "some useful information for me to eventually spend time with".
By chance, can you suggest any communities that you think do a good job here?
I'm curious who we could learn from.
Or is it like, "EAs are bad, but so are most communities." (This is my current guess at what I believe)
Thanks for the post, I found that interesting!
Sorry you felt like you'd made mistakes here. We all make mistakes; I make them constantly.
I look forward to your future posts.
I think criticism is really complicated and multifaceted, and we have yet to develop nuanced takes on how it works and how to best use it. (I've been doing some thinking here).
I know that orgs do take some criticism/feedback very seriously (some get a lot of this!), and also get annoyed or ignore a lot of other criticism. (There's a lot of bad stuff, and it's hard to tell what has truth behind it).
One big challenge is that it's pretty hard to do things. It's easy to suggest, "This org should do this neat project", but orgs are often very limited in what they can do at all, let alone what unusual things, or things they aren't already thinking about and good at, they could do.
There's definitely more learning to do here.
On Democratic Proposals - I think that more "Decision making based on democratic principles" is a good way of managing situations where power is distributed. In general, I think of democracy as "how to distribute power among a bunch of people".
I'm much less convinced about it as a straightforward tool of better decision making.
I think things like Deliberative Democracy are interesting, but I don't feel like I've seen many successes.
I know of very little use of these methods in startups, hedge funds, and other organizations that are generally incentivized to use the best decision making techniques.
To be clear, I'd still be interested in more experimentation around Deliberative Democracy methods for decision quality, it's just that the area still seems very young and experimental to me.
Prizes for commenters who do "moderator" activities:
- Clarifying the opinions of people.
- Politely explaining conversational norms to difficult people.
- Making conversation inviting and friendly.
Thanks! I was planning on forwarding this to you, happy you saw it earlier :)
Yea. I like the idea of more/better moderation. I would note that it's a pretty thankless+exhausting job (many thanks to the current mods), so one big challenge is finding people strong enough, trusted enough, and willing to do it.
Blog posts on the EA Forum outlining the incentives and reasons for such a heated environment
EAs read literature into good conversational norms
We bring in a professional moderator (like a marriage therapist) to help oversee some of the discussion online.