Finance: everything from bookkeeping to outsourced CFOs. Legal: contracts, employment, compliance. Tech: software implementation, Salesforce, etc.
Why would it make sense for there to be EA-specific services for these? All of these services seem like things you can outsource to non-EA firms just fine and benefit little to none from EA knowledge/affiliation/alignment.
Where did/does Lightcone get the money to run?
I think your feelings are genuine, but I'm unfortunately not sure what to do about them besides what I'm already doing, which is try to be empathetic and welcoming.
there is a discussion on twitter that suggests screenshots of the forum are fair game. I disagree - while public, this is a different kind of public than twitter. If screenshots are fair game then rephrasing or retracting is out the window.
I had a conversation with someone that went like this:
Them - "Man, the EA Forum is like if all of EA had a water cooler to chat by"
Me, sarcastic - "Great, yeah, real smart of us to have a water cooler that is surrounded by journalists"
I think this gets at an important point that is pretty stifling / chilling, since the norms we've cultivated may not be upheld in other venues. I think it's important to have these conversations in public so everyone can hear, but there are real large costs to that.
Another option: maybe have a moderated conversation in an offline space and then edit it before publishing?
To be clear, I definitely do think you take women's sadness seriously.
Also I certainly hope nothing I've done has implied that you should agree or shut up - that's not my intention at all.
I really do think benefit of the doubt is important. If you misphrase an idea and then concede that you misphrased it, I will understand that and not change my respect for you. I misphrase ideas all the time.
Yeah I'm just going to retract my comment entirely because it looks like I misunderstood the situation.
I don't think this is a good way to think about it. I do actually think this is a pretty racist way of thinking about it. I guarantee you 100% that the reason wherever you are "lacks diversity" is not because minorities "lack the relevant level of aptitude". And I think disparate impact tests are pretty clearly a good thing.
Yeah, we should probably do something about that. My guess is that Community Health is on this (EDIT: they are on this, sorry I missed that message!)
I imagine there are a few things CH could do if they learn the identity of the offender - my guess is an appropriate reaction would be a warning or maybe just a ban from the next EAG, followed by a permanent EAG ban for repeated offending.
Hey Nathan, thanks for sharing even when it's hard. I'd be curious to hear more about "I think that both parties in this current sexual norms discourse find this discussion exhausting." I think there are tremendously simple norms at play here, from Emma's accounts of EAG in this article:
Don't use Swapcard (or other clearly professional infrastructure) to try to get dates / flirt.
Don't immediately start touching people until there's a clearer context / consent for it. If you're in doubt, either ask or don't touch them.
If someone tells you to stop doing something, stop doing it.
There are definitely a few more norms that should be added to this list.
But I don't think these are too hard or exhausting to think about or follow. And, of course, it goes without saying that I imagine it's way more exhausting for sexual harassment victims than for non-victims. Curious what I'm missing?
Questioning, doubt, and dissent are discouraged or even punished.
I think this is probably partially true, given claims in this post and positive-agreevote concerns here (though clearly all of the agree voters might be wrong).
I think you may have very high standards? By these standards, I don't think there are any communities at all that would score 0 here.
I think this is nonzero: subsets of the community do display "excessively zealous" commitment to a leader, given the "What would SBF do" stickers. Outside views of LW (or at least older versions of it) would probably worry that this was an EY cult.
I was not aware of "What would SBF do" stickers. Hopefully those people feel really dumb now. I definitely know about EY hero worship but I was going to count that towards a separate rationalist/LW cult count instead of the EA cult count.
Ok updated to 0.5. I think "the leader is considered the Messiah or an avatar" being false is fairly important.
My call: EA gets 3.9 out of 14 possible cult points.
The group is focused on a living leader to whom members seem to display excessively zealous, unquestioning commitment.
The group is preoccupied with bringing in new members.
The group is preoccupied with making money.
Questioning, doubt, and dissent are discouraged or even punished.
Mind-numbing techniques (such as meditation, chanting, speaking in tongues, denunciation sessions, debilitating work routines) are used to suppress doubts about the group and its leader(s).
The leadership dictates sometimes in great detail how members should think, act, and feel (for example: members must get permission from leaders to date, change jobs, get married; leaders may prescribe what types of clothes to wear, where to live, how to discipline children, and so forth).
The group is elitist, claiming a special, exalted status for itself, its leader(s), and members (for example: the leader is considered the Messiah or an avatar; the group and/or the leader has a special mission to save humanity).
The group has a polarized us- versus-them mentality, which causes conflict with the wider society.
Very weak (+0.1)
The group's leader is not accountable to any authorities (as are, for example, military commanders and ministers, priests, monks, and rabbis of mainstream denominations).
The group teaches or implies that its supposedly exalted ends justify means that members would have considered unethical before joining the group (for example: collecting money for bogus charities).
The leadership induces guilt feelings in members in order to control them.
Members' subservience to the group causes them to cut ties with family and friends, and to give up personal goals and activities that were of interest before joining the group.
Members are expected to devote inordinate amounts of time to the group.
Members are encouraged or required to live and/or socialize only with other group members.
I think your reply is pretty heavily based on deciding between neuroscience PhD and CS PhD, but my guess is >80% likely the best move is to not get a PhD at all.
It's totally plausible that one rogue clergyperson (employee) or congregation (project) could incur enough liability to overwhelm a single corporation's insurance and consume the assets of the other 11 congregations and the central office.
Yeah I think this is a really plausible risk to centralization.
if RP Special Projects grew and/or took on riskier projects, I would at least consider spinning it off into a wholly owned subsidiary of RP.
This is definitely something we are considering doing.
On the first point, if FTX had happened and there were more large EA organisations, it would have been easier to handle the fallout from that, with more places for smaller organisations and individuals to go to for support.
Right, I totally agree with that. I think the way FTX Future Fund pushed their risk onto individuals who were not well equipped to understand and/or handle that was a huge failing and is very sad.
On the last point, it seems like that was part of why DARPA had success: they had lots of projects and were focused on the best succeeding rather than maintaining failing ideas.
Agreed. That's part of what I'm trying to do with Rethink Priorities. But I think there are many ways in which we fail to live up to that, and my guess is that other big orgs maintain cost-ineffective projects due to inertia or political pressure because it's really hard to be ruthless about these sorts of things.
It seems like the main point of the original post and the comments is that more centralization is helpful. For balance, I want to argue against myself: while I think there are clear benefits to net centralization, there are also some reasons/ways net centralization may be harmful:
You are consolidating legal risk into fewer entities, meaning that one high-level mistake can take a lot of things down (in the wake of FTX this seems extra important... I think this is the biggest drawback to more centralization)
Mainly due to the above but also other factors, larger organizations are much more risk averse and can just do fewer things
Smaller orgs/individuals who don't have to care about their reputation as much can take bolder risks (this is both good and bad)
Smaller organizations are quicker to act, require less stakeholder sign-off to get things done (this is both good and bad)
Smaller organizations are, I'd guess, on the margin more able to shut down with less politics if things don't work out, rather than continuing to do something ineffective (though on the other hand, shutting down a small org more clearly means you are fired and lose funding, whereas in a bigger org maybe you can be moved to a new project?)
I think a lot of the stuff Deena touches on in "3 Basic Steps to Reduce Personal Liability as an Org Leader" is important here too. I think de-centralization has led to a lot of people doing work independently and then being really under-resourced to handle the pressures of that (and this intensified 100x in response to FTX in particular). I think grantees-as-individuals need to be very careful about not co-mingling funds, making sure taxes are in order, etc., and our current community plan of having a lot of individual grantees may involve getting people to take on a lot of legal risk that bigger organizations are in a better position to handle.
I think especially during the FTX era but still now, I have been a bit surprised to see a bias towards wanting to fund many smaller orgs rather than one bigger org. FTX had an explicit preference for newer and less established orgs / individuals and I think that clearly backfired. Some of this makes sense as you want to avoid having "all of your eggs in one basket" / "hedge your bets" but I think big orgs have a lot of great advantages that are underrated by EA funders and others.
Disclaimer: Obviously I would say this though given that I run a "big org" (Rethink Priorities). I'm speaking for just myself personally here though, not some RP position (there's a lot of diversity of perspective at RP). Also I am complaining about "funders" here but I am on the EA Infrastructure Fund so maybe I'm part of the problem?
Damn, Alexa sounds like an incredible person. I'm so sorry for your loss. Thank you for sharing more of her with us.
I agree. I've explored some of this. I think it definitely could make sense in a lot of cases.
Thanks - that's a good point.
Yeah I was definitely using the word "proof" colloquially and not literally. My understanding from inside info though is that FHI's issues with Oxford have very little to do with their choice of research agenda. I think this is also clear from outside info (FHI had a similar research agenda for a long time and had university support).
All of the people mentioned joined a long time ago and all but Sandberg have left GPI. Is there anyone of a comparable quality that joined in the last 5 years?
Just two quick nitpicks: I think you mean "FHI" not "GPI". And I think Drexler is still at FHI in addition to Sandberg. But you're right that ASB, Owain, and Stuart Armstrong have left FHI.
GPI is a pretty clear existence proof that while collaborating with universities is difficult and costly, it can be done.
No, it was me who got this wrong. Thanks!
We've had multiple big newspaper attacks now. How'd we do compared to your expectations?
"Chesterton's TAP" is the most rationalist buzzword thing I've ever heard LOL, but I'm putting together that what Chana said is that she'd like there to be some way for people to automatically notice (the trigger action pattern) when they might be adopting an abnormal/atypical governance plan, and then reconsider whether the "normal" governance plan may be that way for a good reason, even if we don't immediately know what that reason is (the Chesterton's fence)?
Given how the Oxford group has become the most internationally relevant group in academic X-risk research, it is hard to argue against his tenure.
Disagree. The relevance here is what FHI will accomplish in the future, not what it has accomplished in the past. And it seems clear that it is not hard to argue against his tenure as people are clearly doing just that.
Beyond that, my main claim is about the mail incident.
Disagree with you as well, but I am going to stand by my desire to not relitigate the apology here and instead defer that conversation to other threads.
I just want to add that I can't think of anyone denying (1) - that there are actual observed differences in IQ tests between races. None of the people ragging on Bostrom are denying this. So the fact that Rutherford and Bostrom agree on (1) is entirely irrelevant and unsurprising. I think the main disagreement is on (2) and way more importantly (3).
I personally agree with titotal that taking a statement like "there are currently differences in average IQ test score between races, for a variety of reasons, primarily racism and its legacy", and reducing it to "blacks are stupider than whites" is - in titotal's words - "stripping away all the context from a complex issue into a gross simplification with a negative spin that furthers a racist narrative". I don't really see what we gain from doing that or why that somehow is cool / should be protected / should be celebrated. I think that's the main crux.
My understanding is that this is indeed unique to FHI, unfortunately. This is maybe why FHI and GPI make for a compelling comparison - both are EA-affiliated, both are in the University of Oxford. While working with a University is never easy, GPI seems totally fine and indeed does continue to hire, run events, etc. FHI does not.
I don't know about the alternatives to Bostrom or how likely they would be to change the situation. Nathan makes a good point that perhaps prediction markets could play a role here. I generally think that, given I run an EA research org that could be construed as competing with FHI for funding/talent/influence/etc. I shouldn't really engage in explicitly calling for Bostrom to step down or help analyze the alternatives. But hopefully I can more generally help people think through the situation more clearly as a whole. I mainly wrote what I wrote because the comment made me angry enough that I felt like I had to.
I hesitate to weigh in here but I really don't think this is a good way of thinking about it.
I'm certainly not trying to "bully" Bostrom and I don't view the author of this post as trying to "bully" Bostrom either. If Bostrom were to step down as Director, I don't see that as somehow a "win" for "bullying", whatever that means.
I do agree that being able to come up with important and useful ideas requires feelings of safety and for this reason and others I always want to give people the benefit of the doubt when they express themselves. Moreover, I understand that in a social movement made up of thousands of people, you are not going to be able to find common agreement on every issue and in order to make progress we need to find some way to deal with that. So I am pretty sympathetic to the view that Bostrom deserves some form of generalized protection even if he's said colossally stupid things.
But - to be clear - no one I know is trying to get Bostrom fired or expelled or cancelled or jailed or anything. He still could have a very cushy, high status, independent non-cancelled life as a "FHI senior researcher", even if he weren't Director. The question is - should he be Director?
My understanding of the view of the author of this post is that:
(1) FHI is probably useful and important and does good things for the world,
(2) FHI would probably be more useful and more important and do more good things for the world if it had a really great Director,
and (3) Bostrom is not a really great Director (at least going forward in expectation).
The alleged "significant mismanagement" seems like great evidence for (3). This is just basic consequentialist reasoning that I think all orgs - especially those that claim to be affiliated with effective altruism - engage in. I'd happily welcome people write "Peter Wildeford should step down as Co-CEO of Rethink Priorities" if there indeed were good reasons for me to do so.
So I certainly find it overdramatic at best to take "here are a few reasons why Bostrom would not be the ideal leader of FHI going forward" and convert it to "all original thinkers have a Sword of Damocles hanging over their head, knowing they might be denounced and fired if that became politically expedient". Being a good leader means things like communicating well, understanding when your actions will have predictably bad consequences, avoiding making everyone really uncomfortable about working with you, and avoiding getting your organization to the point where you can't hire anyone and your operations staff and other key leadership quit. To be frank - a lot of Bostrom's research is great and I'm very grateful to him for a lot of it, but this benchmark is just something Bostrom isn't meeting, and I think independent researcher life would suit him better and be a win-win for everyone.
1 and 2 are very good points, thanks.
re 3: It's also not out of the question that they could just aim to have an open (or private) hiring round for a new Director, perhaps with Ord or Sandberg as Interim/Acting Director in the meantime.
Does being the best philosopher of our time and the main founder or longtermism mean that he is the best person to run FHI as a Director? I don't really see the relevance.
I strongly disagree with you but yeah I really don't think we need to re-litigate the apology. There are lots of other threads for that.
I think it was meant to link to my piece: "Why I'm personally upset with Nick Bostrom right now".
Mervin makes a great point that it is hard to compare GPI to FHI in general. But I also think comparing past FHI and past GPI is not the right way of thinking about it - instead we want to compare current/expected future FHI to current/expected future GPI. And the fact of the matter is quite clear that current/expected future GPI still can actually hire people, engage in productive research work, and maintain a relationship with the university whereas current/expected future FHI I think can best be described as "basically dead".
My analysis suggests that the experts did a fairly good job of forecasting (Brier score = 0.21), and would have been less accurate if they had predicted each development in AI to generally come, by a factor of 1.5, later (Brier score = 0.26) or sooner (Brier score = 0.29) than they actually predicted.
Important missing context from this is that a Brier score of 0.25 is what you would get if you predicted maximally uninformatively (i.e., put 50% on everything no matter what). So that means here that systematically shifting predictions "later" or "sooner" would make you "worse than chance", whereas the actual predictions are "better than chance" (though not by much).
So I think the main takeaways are: (1) predicting this AI stuff is very hard and (2) we are at least not systematically biased as far as we can tell so far in predicting progress too slow or too fast. (1) is not great but (2) is at least reassuring.
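To make the 0.25 baseline concrete, here is a minimal sketch of the binary Brier score (mean squared error between probabilistic forecasts and 0/1 outcomes); the forecast and outcome values are made up for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary
    outcomes (0 = didn't happen, 1 = happened). Lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Always saying 50% scores exactly 0.25, regardless of what happens:
print(brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 1]))  # -> 0.25

# Confident and correct forecasts score near 0:
print(brier_score([0.9, 0.1], [1, 0]))  # -> 0.01 (approximately)
```

This is why 0.21 counts as better than the uninformative baseline while 0.26 and 0.29 count as worse.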
What's a TAP? I'm still not really sure what you're saying.
What's a "Chesterton's TAP"?
I appreciate you raising this despite not having an actual view on the topic and I appreciate you being clear that this is a complex topic that's hard to form a view on.
I think I had a lot of freewheeling conversations at EAG, and I don't think I thought enough about the fact that journalists I don't trust by default might be able to overhear and comment on those conversations. Thinking through this may have a somewhat chilling effect on how I interact at future EAGs, which I find unfortunate.
That being said, I totally agree with you that excluding these journalists may also be unfair or otherwise based on bad norms, and it's a pretty thorny trade-off. Like you, this is something I don't really fully understand.
I do agree there's a wide spectrum of what "disclosing this" looks like and I think it's entirely possible that you did disclose it enough or maybe even disclosed it more than enough (for example, if perhaps we conclude it didn't need to be disclosed at all, then you did more than necessary). I think - like Keller - I don't really have a view on this. But I think the level of disclosure you did do is also entirely possible to be pretty inadequate (again I'm genuinely not sure) given that is on page 9 of a guide I imagine most people don't read (I didn't). But I imagine you agree with this.
SOL didn't close - it just failed to open and my understanding was this was entirely due to financing falling through.
Lightcone on the other hand does not have any financial issues to the best of my limited knowledge, but chose to close due to a change in strategy.
These two situations seem very different.
I'm not aware of any other office situations changing but it definitely makes sense that office strategy in general would be affected by a decline in available assets for offices. I expect this to continue case-by-case.
I really appreciate how thoughtful you've been about this, including sensitivity to downside risks. Do you have any plans to monitor the downside risks? A lot of them seem quite verifiable/testable.
I definitely agree that NYC is a very compelling location too. Best of luck with EAGxNYC and I'll see if I can attend.
I could definitely see the EAG East Coast alternating between Boston and DC every other year. I have nothing against Boston and I think it is also a great place for an EAG and I realize it is a very difficult choice if you can only pick one.
The idea of a professional suit-and-tie EAGxDC with significant policy engagement (perhaps not even branded as "EAG" at all but something else) is pretty appealing to me.
Great to see this announcement! Curious to hear if there are any plans for a Washington DC-based event?
Thank you. I am still considerably unhappy with how this situation was handled but I accept Julia's apology and I am glad to see this did come to some sort of resolution. I'm especially glad to see an independent investigation into how this was handled.
Wow, blast from the past!
The uniform prior case just generalizes to Laplace's Law of Succession, right?
Yeah this is basically what I was trying to say in my comment.
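As a quick sanity check on that generalization, Laplace's Law of Succession is just the posterior predictive probability under a uniform (Beta(1, 1)) prior; a minimal sketch:

```python
def laplace_rule(successes, trials):
    """Laplace's Law of Succession: probability the next trial succeeds,
    after observing `successes` in `trials`, under a uniform prior."""
    return (successes + 1) / (trials + 2)

# With no data at all, the estimate is the uniform prior's mean, 1/2:
print(laplace_rule(0, 0))  # -> 0.5

# After 5 successes in 5 trials, the estimate is 6/7, not 1:
print(laplace_rule(5, 5))  # -> 0.857...
```

The key feature is that the estimate never hits 0 or 1 on finite data, which is what the uniform-prior case buys you.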
Hey Max, I really want to thank you again for everything you've done for CEA, RP, myself, and the broader EA movement. I also am really proud of you for recognizing when you're not in the right space for the role. I think you're a very positive role model, demonstrating to others that it's totally best to prioritize your own health, even when it seems like you're really hard to replace. I look forward to working with the next ED to make EA a great space, and I'm glad you'll still be in an advisory capacity.
Thanks Chana. I'm glad we can both see each other's perspectives. I look forward to hearing more next week. Committing to a response and a rough timeline is already very helpful.