This post argues that:
- Bostrom's micromanagement has led to FHI having staff retention problems.
- Under his leadership, there have been considerable tensions with Oxford University and a hiring freeze.
- In his apology for the racist email, Bostrom failed to display tact, wisdom and awareness.
- Furthermore, this apology has created a breach between FHI and its closest collaborators and funders.
- Both the mismanagement of staff and the tactless apology caused researchers to resign.
While I'd love for FHI staff to comment and add more context, all of this matches my impressions.
Given this, I stand with the message of the post. Bostrom has been a better researcher than administrator, and it would make sense for him to focus on what he does best. I'd recommend Bostrom and FHI consider having Bostrom step down as director.
Edit: Sean adds a valuable perspective that I highly recommend reading, highlighting Bostrom's contributions to creating a unique research environment. He suggests considering co-directorship as an alternative to Bostrom stepping down.
Don't get me wrong, I just think this is an extremely uncharitable and confusing way of presenting your work.
I think it's otherwise a great collection of coherence theorems and the discussion about completeness seems alright, though I haven't read closely.
My quick take after skimming: I am quite confused about this post.
Of course the VNM theorem IS a coherence theorem.
How... could it not be a coherence theorem?
It tells you that actors following four intuitive properties can be represented as utility maximisers. We can quibble about the properties, but the result sounds important regardless for understanding agency!
The same reasoning could be applied to argue that Arrow's Impossibility Theorem is Not Really About Voting. After all, we are just introducing all these assumptions about what good voting looks like!
Not central to the argument, but I feel someone should be linking here to Garrabrant's rejection of the independence axiom, which is fairly compelling IMO.
Thank you Lizka, this is really good feedback.
I'd personally err towards different subsections rather than different tabs, but glad to see you experimenting to help EA focus on more object level issues!
Here is a write up of the organisation vision one year ago:
Not sure why the link above is not working for you. Here is the link again:
If you want to support work in other contexts, Riesgos Catastróficos Globales is working on improving GCR management in Spain and Latin America.
I believe this project can improve food security in nuclear winter (tropical countries are very promising as last-resort global food producers), biosecurity surveillance (the recent H5N1 episode happened in Spain and there are some easy improvements to biosec in LatAm) and potentially AI policy in Spain.
Funding is very constrained, we currently have runway until May, and each $10k extends the runway by one month.
We are working on a way to receive funds with our new fiscal sponsor, though we can already facilitate a donation if you write to info@riesgoscatastroficosglobales.com.
(disclaimer: I am a co-founder of the org and acting as interim director)
Have you read this report yet? https://forum.effectivealtruism.org/posts/dQhjwHA7LhfE8YpYF/prediction-markets-in-the-corporate-setting
FWIW here are a few pieces of uninformed evidence about Atlas Fellowship. This is scattered, biased and unfair; do not take it seriously.
- I have a lot of faith in Jonas Vollmer as a leader of the project, and stories like Habryka's tea table make me think that he is doing a good job of overseeing the project expenses
- I have heard other rumours in SF about outrageous expenses like a $100k statue (this sounds ridiculous so I probably misheard?) or spending a lot of money on buying and reforming a venue
- I have also heard rumours about a carefree attitude towards money in general, and the staff transmitting that to the alumni
- I've also heard someone involved in the project complain about mismanagement and being overworked
- I'm surprised that the fellowships seem to be offered unconditionally - having been involved in many talent camps I'd be surprised if it raises the application quality much, and it seems that you could have much better discretion after the summer program. But Jonas has experience in grantmaking and finding talent, so maybe all the relevant screening happened before the project (?).
My impression of the project remains positive, and this is mostly driven by the involvement of Jonas.
On the other hand, from the description on paper I think it's probably less cost effective and more risky than other efforts like Carreras con Impacto or SPARC.
I'd be curious to hear more from the Atlas alumni and staff about how they think the project went/is going however.
I don't think it's impossible - you could start from Halperin et al.'s basic setup [1] and plug in some numbers for p(doom), the long-run growth rate, etc., and get a market opinion.
I would also be interested in seeing the analysis of hedge fund experts and others. In our cursory lit review we didn't come across any which was readily quantifiable (would love to learn if there is one!).
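To illustrate the kind of back-of-the-envelope exercise I have in mind, here is a minimal Python sketch. The functional form (a Ramsey-style rule where the annual probability of doom acts like extra discounting) and all the numbers are my own assumptions for illustration, not taken from Halperin et al.

```python
def implied_real_rate(pure_time_pref: float, risk_aversion: float,
                      growth_rate: float, annual_p_doom: float) -> float:
    """Ramsey-style rule with an extinction hazard: r ~ rho + eta * g + x,
    where x (the annual probability of doom) acts like extra discounting."""
    return pure_time_pref + risk_aversion * growth_rate + annual_p_doom

# With rho = 1%, log utility (eta = 1), 2% expected growth and a 1%/year chance of doom,
# the implied real rate is about 4%.
print(implied_real_rate(0.01, 1.0, 0.02, 0.01))  # 0.04
```

Inverting something like this (reading off the market real rate and solving for the doom term) is the sense in which you could back out a "market opinion", though it leans heavily on assumptions about growth after TAI.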
I am not sure I follow 100%: is your point that the WBE path is disjunctive from others?
Note that many of the other models are implicitly considering WBE, eg the outside view models.
Extracting a full probability distribution from eg real interest rates requires multiple assumptions about eg GDP growth rates after TAI, so AFAIK nobody has done that exercise.
$420 per placement is insanely good cost-effectiveness!
In contrast, we spend ~$8000 per new hire at Epoch on evaluations.
If the process significantly alleviates the vetting burden on the orgs I am pretty impressed.
Very excited to see the progress of this org!
Quick guide on the agree/disagree voting system:
- When you upvote a post/comment, you are recommending that more people ought to read and engage with it.
- When you agree vote a post/comment, you are communicating that you endorse its conclusions/recommendations.
- Symmetrically, if you downvote a post/comment you are recommending against engaging with it.
- And similarly, if you disagree vote a post/comment you are communicating that you don't endorse its conclusions/recommendations.
Upvotes determine the order of posts and comments and determine which comments are automatically hidden, so they have a measurable effect on how many people read them.
Agree votes AFAIK do not affect content recommendations, but are helpful to understand whether there is community support for a conclusion, and if so in which direction.
Ways of engaging #4: making a database of experts in fields who are happy to review papers and reports from EAs
Ways of engaging #3: inviting experts from fields to EAG(X)s
Ways of engaging #2: proactively offering funding to experts from the respective fields to work on EA-relevant topics
Ways of engaging #1: literature reviews and introductions of each field for an EA audience.
More transparency about money flows seems important for preventing fraud, understanding centralization of funding (and so correlated risk) and allowing people to better understand the funding ecosystem!
FWIW I was delaying engaging with recent proposals for improving EA, and I really appreciate that Nathan is taking the time to facilitate that conversation.
Every EA-affiliated org should clearly state on their website the sources of funding that contributed over $100k.
Hi! I recommend you join the Spanish-speaking community's Slack through this link.
I can confirm I have access to coauthored post analytics! Great work dev team!
Props to wayne for providing regular and consistent updates to his beliefs; that's actually pretty amazing.
It's more like:
- I am talking about the LatAm community because this is the community I am familiar with
- I don't have great insight into the grantmaker case in specific. I suspect they are overvaluing general community-building work over cause-specific work, which I think is a reasonable thing to disagree on.
- While the subjects of the post have been repeatedly discouraged (by the grantmakers and others) from doing cause-specific work in LatAm, they have come to interact and meet individuals from the UK/US who lack expertise in the topic but who were encouraged and supported to do cause-specific work in LatAm (by different funders, I believe).
I conjecture (but do not claim) that people in the US/UK are better connected and have more opportunities for encouragement and funding compared to people in LatAm. If the people encouraging the US/UK people met these LatAm people, I think they would agree that the latter are better prepared to do the work (since they have cause-specific expertise and local knowledge).
Basically, yes, though:
- They wanted to do a mixture of "original research" and "community building specifically focused on their area of expertise"
- The grantmaker didn't explicitly say they were a bad fit for it, so it could be construed as inquiring about their theory of impact. A charitable interpretation is that the grantmaker put the grant on hold because they thought the would-be grantee was tackling too many tasks simultaneously, or because of external factors (e.g. FTX) that were not clearly communicated.
- A similar scenario has happened other times with other people. I highlighted this because it left a written record behind so it was easier for me to understand what happened and write about it, even though I don't think it's a good central example.
I meant area-specific (as in eg biosecurity projects) in Latin America
In this case, a mixture of developing research, getting involved in existing initiatives and doing community building for two specific cause areas they have certified expertise in.
This is as opposed to, eg, arranging a translation of The Precipice, or evangelizing and running events for the core ideas of Effective Altruism.
For example, imagine that the people I mentioned intended to work on AI safety and biosecurity on the side while doing community building work.
FWIW I'm one of the future users of this project and regularly chatting to this team.
My use case is for research, eg validating this approach with empirical data.
I expect this database will be useful in the future as a benchmark to test similar approaches, and the program probably justifies its (low) costs on those grounds alone.
Sounds great, thank you Zoe!
I thought that the point was to help with active reading and little more.
Cold Takes has a pretty good summary of arguments for <50 years timelines.
Separately from the FTX issue, I'd be curious about you dissecting what of Zoe's ideas you think are worth implementing and what would be worse and why.
My takes:
- Set up whistleblower protection schemes for members of EA organisations => seems pretty good if there is a public commitment from an EA funder to something like "if you whistleblow, we'll cover your salary if you are fired while you search for another job" or something like that
- Transparent listing of funding sources on each website of each institution => Seems good to keep track of who receives money from who
- Detailed and comprehensive conflict of interest reporting in grant giving => My sense is that this is already handled sensibly enough, though I don't have great insight on grantgiving institutions
- Within the next 5 years, each EA institution should reduce their reliance on EA funding sources by 50% => this seems bad for incentives and complicated to put into action
- Within 5 years: EA funding decisions are made collectively => seems like it would increase friction and likely decrease the quality of the decisions, though I am willing to be proven wrong
- No fireside chats at EAG with leaders. Instead, panel/discussions/double cruxing disagreements between widely known and influential EAs and between different orgs and more space for the people that are less known => Meh, I'm indifferent since I just don't consume that kind of content so I don't know the effects it has, though I am erring towards it being somewhat good to give voice to others
- Increase transparency over:
  - Who gets accepted/rejected to EAG and why => seems hard to implement, though there could be some model letters or something
  - The leaders/coordination forum => I don't sense this forum is anywhere near as important as these recommendations imply
- Set up: ‘Online forum of concerns’ => seems somewhat bad / will lead to overly focusing on things that are not that important, though good to survey people on concerns
I am so dumb, I was mistakenly using odds instead of probabilities to compute the Brier score :facepalm:
And yes, you are right, we should extremize before aggregating. Otherwise, the method is equivalent to geo mean of odds.
It's still not very good though
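For concreteness, here is a minimal sketch of the mistake (the function and example numbers are mine, purely for illustration):

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities (in [0, 1], NOT odds) and 0/1 outcomes."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return np.mean((probs - outcomes) ** 2)

# A forecast of 0.8 on an event that happens scores (0.8 - 1)^2 = 0.04.
print(brier_score([0.8], [1]))   # 0.04
# Feeding in the odds (0.8 / 0.2 = 4.0) instead gives a meaningless 9.0.
print(brier_score([4.0], [1]))   # 9.0
```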

Thanks Jonas!
- I'd forgotten about that great article! Linked.
- I feel some of these would be good bachelor / MSc theses yeah!
It would, however, send a credible signal that the EA community does not benefit from fraud, and create an incentive to 1) better scrutinize future donors and 2) not engage in fraud for the sake of the community.
Without more context, I'd say to fit a distribution to each array and then aggregate them using a weighted linear combination of the resulting CDFs, assigning each a weight proportional to your confidence in the assumptions that produced the array.
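As a rough sketch of what I mean (the distribution family, weights and sample arrays are made up for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two hypothetical arrays produced under different modelling assumptions.
samples_a = rng.lognormal(mean=1.0, sigma=0.5, size=1000)
samples_b = rng.lognormal(mean=1.5, sigma=0.8, size=1000)

# Fit a distribution to each array (lognormal here, purely illustrative).
dist_a = stats.lognorm(*stats.lognorm.fit(samples_a, floc=0))
dist_b = stats.lognorm(*stats.lognorm.fit(samples_b, floc=0))

# Weights proportional to your confidence in the assumptions behind each array.
w_a, w_b = 0.7, 0.3

def aggregated_cdf(x):
    """Weighted linear combination (mixture) of the fitted CDFs."""
    return w_a * dist_a.cdf(x) + w_b * dist_b.cdf(x)

print(aggregated_cdf(5.0))  # probability that the aggregated quantity is below 5
```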
Depends on whether you are aggregating distributions or point estimates.
If you are aggregating distributions, I would follow the same procedure outlined in this post, and use the continuous version of the geometric mean of odds I outline in footnote 1 of this post.
If you are aggregating point estimates, at this point I would use the procedure explained in this paper, which is taking a sort of extremized average. I would consider a log transform depending on the quantity you are aggregating. (though note that I have not spent as much time thinking about how to aggregate point estimates)
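To make the two cases concrete, here is a minimal sketch of the geometric mean of odds for point probabilities (the function name and example numbers are mine; see the linked post and paper for the actual procedures, including the extremization step):

```python
import numpy as np

def geo_mean_of_odds(probs, weights=None):
    """Aggregate probability forecasts via a (weighted) geometric mean of odds."""
    probs = np.asarray(probs, dtype=float)
    weights = np.ones_like(probs) if weights is None else np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    log_odds = np.log(probs / (1 - probs))           # move to log-odds space
    agg_odds = np.exp(np.sum(weights * log_odds))    # weighted geometric mean of odds
    return agg_odds / (1 + agg_odds)                 # back to a probability

print(geo_mean_of_odds([0.1, 0.5, 0.8]))  # ~0.43, vs ~0.47 for the arithmetic mean
```

Roughly speaking, an extremized average then pushes the aggregate further away from 0.5, e.g. by raising the aggregate odds to a power greater than 1.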
Some cool people from the Spanish-Speaking community:
- The coordinator Sandra Malagón, who in the space of one year has kickstarted an EA hub in Mexico and helped raise a community in Chile and Colombia.
- Pablo Melchor, founder of Ayuda Efectiva, the Spanish GiveWell
- Melanie Basnak, senior research manager at Rethink Priorities
- Juan García, researcher at ALLFED, who works in food security
- Ángela María Aristizábal, researcher at FHI, who works in GCRs and community building
- Pablo Stafforini, who built the EA Forum Wiki, is involved in many cool projects and has been involved since the very beginning of EA
- Michelle Bruno, an early-career person who now works in community building in Mexico and on a biosecurity project
- Jaime Fernández who works in community building in Colombia and is researching some philosophy topics
- Laura González, who co-coordinates the Spanish speaking community and leads the Spanish translation project.
Well, time-travel machines are a type of hardware... 👅
Brilliant! I found this a really good introduction to some of the epistemic norms I most value in the EA community.
It's super well written too.
PRT gives moral theories greater influence over the particular choice situations that matter most to them, and lesser influence over the particular choice situations that matter least to them.
That seemed like the case to me.
I still think that this is too weak and that theories should be allowed to entirely give up resources without trading, though this is more an intuition than a carefully considered point.
then there's not really any principled reason to rule out trying to take into account allocations you can't possibly affect (causally or even acausally), e.g. the past and inaccessible parts of the universe/multiverse, which seems odd
I don't understand 1) why this is the case or 2) why this is undesirable.
If the rest of my community seems obsessed with IDK longtermism and overallocating resources to it I think it's entirely reasonable for me to have my inner longtermist shut up entirely and just focus on near term issues.
I imagine the internal dialogue here between the longtermist and neartermist being like "look I don't know why you care so much about things that are going to wash off in a decade, but clearly this is bringing you a lot of pain, so I'm just going to let you have it"
I think this would undermine risk-neutral total symmetric views, because a) those are probably overrepresented in the universe
I don't understand what you mean.
it conflicts with separability, the intuition that what you can't affect (causally or acausally) shouldn't matter to your decision-making
Well then separability is wrong. It seems to me that it matters that no one is working on a problem you deem important, even if that does not affect the chances of you solving the problem.
other main approaches to moral uncertainty aren't really sensitive to how others are allocating resources in a way that the proportional view isn't
I am not familiar with other proposals to moral uncertainty, so probably you are right!
(Generally, I would not take what I am saying too seriously - I find it hard to separate my intuitions on values from intuitions about how the real world operates, and my responses are more off-the-cuff than considered)
TL;DR (I have not read the full paper), so this might be addressed in the paper.
FWIW my first impulse when reading the summary is that proportionality does not seem particularly desirable.
In particular:
- I think it's reasonable for one of the moral theories to give up part of its allotted resources if the other moral theory believes the stakes are sufficiently high. The distribution should be stakes-sensitive (though it is not clear how to make inter-theoretic comparisons of stakes)
- The answer does not seem to guide individual action very well, at least in the example. Even accepting proportionality, it seems that how I split my portfolio should be influenced by the resource allocation of the world at large.
The Stack Overflow case [1] that Thomas linked to in another comment seems a good place to learn from.
I think multiple license support on a post-by-post basis is a must. Old posts must be licensed as all-rights-reserved, except for the right of publication on the Forum (which the authors are understood to have granted de facto when they published).
New posts can be required to use a particular license or (even better) users can choose what license to use, with the default being preferably CC-BY per the discussion on other comments.
The license on all posts should be ideally updatable at will, and I would see it as positive to nudge users to update the license in old posts to CC-BY (perhaps sending them an email or a popup next time they log in that gathers their explicit permission to do so).
To be clear, the thing that made me feel weird is the implication that this would be applied retroactively and without explicit consent from each user (which I assume is not what was meant, but it is how it read to me).
I'm perfectly fine with contributions going forward requiring a specific license as in arXiv (preferably requiring a minimal license that basically allows reproduction in the EA Forum and then having default options for more permissive licenses), as long as this is clearly explained (eg a disclaimer below the publish button, a pop-up, or a menu requiring you to choose a license).
I am also fine with applying this change retroactively, as long as authors give their explicit permission and have a chance beforehand to remove content they do not want released this way.
Epistemic status: out of my depth
- The license should be opt-out (in fact I don't think you can legally force a license on the content created by authors without their explicit consent?)
- CC-BY would be a much better default choice. Commercial use is an important aspect of truly open source content.
- Even better to offer multiple license options on posts, so people can tailor it to their needs. I'm a big fan of how this is handled for example in arXiv or GitHub, with multiple options.
I notice I had a hair-raising chill when reading this part:
we are planning to make Forum content published under a Creative Commons Attribution-NonCommercial license
This made me feel as if you were implying that you are the owners of the content on the Forum, which you are not - the respective authors are.
I believe that what you were trying to convey is:
We plan to add an opt-out option for authors to release future content under a XX license
There is also the question of how to handle past content.
The simplest option would be to leave everything with their default option (which for posts without an explicit license would be all-rights-reserved under current copyright law), but add the possibility for authors to change the license manually.
A more cumbersome option, but one that might help increase the availability of content, is some sort of pop-up asking for explicit permission to change all past content of current users to CC-BY, though I imagine that could be more work to implement and not clearly worth it.
You can do this easily enough with external tools. I use the stayfocused plugin on Chrome for this.
Having a TL;DR box at the beginning of the posts sounds amazing