As other writers have noted, the EA (Effective Altruism) movement has a fairly strong deference culture. In many ways this makes sense: many EAs come from a background of reading about and being compelled by organizations like GiveWell. These organizations are built on the premise that charity evaluation is hard and benefits from a full-time team doing it, and that a typical donor has the most impact by deferring to this research. This culture of deference has become quite strong in EA; I fairly frequently have conversations with highly involved EAs who are still deferring on major topics (e.g. cause area, career choice) without investigating them personally.
On the one hand, it is impossible to be an expert in everything, and making hard decisions around doing good is no different. On the other hand, a culture that is too high in deference or defers using the wrong metrics can become homogenized and lead to great opportunities being missed. Often, pieces written about deferral end up falling on the “you should defer more [EA · GW]” side or the “you should defer less [EA · GW]” side.
I think the optimal amount of deferral depends on background expertise and the time you expect to invest. (I don’t think any previous writers would disagree with this, but their posts are typically not interpreted that way.) If you are a tinkerer who has put thousands of hours into learning about cars, you are in a different position with respect to deferring to a mechanic than someone who drives but has never looked under the hood. The same piece of advice, ‘defer less’ or ‘defer more’, would not be equally applicable to those two groups. One might defer too little (e.g. the person who has never opened their hood being fairly confident they can fix something); the other might defer too much (the tinkerer being disappointed to learn that the local mechanic knows far less than expected about a rare engine part).
Another component is discerning which questions are more objectively answerable versus which rest on values or unclear epistemic trade-offs. To use GiveWell as an example: if you want to save the most lives possible with a high degree of confidence, one of their top choices in fighting malaria is a very strong bet, and deferring to their research is advisable. However, they are far less confident about their trade-offs between income and lives saved, so it makes less sense to defer on that topic.
So when does it concretely make sense to defer in EA? Let's examine some clear examples on either side and then work our way to more ambiguous cases.
High deference - New EA
John is brand new to EA and has read a single book on the topic. Although he loves the concepts, he feels overwhelmed with all the new information and does not plan on engaging with it super deeply. He is already well into a solid career and does not imagine EA becoming a big part of his life. Nonetheless, he wants his donations to make the maximum impact from a fairly standard view of saving more lives and reducing pain. He defers to the EA community and ends up donating 10% to GiveWell recommended charities, seeing it as a safe, impactful option that does not take a ton of time.
I think John has made the ideal call here: an optimal decision given the amount of time and energy he wants to put into the topic. But let's look at the same amount of deference from a much more involved EA.
High deference - Experienced EA
Sally has been involved in the EA movement for a number of years; she led her local university chapter for a couple of years before joining an EA organization full time. She has spent several hundred hours engaging with EA content and has a fairly deep understanding of where the cruxes of disagreement between EAs lie. However, when it comes to donating she still feels uncertain. She sees problems with the movement and its granting, and knows of some unique opportunities that most EAs are not aware of. She puts in several dozen hours to investigate a couple of opportunities. However, she also knows that the full-time grantmakers are even more experienced in this area and likely have access to even more information. She thus decides to donate evenly across the EA Funds, trusting that they will ultimately have better judgment than her.
Although this is nearly the same level of deferral, it seems like a real loss to me. Sally fits the profile of someone who could have been a helpful grantmaker had she happened to take a different job, and she would likely have far more impact independently weighing opportunities to find the best one. She is like the tinkerer in the car example above. In addition, the judgment calls made by the EA funders are considerably more value-sensitive than John's rough alignment with GiveWell. Sally might decide to fund one of the funds fully after considering the debate between cause areas, or donate specifically to an unnoticed opportunity that larger grantmakers might miss.
A central claim here is that someone's deference should decrease as they become more knowledgeable in an area. Someone who has been working full-time in EA for years should probably take the time to thoroughly think through their cause prioritization. Someone who is going to pick a career primarily based on impact should likely do enough research to get a good sense of the options, not just pick something from the top of a list. Let's look at some examples of questions where it might make sense to use an informed view rather than deferring, as your experience in the EA movement goes up. As with my room for more funding post [EA · GW], I do not expect this table to be perfectly accurate or cross-applicable. I do think it’s a more helpful guide or frame of reference than the more generic “everyone should defer more” or “everyone should defer less” advice. In this table, when something is in the “areas to investigate” column, the action involves looking at the original sources of arguments and the best critiques (e.g. in the first case I think it would be reading some of GiveWell’s content, spot-checking a few of their assumptions, looking up critics of GiveWell, and looking up the other big charity evaluators to see their differences). I do not mean just asking your local EA chapter leader “is GiveWell the best charity evaluator?” That would really just be a different deferral; I am suggesting direct consideration would be valuable.
| Experience level | Example choices to defer | Example choices to investigate |
| --- | --- | --- |
| An EA who has read one book and has put in ~1 hour or less a week for under a year | What are the best specific charities? | Is GiveWell the best charity evaluator? |
| An EA who has read three books on the topic and been involved in a chapter for one-two years | | |
| An EA who has led a chapter for two years and worked at an EA org. for one | What should a specific organization's plan be? | What are my ethical and epistemic tradeoffs? |
| An EA who has been working full time in EA and considering meta-issues for years | Sub-comparison between charities doing similar work (e.g. AMF vs Malaria Consortium) | What are the biggest weaknesses of current EA views and how should my actions change based on that? |
This table shows how, as someone gains more expertise in an area, they should defer less and less, particularly on topics that are value-sensitive or that relatively few EAs are considering independently. It’s also worth noting that EA is a young movement, and there are likely lots of things the movement as a whole is missing. A culture of deference means only a relatively small number of people are positioned to notice these gaps; with more informed independent thinkers, gaps can be noticed that would otherwise be missed. There are lots of reasons why a high-deferral community might create bad norms [EA · GW].
Overall, I think EA would benefit from a more spectrum-based understanding of deferral, with specific questions and levels of knowledge (like the table above) being the factors discussed, instead of overall views or vague claims about when and when not to defer.
EA has a high deference culture? Compared to what other cultures? Idk, but I feel like the difference between EA and other groups of people I've been in (grad students, City Year people, law students...) may not be that EAs defer more on average, but rather that they are much more likely to explicitly flag when they are doing so. In EA the default expectation is that you do your own thinking and back up your decisions and claims with evidence*, and deference is a legitimate source of evidence, so people cite it. But in other communities people would just say "I think X" or "I'm doing X" and not bother to explain why (and perhaps not even know why, because they didn't really think that much about it).
*Other communities have this norm too, I think, but not to the same extent.
EA has a high deference culture compared to the epistemic norms it claims to adhere to, i.e. compared to the standards it aspires to and claims to follow, I'd say. This can be true independently of the difference with other groups of people that you described (which I think is also a true description).
tl;dr, I think deference is more concerning for EA than other cultures. Relative to how much we should expect EAs to defer, they defer way too much.
1) We should expect EA to have much less deference culture than other cultures, since a lot of EA claims are based on things like answers to philosophical questions, long-term future predictions, etc. These kinds of things are really hard to answer, and I don't think it's the case that most experts have a much better shot at answering them than some relatively smart and quantitative university students. Questions about moral philosophy are the exact kinds of questions you'd expect to have a super wide range of answers to, so the number of EAs that claim they're longtermist is kind of surprising and unexpected. I think this is a sign there's more deference than there should be.
On the other hand, for more concrete and established scientific fields where experts do have a much better chance at making decisions than students, it makes way more sense to defer to them about what things are important.
2) EAs are optimizing for altruism, so decisions on what to work on require lots of thought. I'm guessing most non-EA people choose to work on things they enjoy or are emotionally invested in.
I can easily tell you, without any evidence or deference, what things I think are fun and am emotionally invested in. But it takes a lot more time and research to come up with what I think is the most impactful.
I think EAs having more evidence and reasoning to back up what we're working on just naturally arises from being an EA, and doesn't necessarily mean we have better epistemics than other communities.
3) Explicitly saying when you're deferring to someone seems like it does a better job of convincing people "wow! these EA people seem more correct than most other communities" and does a worse job at actually being more correct than most other communities. Being explicit about when we defer to people still means we might defer way too much.
4) Edit: I think this point is not actually about deference. Also, I know very little about MIRI and have no idea if this is in any way realistic. I'm guessing you could replace MIRI with some other org and this kind of story would be true, but I'm not totally sure.
Also, idk, I feel like some things that look like original, detailed thinking actually end up being closer to deference than I'd like. I think perhaps a story that's happened before is: "MIRI researcher thinks hard about AI stuff, and comes up with some original thoughts with lots of evidence. Writes on the Alignment Forum. Tons of karma, yay."
Sure, the thinking is original, has evidence to back it up, and looks really nice, pretty, and useful. That being said, even if this is original thinking, I'm guessing if you looked at how this person was using the opinions of other people to shape their own opinions, it would look like
- Talking to other MIRI people - 80%
- Talking to non-MIRI EAs - 10%
- Reading books/opinions written by non-EAs relevant to what they're working on - 5%
- Talking to non-EAs - 5%
So even if this thinking looks really original and intelligent, it still seems like a problem with deference. Not deferring to other MIRI researchers an unhealthy amount would probably look more like getting more insight from mainstream academia and non-EAs.
I guess the point here is that it's much easier to look like you're not deferring to people too much than to actually not defer to people too much.
5) I think people in general defer way too much and do not think hard enough about what to work on. I think EAs defer too much and occasionally don't think hard enough about what to work on. Being better than the latter doesn't really mean I'm satisfied with the former.
FWIW I agree that EAs should probably defer less on average. So e.g. I agree with your point 5.
I don't like the example you gave about MIRI -- I think filter bubbles & related issues are real problems but distinct from deference; nothing in the example you gave seems like deference to me. (Also, in my experience the people from MIRI defer less than pretty much anyone in EA. If anyone is deferring too little, it's them.)
Yeah you're right, it does seem separate, although sort of an adjacent problem? I think the larger problem here is something like "EA opinions are influenced by other EAs more than I'd like them to be". Over-deference and filter bubbles are two ways where I think getting too sucked into EA can create bad epistemics.
I didn't mean to call out MIRI specifically, and just tried to choose an EA org where I could picture filter bubbles happening (since MIRI seems pretty isolated from other places). I know very little about what MIRI work *actually* looks like. I'll change the original comment to reflect this.
My view is that when you are considering whether to take some action and are weighing up its effects, you shouldn't in general put special weight on your own beliefs about those effects (there are some complicating factors here, but that's a decent first approximation). Instead you should put the same weight on yours and others' beliefs. I think most people don't do that, but put much too much weight on their own beliefs relative to others'. Effective altruists have shifted away from that human default, but in my view it's unlikely - in light of the general human tendency to overweight our own beliefs - that we've shifted as far in the direction of greater deference as we ideally should. (I think it may not be possible to attain that level of deference, but it's nevertheless good to be clear about what the right direction is.) This varies a bit within the community, though - my sense is that highly engaged professional effective altruists, e.g. at the largest orgs, are closer to the optimal level of deference than the community at large.
I won't be able to give you examples where I demonstrate that there was too little deference. But since you asked for examples, I'll point to some instances where my opinion is that there's too little deference.
Whether you think someone deferred too little or too much regarding some particular decisions will often depend on your object-level views on what's effective. In my view, quite a few interventions pursued by effective altruists are substantially less effective than the most effective interventions; and those who pursue those less effective interventions would normally increase their impact if they deferred more, and shifted to interventions that are closer to the effective altruist consensus. But obviously, readers who disagree with my cause priorities (i.e. longtermism, of a fairly conventional kind) may disagree with that analysis of deference as well.
Relatedly, one pattern I've noticed is that people on the forum - including people who aren't deeply immersed in effective altruist thinking - criticise some longstanding effective altruist practices or strategies with arguments that are unconvincing to me. In such cases, my reaction tends to be that they should have another go and think "maybe they've thought more about this than I have - maybe there is something I've missed?" More often than not, very smart people have thought very extensively about most such issues, and it's therefore unlikely that someone who has thought substantially less about them would be more likely to be right. I think that perspective is missing in some of the forum commentary. But again, whether you agree with me on this will depend on your view of the object-level criticisms. If you think these criticisms are in fact convincing, then you're probably less likely to believe that the critics should defer to the effective altruist consensus.
Thanks for the comment, I think this describes a pretty common view in EA that I want to push back against.
Let's start with the question of how much you have found practical criticism of EA valuable. When I see posts like this [EA · GW] or this [EA · GW], I see them as significantly higher value than those individuals deferring to large EA orgs. Moving to a more practical example; older/more experienced organizations/people actually recommended against many organizations (CE being one of them and FTX being another). These organizations’ actions and projects seem pretty insanely high value relative to others, for example, a chapter leader who basically follows the same script (a pattern I definitely personally could have fallen into). I think something that is often forgotten about is the extremely high upside value of doing something outside of the Overton window, even if it has a higher chance of failure. You could also take a hypothetical, historical perspective on this; e.g. if EA deferred to only GiveWell or only to more traditional philanthropic actors, how impactful would this have been?.
Moving a bit more to the philosophical side, I do think you should put the same weight on your views as on those of epistemic peers. However, there are some pretty huge ethical and meta-epistemic assumptions that a lot of people do not realize they are deferring to when going with what a large organization or experienced EA thinks. Most people feel pretty comfortable deferring based on expertise (e.g. "this doctor knows what a CAT scan looks like better than me", or "GiveWell has considered the impact effects of malaria much more than me"). I think these sorts of situations lend themselves to higher deference. Questions like "how much ethical value do I ascribe to animals" or "what is my tradeoff of income to health" are: 1) way less considered, and 2) much harder to gain clarity on through deeper research. I see a lot of deferrals based on this sort of thing, e.g. assumptions that GiveWell or GPI do not have pretty strong baseline ethical and epistemic assumptions.
I think the number of hours spent thinking about an issue is a somewhat useful factor to consider (among many others), but it is often used as a pretty strong proxy without regard to other factors, e.g. selection effects (GPI is going to hire people with a specific set of viewpoints coming in) or communication effects (e.g. I engaged considerably less in EA when I thought direct work was the most impactful thing than when I thought meta work was most important). I have also seen many cases where people make big assumptions about how much consideration has in fact been put into a given topic relative to its hours (e.g. many people assume more careful, broad-based cause consideration has been done than really has been; when you have a more detailed view of what different EA organizations are working on, you see a different picture).
On the philosophical-side paragraph: totally agree; this is why worldview diversification makes so much sense (to me). The necessity of certain assumptions leads to divergence in kinds of work, and that is a very good thing, because maybe (almost certainly) we are wrong in various ways, and we want to stay alive and open to new things that might be important. Perhaps on the margin an individual's most rational action could sometimes be to defer more, but as a whole a movement like EA would be more resilient with less deference.
Disclaimer: I personally find myself very turned off by the deference culture in EA. Maybe that's just the way it should be though.
I do think that higher-deference cultures are better at cooperating and getting things done, and these are no easy tasks for large movements. Movements with these properties have accidentally done terrible things in the past; others have done wonderful things.
I'd guess there may be a correlation between people who think there should be more deference being in the "row" camp and people who think less in the "steer" camp, or another camp, described here [EA · GW].
I worry a bit that these discussions become anecdotal, and that the arguments rely on examples where it's not quite clear what role deference or its absence played. No doubt there are examples where people would have done better had they deferred less. That need not change the overall picture much.
Fwiw, I think one thing that's important to keep in mind is that deference doesn't necessarily entail working within a big project or org. EAs have to an extent encouraged others to start new independent projects, and deference to such advice thus means starting an independent project rather than working within a big project or org.
I think there are several things wrong with the Equal Weight View, but I think this is the easiest way to see it:
Let's say my odds on some hypothesis are O(H) = 2:1, which I updated from a prior of 1:6. Now I meet someone who A) I trust to be as rational as myself, B) I know started with the same prior as me, C) I know cannot have seen the evidence that I have seen, and D) I know has updated on evidence independent of the evidence I have seen.
They say O(H)=1:2.
Then I can infer that they updated from 1:6 to 1:2 by multiplying by a likelihood ratio of 3:1. And because of C and D, I can update on that likelihood ratio to end up with a posterior of O(H) = 6:1.
The equal weight view would have me adjust down, whereas Bayes tells me to adjust up.
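The odds arithmetic above can be checked in a few lines. This is just a minimal illustration of the comment's numbers; the variable names are mine, not from the original:

```python
from fractions import Fraction

# Shared prior odds and each person's posterior odds on hypothesis H.
prior = Fraction(1, 6)            # O(H) = 1:6, the common starting point
my_posterior = Fraction(2, 1)     # O(H) = 2:1 after my evidence
their_posterior = Fraction(1, 2)  # O(H) = 1:2 after their evidence

# Recover the likelihood ratio each update implies (posterior = prior * LR).
my_lr = my_posterior / prior        # 12:1
their_lr = their_posterior / prior  # 3:1

# Their evidence is independent of mine (conditions C and D), so I can
# apply their likelihood ratio on top of my own posterior.
combined = my_posterior * their_lr

print(combined)  # 6, i.e. posterior odds of 6:1
```

The key point is visible in the last step: because both updates started from the same prior and used independent evidence, the Bayesian move is to multiply likelihood ratios, landing above either individual's posterior rather than averaging between them.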
This post seems to amount to replying "No" to Vaidehi's question, since it is very long but does not include a specific example.
> I won't be able to give you examples where I demonstrate that there was too little deference

I don't think that Vaidehi is asking you to demonstrate anything in particular about any examples given. It's just useful to give examples that illustrate your own subjective experience on the topic. It would have conveyed more information and perspective than the above post.
Whether you should defer or not depends not only on your estimate of relative expertise but also on what kind of role you want to fill in the community in order to increase its altruistic impact. I call this role-based social epistemology, and I really should write it up at length at some point.
You can think of the roles as occupying different points on the production possibilities frontier for the explore-exploit trade-off. If you think of rationality as an individual project, you might reason that you should aim for a healthy balance between exploring and exploiting due to potential diminishing returns to either one. But if you instead take the perspective of "how can I coordinate with my community in order to maximize the impact we produce?" you start to see why specializing could be optimal.
If you are a Decision-Maker, you're optimizing for allocating resources efficiently (e.g. money, work, power, etc.), and the impact of your allocation depends on how accurate your related beliefs are. And because accurate beliefs are so important to your decisions, you should opportunistically defer to people whenever you think they might have better information than you (Aumann-agreement [? · GW] style), as long as you think you're decently calibrated [EA(p) · GW(p)] and you're deferring to advice with sufficient bandwidth [EA · GW]. You should be Exploiting existing knowledge and expertise by deferring to it. But because you frequently defer to others, you may not be safe to defer to in turn due to potential negative externalities associated with information cascades that can be hard to correct.
If you are an Explorer, your job is to optimize for the chance of discovering important insights that can help the community make progress on important open problems. This is fundamentally a different project from just trying to acquire accurate beliefs. Now, you want to actively avoid [LW · GW] ending up with the same belief states as other people to some extent. Notice that the problems are still open, which means that existing tools and angles of attack may be insufficient for the task. Evaluate paradigms/approaches for how neglected they are. Remember, it doesn't matter whether you're right about what other people are right about, as long as you are extremely right about what other people are wrong about. So if you want to maximize the chance that the community ends up solving the problem, you want to coordinate with other explorers in order to search separate parts of the idea-tree. What matters is that the right fruits are picked, not that you end up picking them. We're in a parallel tree search [LW(p) · GW(p)] paradigm, and this has implications for how we individually should balance the explore-exploit trade-off.
If you are an Expert/Forecaster, your job is to acquire accurate beliefs that are safe to defer to. If there's a difficult and important question (crucial consideration) for which better forecasts could marginally improve the careers/donations of a lot of people, this could be an important way to produce impact. Your impact here depends on the accuracy of your beliefs, so unlike the Explorer, you don't have strong reasons to avoid common belief states. Your impact also depends on how safe you are to defer to, because you can potentially do a lot of harm by reinforcing false information cascades. And these considerations are newcomblike [? · GW], so you should act by that rule which, when followed by the proportion of other experts you predict will follow it due to the same reasoning as you, maximizes community impact. Sometimes that means you want to report your independent impressions [? · GW], and sometimes that means you want to share and elicit likelihood ratios instead of posterior beliefs. A common failure mode here is to over-optimize for making your beliefs legible, which in extreme cases turns into a race to the bottom, and in median cases turns into myopic empiricism [EA · GW] where you predictably end up astray because you refuse to update on a large class of illegible (but Bayesian) evidence.
The limiting case of a Decision-Maker always reporting their independent impressions is (roughly) an Expert. But only insofar as it's psychologically feasible to maintain a long-term separation between independent and all-things-considered impressions, and I have my doubts.
What kind of knowledge-work you want to do depends not only on your comparative advantages but also on your model of how the community produces altruistic impact. If on your model community impact is marginally bottlenecked by insights [LW(p) · GW(p)], you should probably consider aiming for ambitious insight-production. If, on the other hand, you think you can have more impact by contributing to marginally better forecasts about which problems are most important to work on, maybe consider aiming to produce deference-safe predictions. And if you just happen to have a bunch of money lying around, you don't have the luxury of recklessly diverging from expert consensus, and you should use everything in your toolbox to make sure you're allocating it efficiently.
No one is purely any of these. The roles are separated by what optimization criteria they use, and you optimize for different things in different areas of your life, and over your lifetime. But I think it's useful to carve out the roles, so you can notice when you need to put which hat on, and the different things that implies for how you should play.
Thanks for detailing this aspect of EA. I think much of the deference culture is driven by early EA orgs like GiveWell, as you mention. There is a tendency to map the strong deference that GiveWell merits in global health onto other cause areas where it may not apply. For instance, GWWC recommends giving to several funds in different cause areas, and the presentation suggests the funds are roughly equal in quality for their respective cause areas. Yet GiveWell has about ~5x more staff than Animal Charity Evaluators, and ~10x+ more than the EA Infrastructure Fund and Founders Pledge's climate team. To the extent that a larger team means more research hours, and more research hours mean better funding decisions, there is a significant difference in funding quality among the different fund recommendations. This difference isn't communicated in public-facing EA media like the GWWC webpage and videos.
As someone who is an expert in a cause area where the EA fund has comparatively little analytical capacity (climate change mitigation), I find the deference to, and marketing of, the climate fund as the most effective giving option a continual source of frustration. I've written about that here [EA · GW] and here [EA · GW]. I'm also worried about people mapping weak deference onto causes where they should have greater deference: many people early in their EA engagement care about climate change as a cause area [EA · GW]. If they have some level of expertise, they may find the climate fund recommendations underwhelming and then incorrectly assume funds in other cause areas have similarly low levels of research behind them. There may be some attrition in getting more people more involved in EA because of this, though it is a tiny niche. I don't think the answer to the comparative deference problem is to do something like delist fund options from the GWWC page. But we do need some way to communicate the differential level of rigor.
I like that you contrast deference with investigation, rather than unilaterism. So many discussions and posts about deference devolve into discussion about unilateralism. Example: https://forum.effectivealtruism.org/posts/Jx6ncakmergiC74kG/deference-culture-in-ea?commentId=epR5HxT6nkdSCtMCf
But arguments against unilateralism can't be applied as arguments against investigation. Investigation grows the intellectual commons. Empirically it's clear there is much to investigate. EAs generally agree that AI risk is the most important problem yet there is no plan to move forward (aside from help out OpenAI and hope that this somehow turns out to be a good idea instead of an apocalyptic one).
I like this breakdown a lot. Another related reason for deferring less and building your own inside view is for figuring out your career within a field.
Choosing research questions, deciding which roles and orgs to apply to, finding role models and plotting a career trajectory, and proposing new projects can be parts of your job in just about any field, and it’ll be hard to do them well if you’re constantly deferring to experts. On niche topics, it’s even difficult to learn who the experts are and what they believe.
Personally I’ve deferred to 80,000 Hours on which high-level cause areas offer the highest potential for impact. But after spending a few months to years learning about a single cause area, I feel much less clueless about the field and have a real inside view.
"the EA (Effective Altruism) movement has a pretty strong deference culture."
Is this some kind of demographic thing? I haven't noticed it except in terms of college students/recent grads being a bit too attached to the idea of working for EA orgs. I defer when I don't feel like I have the appropriate knowledge and can't acquire it in reasonable time, and don't otherwise.
As someone who was a solo-EA, without knowing there was a whole EA movement, for well over a decade, it's really nice to be able to rely on other people's judgment sometimes instead of having to analyze every little thing for myself. But that deference comes from some intuitive sense of cost-benefit tradeoffs involved in investing my time to dive deeper into something, not from a general idea that I should be deferential, and it goes away the moment I sense that the cost-benefit analysis has flipped. And I don't feel like some kind of outlier for doing this. Another EA once called me an SBF bootlicker just for supporting Carrick Flynn, for example.