Stanford Social Innovation Review hosted a similar debate - Should Foundations Increase Their Payouts During Big Crises? Most respondents favored foundations increasing their spending, for reasons analogous to the ones discussed in this post, arguing against Larry Kramer's keystone article in the series. Palfrey's answer has been cited in Vox's Future Perfect.
And if you allow me a conjecture, I wonder if the observed increase in altruistic behavior in collective decision making could be explained by voters applying some non-causal decision theory (either EDT or FDT or whatever) when it comes to elections and social norms.
Btw, what really caught my attention in this reference, more than the "success of democracies relative to autocracies" (which seems sort of assumed by the model), was that other factors (such as income inequality and education) may have an impact, too.
Thanks for this great post. I really appreciated both papers.
However, they made me think about the anti-populist literature in economics (some technocratic checks on majority rule are usually well accepted for fiscal and monetary policies), political science and philosophy - like the Federalist Papers, or, more recently, Garrett Jones's 10% Less Democracy.
Of course, I'm pretty sure democracy is better for the unrepresented than individual decisions made in a market, even if you have some altruistic actors advocating for selfless considerations... but I'm still quite puzzled about under what conditions collective deliberation bends towards (or away from) altruistic or long-term reasoning.
'Good' news: as expected, as real interest rates fall, so does the social discount rate (SDR), increasing the social cost of carbon. (Not a novelty, I know, but monetary policymakers explicitly acknowledging it seems to be good.) Bad news: of course, it still seems to be higher than a normative SDR based on time-neutrality.
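A toy discounting sketch of why a lower SDR raises the social cost of carbon (all numbers invented - the 1% and 3% rates and the $100 damage figure are purely illustrative, not taken from any actual SCC estimate):

```python
# Hypothetical illustration: present value of a fixed climate damage
# occurring 100 years from now, under different social discount rates.
def present_value(damage, rate, years):
    """Discount a future damage back to today at a constant annual rate."""
    return damage / (1 + rate) ** years

damage, years = 100.0, 100  # $100 of damage, 100 years out (made-up numbers)

pv_high = present_value(damage, 0.03, years)  # SDR of 3%: ~$5 today
pv_low = present_value(damage, 0.01, years)   # SDR of 1%: ~$37 today

# A lower SDR makes the same future damage count for much more today,
# which is what pushes up the social cost of carbon.
assert pv_low > pv_high
```

The point is just the mechanical one: the SCC aggregates discounted future damages, so it is very sensitive to the rate used.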
You nailed it - Asimov's and Cixin Liu's classics should be almost compulsory reading. However, it caught my eye that you call Cixin Liu's trilogy the Dark Forest Trilogy, instead of referring to it as something like The Three-Body Problem books, the Trisolaran Trilogy, or Remembrance of Earth's Past. What I enjoy most in these books is the challenge of maintaining something like long-term cooperation. To such a list I'd add something like The Ministry for the Future (someone should add a good review to this forum); though it has wonderful passages, it's sometimes unrealistically optimistic (or even simplistic, along the lines of "capitalism is evil") and takes a lot for granted.
Guys, great post and discussion. I was taking a look at the discussion about Hekla's role... even if the eruption followed the breakdown of those civilizations by half a century, it would likely have had an effect on their prospects for recovery.
First, of course, thanks, C Tilli, for the post, and thanks willbradshaw for these comments. This pierced my mind:
As you say, I'm not sure EA will ever be as comforting as religion – it's optimising for very different things. But over time I hope we will generate community structures and wisdom literature to help manage this tension, care for each other, and create the emotional (as well as intellectual) conditions we need to survive and flourish.
I think my background is the opposite of C Tilli's: I have been an atheist for many years (and still am - well, maybe more of an agnostic, since we might be in a simulation...), but since I found out about EA, I think I became a little bit more understanding towards not only the need for comfort, but also the idea of valuing something that goes way beyond one's own personal value and social circle, that is sought by religious people (on the other hand, I also became a little bit suspicious of some cult-like traits we might be tempted to mimic).
I am sort of surprised we wrote so much, so far, without talking about death and mortality. I know I have intrinsic value, but it's fragile and perishable (cryonics aside); and yet, the set of things I can value extends way beyond my perishable self - actually, my own self-worth depends a little bit on that (as Scheffler argues, it'd be hard not to be nihilistic if we knew humanity was going to end after us), and there's no necessary upper bound for what I can value. I reckon that, as much as I fear humanity falling into the precipice, I feel joy by thinking it may continue for eons, and that I may play a role, contribute and add my own personal experience to this narrative.
I guess that's the 'trick' played by religion that might be missing here: religion 'grants' me some sort of intrinsic value through some metaphysical cosmic privilege (or the love of God) - and this provides us some comfort. But then, without it, all that is left, though enjoyable and worthy, is perishable - transient love, fading joy, endured pain, limited virtue, pleasure... Like Dworkin (who considered this to be a religious conviction - though non-theistic), we can say that a life well-lived is an achievement in itself, and stands for itself even after we die, like a work of art - but art itself will be meaningless when humanity is gone. Maybe altruism is just another way to trick (the fear of) death: when one realizes that "All those moments will be lost in time, like tears in rain. Time to die" one might see it not as realizing some external value, but as an important part of one's own self-worth. (If Blade Runner is too melodramatic, one can use the bureaucrat in Ikiru as an example of this reasoning.)
For whatever reason people who place substantial intrinsic value on themselves seem to be more successful and have a larger social impact in the long term. It appears to be better for mental health, risk-taking, and confidence among other things.
I think this is still an instrumental reason for someone to place "substantial intrinsic value on themselves." Though I have no problem with that, I thought what C Tilli complained about was precisely that, for EAs, all self-concern is for the sake of the greater good, even when it is rephrased as a psychological need for a small amount of self-indulgence. Second, I'd say that people who are "more successful and have a larger social impact in the long term" are "people who place substantial intrinsic value on themselves," but that's just selection dynamics: if you have a large impact, then you (likely) place substantial intrinsic value on yourself. Even if it does imply that you're more likely to succeed if you place substantial intrinsic value on yourself (if only people who do that can succeed), it does not say anything about failure - confident people fail all the time, and the worst way of failing seems to be reserved for those who place substantial value on themselves and end up being successful with the wrong values.
But I wonder if our sample of “successful people” is not too biased towards those who get the spotlights. Petrov didn’t seem to put a lot of value on himself, and Arkhipov is often described as exceptionally humble; no one strives to be an unsung hero.
Though I agree that the marginal utility of income drops a lot after some threshold, and I am not sure about how long people take to adjust their lifestyles to a drop in income, I would like to see a study taking into account the effects of wealth, savings and uncertainty. So yeah, maybe you'll be equally happy if you earn 75k or 100k, but in the latter you'll be better hedged against risks and be able to get additional utility by investing in someone else's welfare (your relatives, or donations).
Thanks for the post. Coincidentally, I was thinking about how I have a strong moral preference for a longer timeline when I saw it. I feel attracted by total utilitarianism, but suppose we have N individuals, each living 80y, with the same constant utility U. Now, these individuals can either live more concentrated (say, in 100y) or more scattered (say, in 10000y) in time; I strongly prefer the latter (I'd pay some utility for it) - even though it runs against any notion of (pure) temporal discounting. My intuition (though I don't trust it) is that, from the "point of view of nowhere", at some point, length may trump population; but maybe it's just some ad hoc influence of a strong bias against extinction. Please, let me know about any source discussing this (I admit I didn't search enough for it).
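A toy version of the comparison (all numbers invented): without discounting, the two scenarios score identically under total utilitarianism, while any positive pure time preference favors the concentrated one - so a preference for the scattered scenario can't be coming from a discounting-style view:

```python
# Toy comparison (invented numbers): N lives of equal utility U, either
# packed into a short era or spread over a long one. Undiscounted total
# utility is identical; pure time discounting favors the concentrated case.
def total_utility(birth_years, u_per_life, discount=0.0):
    """Sum each life's utility, optionally discounted by birth year."""
    return sum(u_per_life / (1 + discount) ** t for t in birth_years)

n, u = 100, 1.0
concentrated = [t for t in range(n)]       # 100 lives within ~100 years
scattered = [t * 100 for t in range(n)]    # 100 lives over ~10,000 years

# Without discounting, the totals are equal.
assert total_utility(concentrated, u) == total_utility(scattered, u)
# With any positive pure time preference, the concentrated era wins.
assert total_utility(concentrated, u, 0.01) > total_utility(scattered, u, 0.01)
```

So whatever grounds the intuition, it has to be something other than a (positive or negative) pure discount rate applied uniformly over time.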
Thanks for this clarifying comment. I see your point - and I am particularly in agreement with the need for evaluation systems for cross-species comparison. I just wonder if a scale designed for cross-species comparison might not be well-suited for interpersonal comparisons, and vice-versa - at least at the same time. Really, I'm more puzzled than anything else - and also surprised that I haven't seen more people puzzled about it. If we are actually using this scale to compare societies, I wonder if we shouldn't change the way welfare economists assess things like quality of life. In the original post, the countries compared were Canada (pop.: 36 mi; HDI: .922; IHDI: .841) and India (pop.: 1.3 bi; HDI: .647; IHDI: .538).
Finally, really, please, don't take this as a criticism (I'm a major fan of CE), but:
We are not evaluating hunter gatherers, but people in an average low-income country. Life satisfaction measures show that in some countries, self-evaluated levels of subjective well-being are low. (Some academics even think that this subjective well-being could be lower than those of hunter gatherer societies.)
First, I am not sure how people from developing countries (particularly India) would rate the welfare of current humans vis-à-vis chimps, but I wonder if it'd be majorly different from your overall result. Second, I am not sure about the relevance of mentioning hunter-gatherers; I wouldn't know how to compare the hypothetical welfare of the world's super-predator before civilization with that of current chimps or current people. Even if I knew, I would take life expectancy as an important factor (a general proxy for how someone is affected by health issues).
Thanks for this. I'm really glad for this milestone, and super proud to be part of it - tbh, it changed my life. I'd like to see something about trends by year. I remember having read some people concerned that the quantity of new members was decreasing. Maybe, together with other info (e.g., from EA survey), we could have an idea about how EA as a whole tends to evolve.
Thanks. I'm glad to see I wasn't profoundly misunderstanding it. Now, I think this is a very important issue: either there's something really wrong with Charity Entrepreneurship's assessment of welfare in different species, or I will really have to rethink my priorities ;)
Maybe I didn't understand it properly, but I guess there's something wrong when the total welfare score of chimps is 47 and, for humans in low middle-income countries, it's 32. Depending on your population ethics, one may think "we should improve the prospects in poor countries," while others may say "we should have more chimps." Or else this scale has serious problems for comparisons between different species.
I wonder if exchange-rate volatility during global recessions (usually, the US dollar and the euro rise relative to the national currencies of developing countries) would add another point, at least for charities located in the developing world. (Personally, since my job is very stable and opportunities for investment scarce, I have been increasing my own donations to account for my declining consumption.)
Nice post! I'd really like to see more on how fiction might publicize an idea and influence people - especially young ones.
And that's why I couldn't stop thinking about Terry Pratchett while I was reading this post; and I'm often surprised that this is not such a salient common reference in this community. When I started reading HPMOR, I thought "Yudkowsky is doing to Rowling what Pratchett did to Tolkien etc." - and of course, Yudkowsky wrote some sort of elegy on the HPMOR blog on the day Pratchett died. You see, I can't avoid thinking I got here because, as a teenager, I wanted to read comic fantasy, and then... I got "empathically entangled" with some characters who became role models, like Dangerous Beans, Brutha, Vimes, Granny Weatherwax, even Death (at least in Hogfather). I think this might happen for some people (finding role models in fiction), but not for everyone, of course.
Thanks for the post. I wonder if one of the great gaps in education (at least for me), that prevents people from becoming more concerned about the longterm future, is the lack of emphasis on civilization collapses - as much as the lack of emphasis on the progress and the risks from the last 110 years.
I would agree that moral improvement is "easy", like saving an extra $100 or running an extra 100m might be easy, but moral excellence? Yeah, Khorton is totally right.
What I realize is that moral excellence is really hard not because of the reasons most people invoke to justify not striving for it ("selfishness is natural", "it's just signaling"), but because, to extend the comparison with mountain climbing, it's like climbing without ever knowing where and when it will end.
Maybe hiking is a better metaphor. It's quite "easy & simple", but... Really, can you climb Aconcagua right now? Without prep? What if there are no maps, compass, or GPS? Wouldn't you prefer to do it with others you can count on?
In that case an external person kicking your butt can be particularly useful, perhaps even more than in other situations. I think this butt kicking thing can be a way of acknowledging and avoiding your own biases and motivated reasoning to stay in harmful situations that stall your career.
This is true, but perhaps it might not generalize so well to everyone - I can imagine the risk of making the butt-kicked person just feel even more pressured. But if you really master the Art of Butt-Kicking (I'd say "softly butt-kicking," but it sounds creepy), I see how this can go well ;)
Great post! We all know encouragement is often great, but I hadn't considered that it might be necessary or more effective in those specific situations. One of the things that caught my attention in your personal experience is that the person was a recent acquaintance. I wonder how friendship might insert other nuances into the process of butt-kicking; I mean, that's what friends are for, but they may end up being more protective (like "Hey, you're a great ukulele player, but maybe you should get your Master's first"), and maybe the butt-kicked person may end up discounting their feedback because of that ("Of course you think I can do anything - look at your Christmas card").
1. You should totally tell that girl (and maybe everyone else) about the drowning child; the real challenge is to find the best way to do that. Now, instead of emphasizing how having a significant other aligned with your goals might improve your prospects, I wonder how it affects your own personal happiness. People don't have to identify as EAs to support you or share your ultimate goals, but it sure helps; this might be demanding, as other people emphasized above, but actually the effect of your personal lifestyle is usually not so big, so you can compromise a little bit if your acquaintances do it, too. The real problem, in my opinion, is that you'll probably live way better if your significant other understands why something is important to you, instead of just accepting it as some sort of peculiar hobby. Now if that significant other loves you because of that...
Plus, the opposite is also true. You may fall in love with someone for their charm, wit & beauty, but passion fades; now if you're with someone because you love what they do and you can in some sense feel a part of it...
I'm definitely outside of my expertise here (I can only provide negative examples); I'd not say "Nuca Zaria: Effective Dating", but I'd advise young people to seriously entertain the idea that their choice of partners might be comparable (from a personal POV) to some decisions on career paths.
2. This problem extrapolates to friends, though in a milder way. I'm profoundly grateful to my EA friends for the way they make me feel comfortable. I've always felt sort of an outsider in my personal social life, but now, with other people, I'm often that guy who stops in the middle of a sentence to refrain from quoting The Precipice or shedding some tears for human suffering and dreams, etc. I don't want to be the one who lends EA a cult-like appearance.
3. I'd totally welcome EA tips on social life in general; not about how to be charming (that's useful, but I learned one trick or two), but focused on how to be happy with this. Besides my own welfare, I believe it could make me more effective; even if I'm not always trying to "convert" my acquaintances, I want to have a positive impact on / through them. Personally, sometimes I admit to my old friends - at least those who I think can sort of understand it - that I'm trying to "use" them to maximize something like general expected utility through our interactions. I don't think that's the optimal strategy, but it's hard to lie to smart friends, and I sort of see this as a higher form of friendship; so they might forgive my lame or cynical comments like "Wow, this wine is totally worth 20 bednets", or "Now you face Global Warming, the Red Dragon, Destroyer of Worlds; roll initiative."
4. MacAskill is just too handsome; it's counterfactually more effective to pick less dreamy characters. I'd prefer Toby Ord, who sees the present as a more hingey moment.
Discussions over local vs. global remind me of the contrast between the performances of two GiveDirectly programs: 100+ (cash transfers for American families), which received US$ 114.3 mi, and Covid-19 Africa, which received US$ 53.7 mi. I can see reasons for GD supporting 100+, and I'm not surprised that US$1 is more likely to be donated to poor Americans than to sub-Saharan Africa, but this made me (and other people, of course, but I speak for myself) wonder whether we can draw a line between "we're using parochialism to promote EA-like goals" and "we're compromising with parochialism, diverting scarce resources and giving up effectiveness". I don't think of this as a main issue, but as a puzzle; it would be interesting to have some research on public criteria or clues about this difference.
That’s the problem with freedom, in an advanced society. What can be done about it?
a. Targeted restrictions: The most natural thought is that we should tightly control just the really dangerous technologies, the ones that could be used to kill millions of people. So far, that’s worked because there aren’t that many such technologies (esp. nuclear weapons). It may not work in the future, though, when there are more such technologies. [...]
b. Defensive technologies: We’ll build defenses against the main threats. E.g., we’ll build defenses against nuclear weapons, we’ll engineer ourselves to resist genetically engineered viruses, etc. Problem: same as above; we may not be able to anticipate all the threats in advance. Also, defense is generally a losing game. It’s easier and cheaper to destroy things than to protect them. That’s why we have the saying “the best defense is a good offense”.
c. Tyranny/the End of Privacy: Maybe in the future, everyone will need to be closely monitored at all times, so that, if someone starts trying to destroy the world, other people can immediately intervene. Sam Harris suggested this in a podcast somewhere. Note: obviously, this applies as well (especially!) to government officials.
d. A better alternative . . . ?
Someone please fill in (d) for me. Thanks.
I don't think (c) works so much better than the others. It implies a single point of failure and bad incentives due to lack of accountability, besides the really hard problem of monitoring everyone.
Transhumanists would say (d) is super AGI, but that's basically (c) with more tech.
(Interplanetary civilization would possibly solve it... but as Huemer remarked, we're closer to destruction than to spreading through the galaxy)
Policy Action 11: Ensuring Responsibility, Accountability and Privacy 94. Member States should review and adapt, as appropriate, regulatory and legal frameworks to achieve accountability and responsibility for the content and outcomes of AI systems at the different phases of their lifecycle. Governments should introduce liability frameworks or clarify the interpretation of existing frameworks to make it possible to attribute accountability for the decisions and behaviour of AI systems. When developing regulatory frameworks governments should, in particular, take into account that responsibility and accountability must always lie with a natural or legal person; responsibility should not be delegated to an AI system, nor should a legal personality be given to an AI system.
I see the point in the last sentence is to prevent individuals and companies from escaping liability due to AI failures. However, the last bit also seems to prevent us from creating some sort of "AI DAO" - i.e., from creating a legal entity totally implemented by an autonomous system. This doesn't seem reasonable; after all, what is a company if not some sort of artificial agent?
[epistemic status: very insecure, but I've been thinking about it for a while; there's probably a more persuasive argument out there]
I think you can easily extrapolate from a Kantian imperfect duty to help others to EA (but I understand people seldom have the patience to engage with this point in Kantian philosophy); also, I remember seeing a recent paper that used normative uncertainty to argue, quite successfully, that a deontological conception of moral obligation, given uncertainty, would end up in some sort of maximization. Other philosophers (Shelly Kagan, Derek Parfit) have persuasively argued that plausible versions of the most accepted moral philosophies tend to collapse into each other.
It'd be wonderful if someone could easily provide an argument reducing consequentialism, deontology and virtue ethics into each other. People could stop arguing like "you can only accept that if you're a x-utilitarian...", and focus on how to effectively realize moral value (which is a hard enough subject).
My own personal and sketchy take here would be something like:
To consistently live with virtue in society, I must follow moral duties defined by social norms that are fair, stable and efficient - that, in some way, strive for general happiness (otherwise, society will change or collapse).
To maximize general happiness, I need to recognize that I am a limited rational agent, and devise a life plan that includes acquiring virtuous habits, and cooperating with others through rules and principles that define moral obligations for reasonable individuals.
To act taking Reason in me as an end in itself and according to the moral law, I need to live in society, and recognize my own limitations and my dependence on other rational beings, thus adopting habits that prevent vice and allow me to be recognized as a virtuous cooperator. To consistently do this, at least in scenarios of factual and normative uncertainty, implies acting in a way that can be described as restrictedly optimizing a cardinal social welfare function.
Should donations be counter-cyclical? At least as a "matter of when" (I remember a previous similar conversation on Reddit, but it was mainly about deciding where to donate). I don't think patient philanthropists should "give now instead of later" just because of that (we'll probably face worse crises), but it seems like frequent donors (like GWWC pledgers) should consider anticipating their donations (particularly if their personal spending has decreased) - and also take into account expectations about future exchange rates. Does it make any sense?
I'd really love to know what other EAs think of it. I'm very unsure about how useful it is going to be, particularly since the US left the organization in 2018. But it's the first Recommendation of a UN agency on this, the text addresses many interesting points (despite greatly emphasizing short-term issues, it does address "long-term catastrophic harms"), I haven't seen many discussions of it (except for the Montreal AI Ethics Institute), and the deadline is July 31.
I think people already do some of it. I guess the rhetorical shift from x-risk reasoning ("hey, we're all gonna die!") to longtermist arguments ("imagine how wonderful the future can be after the Precipice...") is based on that.
However, I think that, besides cultural challenges, the greatest obstacle for longtermist reasoning in our societies (particularly in LMICs) is that we have an "intergenerational Tragedy of the Commons" aggravated by short-term bias (and hyperbolic discounting) and the representativeness heuristic (we've never observed human extinction). People don't usually think about the longterm future - but, even when they do, they don't want to trade their individual-present-certain welfare for a collective (and non-identifiable), future and uncertain welfare.
I find DeepL more useful because, unlike Google Translate, I don't have to slice my text into 5k-character chunks (though I often appeal to Google and Linguee when I want to check small excerpts). It has provided me with a better experience than Microsoft Word's translation tool, too.
Sure, I added some remarks on how we used it to translate some EA-related material. But, honestly, it's basically a handy guide.
But it's hard for me to see how, you know, writing a treatise of human nature would score really highly in an EA oriented framework. As assessed ex-post that looked like a really valuable thing for Hume to do.
Actually, there are a lot of EAs researching philosophy and human psychology.
I think Collison's conception of EA is something like "GiveWell charity recommendations" - this seems to be a common misunderstanding shared by most non-EA people. I didn't check the whole interview, but it seems weird that he doesn't account for the contrast between what he had just said about EA and his comments on x-risks and longtermism.
Sorry, I should have been more clear: I think "treating attacks on common political knowledge by insiders as being just as threatening as the same attacks by foreigners" is hard to build support for, and may imply some risk of abuse.
There's even a specific term I can't recall for intentional changes in the environment that a social group would make to domesticate a landscape and provide services for future. It will take me some time to find it.
On the other hand, besides the specifics of strong longtermism, I guess that the conjunction of these ideas is pretty recent: a) concern for humanity as a whole, b) a scope longer than 150 years, c) the existence of a trade-off between present and future welfare, d) the balance is tipped in favor of the long term. [epistemic status: just an insight; it would take me too long to look for a counter-example]
I'd like to have read this before our discussion:
In other words, the same fake news techniques that benefit autocracies by making everyone unsure about political alternatives undermine democracies by making people question the common political systems that bind their society.
But their recommendations sound scary:
First, we need to better defend the common political knowledge that democracies need to function. That is, we need to bolster public confidence in the institutions and systems that maintain a democracy. Second, we need to make it harder for outside political groups to cooperate with inside political groups and organize disinformation attacks, through measures like transparency in political funding and spending. And finally, we need to treat attacks on common political knowledge by insiders as being just as threatening as the same attacks by foreigners.
I'm not sure I can help you, but I thank you for this post - it made me include ALLFED in my donation plans.
Should I give more than 10% this year, due to COVID-19?
Well, it won't hurt anyone if you donate more than what you pledged for. I pondered on a similar issue, and have decided to donate to Covid-related charities what I've saved due to my decrease in consumption. It feels kind of "fair".
And there seem to be good arguments for mostly investing, letting interest compound, and giving a lot later (or setting up a trust or something to do so on one’s behalf).
Please let me know if you change your mind after reading Trammell's argument. At least for me, in my home country, it is very complex to invest in such a volatile scenario. I'm probably biased here; I have already lost a significant portion of my savings (which was dumb, because I knew Covid was coming), and my first thought was "I should have given it all to AMF."
c) Something even broader, like Peter Turchin's secular cycles (or the more accepted Kondratieff cycles, if you don't like something resembling Hari Seldon's psychohistory). This inequality-polarization-populism-conflict trend seems to be as old as Urukagina's rule.
2) Do you think the current issues in American universities are more comparable to the Cultural Revolution than to May '68 in France (which led to social disruption) - or maybe other examples of student activism? This seems to be historically more common. A very important disanalogy with the Cultural Revolution is that it was perceived to be fueled by the Great Leader, which is not presently happening in any student activism I'm aware of.
I have the intuition that with a volatile dollar price it doesn't always make sense to donate to EA recommended charities and perhaps donors could allocate better their donations by donating locally
1. Actually, if you're from a poor country and use the current TLYCS calculator, you likely have to be rich for them to recommend you to donate a significant portion of your income.
2. I have mixed intuitions here, maybe someone could better disentangle them: a) if my currency's exchange rate against the US dollar goes from 1:2 to 1:4, my donations apparently lose half of their value; b) however, if this movement is global (because exchange-rate markets overvalue the US dollar, due to the uncertainties caused by the pandemic), then probably the currency in the countries receiving aid will drop, too - so, on average, everything remains the same; c) due to recession, people donate less, thus saving money to donate later may have a cyclical effect.
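A toy illustration of intuitions (a) and (b) above (all figures invented; exchange rates are quoted as local-currency units per US dollar):

```python
# Invented figures: how exchange-rate moves affect a donation's value.
def usd_value(local_amount, local_per_usd):
    """Convert a local-currency donation into US dollars."""
    return local_amount / local_per_usd

donation = 100.0  # donor's local currency

# (a) Donor's currency falls from 2 to 4 per dollar: dollar value halves.
before_usd = usd_value(donation, 2.0)   # 50 USD
after_usd = usd_value(donation, 4.0)    # 25 USD
assert after_usd == before_usd / 2

# (b) But if the recipient's currency also halves against the dollar
# (say, hypothetically, from 10 to 20 units per USD), the donation's
# purchasing power at the destination is unchanged.
before_recipient = before_usd * 10.0    # 500 recipient-currency units
after_recipient = after_usd * 20.0      # 500 recipient-currency units
assert before_recipient == after_recipient
```

So whether a global dollar rally actually hurts the donation depends on whether the recipient country's currency moved proportionally - which is an empirical question, not something the arithmetic settles.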
EA recommends policy careers but I suspect that it's an even more important path in LMICs, where policies are weaker, policymakers are even less evidence based and where institutions have a lot more potential to improve.
I totally agree with that. But LMICs have their own peculiarities and serious governance issues; for instance, I haven't found 80kh advice on public policy that is applicable to someone beginning a civil service career in Brazil. It'd probably be impactful to find organizations with more local expertise.
I won't convince my friend's uncle to donate to Against Malaria but I could convince him to donate to a colombian charity
I don't know how much it scales, but in Brazil, Doebem offers to transfer donations to GiveWell charities (AMF, GD and SCI), and also to Brazilian charities recognized as transparent that have had their impact previously evaluated by international researchers (though not with the same rigor as GW). Besides, they have experimented with direct transfers during the pandemic.
On the other hand, in LMICs, I think many people are often suspicious of local charities they don't have direct contact with, and might be more trusting of recognized foreign charities - with established reputations and rigorous evaluation. For example, when I talk about GD, people usually say "great idea"; but when I mention doedireto, I face all kinds of questions: "how can you ensure the money gets to the right person? or that they won't spend it on drinks?" etc. This is not unjustified, considering the bad rep the charity sector may have in some circles.
I wonder if there is a bias when EA talks about problems not being “neglected” enough when dismissing some cause areas or focus topics
2. This might lead to a selection bias - we'll end up focusing on projects that might be easier to evaluate; this is often compared to that joke where an economist searches for her keys under the lamppost because that's the only place she can see. I think most people working with charity evaluation in EA are aware of that; on the other hand, requiring no evidence would likely lead to bad incentives, and you still need some evidence to assess the opportunity costs of a project.
3. I actually think improving women's participation in LMIC governments (and leading positions in general) would be a good cause precisely because (epistemic status: guess based on anecdotal experiences and some light readings on organizations and management) it would improve institutional decision-making (besides, of course, mitigating discrimination). It would be interesting to see a more profound assessment of this area.