Btw, I just noticed that the GCR Act is followed by Subsection B - Technological Hazards Preparedness and Training, which nobody is talking about...
And preceded by Sec. 7201-7211: Intragovernmental Cybersecurity Information Sharing Act and Sec. 7221-7228: Advancing American AI Act
Hi. I'll have to present WWOTF's first chapter to a class of philosophers and economists... I was wondering if someone has any ".pptx" about the book they'd be willing to share, pretty plz? 😅
Thanks for this post.
Just one remark though:
The enactment of the Global Catastrophic Risk Management Act
This links to the original proposal. However, as explained by Matt Boyd, the bill that was passed (with some changes - such as assigning responsibility to Homeland Security instead of the President) is part of the National Defense Authorization Act for Fiscal Year 2023 (p. 1290).
Thanks. Great post, btw. May I translate a part of it? And why don't you post it here on the EA Forum?
Btw, isn't this a reference to Hemingway's The Old Man and the Sea?
I haven't reviewed other comments yet, but this reminded me that many years ago President Lula said (I think during an interview or debate, while running for a second term) that every fisherperson knows that, when one goes fishing, it's necessary to bring along the proper equipment - and a prepared meal or snack to sustain the time and effort the fishing might take. You don't fish hungry, much less starving.
I can't find the precise source because any Google search gets flooded with irrelevant materials - actually, there are too many sources linking Lula and his policies (Fome Zero, Bolsa Família, etc.) to this proverb, and his team has often tried to reformulate it in ways like "we'll give fish PLUS teach fishing". But I think nothing trumps this old extended metaphor (and the way it was phrased really made it seem like the credit belongs to him - but that's the talent of a populist, to display the wisdom of the average Joe).
Btw, "Lula" means "squid" in Portuguese, a delicacy for fisherpeople- maybe another evidence of nominative determinism?
Thanks for this. Right now, I just wanted to remark that I loved the prize here.
There've been many contests in the EA-sphere awarding awesome financial prizes... while for many people here, I guess that doing something great (and then receiving props for it) is incentive enough.
(Perhaps you should call this "innovative proverb maker of the year".)
Thanks for this. I really think we should have more paper summaries like this, on a regular basis.
There's a point that caught my attention:
Longtermism, aggregation, and catastrophic risk (Emma J. Curran)
[…]
This argument relies on an aggregative view where we should be driven by sufficiently many small harms outweighing a smaller number of large harms. However there are some cases where we might say such decision-making is impermissible e.g. letting a man get run over by a train instead of pulling a lever to save the man but also make lots of people late for work. One argument for why it’s better to save the man from death is the separateness of persons - there is no actual person who experiences the sum of the individual harms of being late - so there can be no aggregate complaint.
I really liked this paper and its whole argument. On the other hand, and here I'm probably even going against the usual deontologist literature, I'm not sure that the problem with these counter-intuitive examples of aggregating small harms / pleasures is aggregation per se, but rather that in such cases hedonist aggregation tends to conflict with other types of aggregation – such as through a preference-based ordinal social welfare function (for instance, if every individual prefers a slight delay to having someone killed, then nobody should be killed) – or that they might violate something like a Golden Rule (if I wouldn't want to die to avoid millions of minor delays, then I must not want to let someone die to avoid small delays). I suspect that just saying, like Rawls and Scanlon etc., that aggregation violates "separateness of persons" turns an interesting discussion into a "fight between strawmen".[1]
[1] EAs sometimes ridicule people for siding with deontologists in such dilemmas. Rob Wiblin once said to A. Mogensen (during an 80,000 Hours podcast interview) that:
“[...] at least for myself, as I mentioned, I actually don’t share this intuition at all, that there’s no number of people who could watch the World Cup where it would be justified to allow someone to die by electrocution. And in fact, I think that intuition that there’s no number is actually crazy and ridiculous and completely inconsistent with other actions that we take all the time.”
If you agree with Rob’s statement, ask yourself questions like:
a) Would you die to allow millions to watch the World Cup?
b) Would you want someone to die to allow you to watch the World Cup - if that’s the only way?
c) Would you support a norm (or vote for a law) stating that it is OK to let people die so we can watch the World Cup?
d) If we were to vote to let Bernard die for us to watch the World Cup, would you vote yes?
e) Do you think others would (usually) answer “yes” to these previous questions?
Nothing here contradicts the fact that we do let people die (though in situations where they voluntarily chose to take some risk in exchange for fair prior compensation) so we can watch the World Cup; nor even that the world is a "better place" (in the sense that, e.g., there's more welfare) if people die for our watching the World Cup. It might indeed be the optimal policy.
But I think that, if you answered "no" to some of the questions above, you are not entitled to say that this intuition is "crazy and ridiculous". After all, if you prefer saving a life over watching the World Cup, and if you think others would reason similarly, why do you think it is "crazy" to state that we should interrupt the show to save one person?
It's true that I might be conflating individual preferences and moral preferences / judgments here, but I am not sure how easy it is to separate them; I'd probably lose any pleasure in watching a match if I knew someone unwillingly died for it – and I would certainly not say "Well, too bad; but by the Sure Thing Principle, it should not affect my preferences – may they not have died in vain". Just like in the literature on the connection between perception and judgment, particularly when it comes to providing context, I think our individual preferences and mental states are deeply connected to more abstract judgments regarding norms.
Sorry for this long footnote; since it's not exactly related to the core of the post, I felt it'd be inappropriate to insert it in the main comment.
This is awesome. Thanks for the post.
However, I'd really like to know more about how this (and the corresponding Brussels effect) could interact with topics such as:
- ESG financial disclosure regulations regarding animal welfare;
- Countries' resistance to complying with EU agencies' recommendations;
- Countries' internal laws and practices disregarding animal rights / welfare - e.g., Portugal's Constitutional Court striking down criminal laws on torturing animals.
I couldn't help noticing that TIME didn't mention her case.
It turns out that I changed my mind again. I don't see why we couldn't establish Pigouvian taxes for (some?) c-risks. For instance, taxing nuclear weapons (or their inputs, such as nuclear fuel) according to some tentative guesstimate of the "social cost of nukes" would provide funding for peace efforts and possibly even be in the best interest of (most of?) current nuclear powers, as it would help slow down nuclear proliferation. This is similar to Barratt et al.'s paper on making gain-of-function researchers buy insurance.
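To make the idea concrete, here's a back-of-the-envelope sketch of how such a tax could be set. Every number below is a placeholder I made up for illustration, not an actual estimate of nuclear risk or damages:

```python
# Hypothetical sketch of a Pigouvian "social cost of nukes" tax.
# Both inputs are invented placeholders, not real estimates.

P_MARGINAL = 1e-7    # assumed extra annual probability of catastrophe added per warhead
SOCIAL_COST = 1e15   # assumed damage of a nuclear catastrophe, in dollars

def pigouvian_tax_per_warhead_year(p_marginal: float, social_cost: float) -> float:
    """A Pigouvian tax prices the externality: expected external cost = probability x damage."""
    return p_marginal * social_cost

tax = pigouvian_tax_per_warhead_year(P_MARGINAL, SOCIAL_COST)
print(f"Illustrative tax: ${tax:,.0f} per warhead-year")
# -> Illustrative tax: $100,000,000 per warhead-year (with these made-up inputs)
```

The hard part, of course, is estimating the inputs; the point is only that the same externality-pricing logic behind carbon taxes applies in principle.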
Hedonist Utilitarian Philosopher screams in agony: "Whaaaaat? So we have actual evidence of the existence of immortal rational sentient beings that can regenerate almost any damage and YOU WANT TO KILL them? LARRY, YOU MONSTER!"
An objection to the non-identity problem: shouldn't disregarding the welfare of non-existent people preclude most interventions on child mortality and education?
One objection against favoring the long-term future is that we don't have duties towards people who don't yet exist. However, I believe that, when someone presents a claim like that, what they probably mean is that we should discount future benefits (for some reason), or that we don't have duties towards people who will only exist in the far future. But it turns out that such a claim apparently proves too much: it proves, for instance, that we have no obligation to invest in reducing the mortality of infants less than one year old over the next two years.
The most effective interventions in saving lives often do so by saving young children. Now, imagine you deploy an intervention similar to those of the Against Malaria Foundation - i.e., distributing bednets to reduce transmission. At the beginning, you spend months studying, then preparing; then you go to the field and distribute bednets; and one or two years later you evaluate how many malaria cases were prevented relative to a baseline. It turns out that most of the averted deaths (and disabilities and years of life gained) correspond to kids who had not yet been conceived when you started studying.
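A toy simulation (all parameters invented for illustration, not AMF data) shows why: if preparation takes about a year and the nets protect infants over the following two years, the bulk of the children saved are conceived after the project starts.

```python
# Toy model with invented numbers: the project starts at month 0, spends
# ~12 months on study and preparation, then averts deaths of children under
# ~1 year old between months 12 and 36. A child who is `age` months old at
# month `t` was conceived around month t - age - 9.
import random

random.seed(0)
averted = [(random.uniform(12, 36), random.uniform(0, 12)) for _ in range(10_000)]
conceived_after_start = sum(1 for t, age in averted if t - age - 9 > 0)
print(f"{conceived_after_start / len(averted):.0%} of averted deaths are of "
      "children conceived after the project began")  # roughly 85% under these assumptions
```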
Similarly, if someone starts advocating an effective basic education reform today, they will only succeed in enacting it some years from now - thus we can expect most of the positive effects to happen many years later.
(Actually, for anyone born in the last few years, we can expect that most of their positive impact will affect people who are not born yet. If there's any value in positively influencing these children, most of it will accrue to people who are not yet born.)
This means that, at the beginning of this project, most of the impact corresponded to people who didn't exist yet - so, on that view, you were under no moral obligation to help them.
Is this a setback in animal welfare laws? https://www.publico.pt/2023/01/18/sociedade/noticia/ministerio-publico-pede-inconstitucionalidade-norma-lei-maus-tratos-animais-2035566 I was surprised that Portuguese constitutional legal doctrine prevents criminalizing the torture of animals.
There are quite definitive precedents: https://www.tribunalconstitucional.pt/tc/acordaos/20210867.html
Oh I was hoping you would propose this: https://www.smbc-comics.com/comic/the-end-of-history
Sorry for the joke, I actually like your idea. But the military does indeed sort of prevent wars by holding military exercises that expensively signal strength and capabilities. That's how we have prevented WW III so far. So the crux is doing this without such economic waste.
Thanks for this review. For those interested in the subject, I'm linking another post commenting on a previous review: https://forum.effectivealtruism.org/posts/gcPp2bPin3wywjnGH/is-space-colonization-desirable-review-of-dark-skies-space
On UAP and glitches in the matrix: I sometimes joke that, if we ever build something like a time machine, we should go back in time and produce those phenomena as pranks on our ancestors, or to "ensure timeline integrity." I was even considering writing an April Fool's post on how creating a stable worldwide commitment around this "past pranks" policy (or, similarly, committing to go back in time to investigate those phenomena and "play pranks" only if no other explanation is found) would, by EDT, imply lower probabilities of scary competing explanations for unexplained phenomena - like aliens, supernatural beings or glitches in the matrix. (Another possible intervention is to write a letter to our superintelligent descendants asking them to, if possible, go back in time to enforce that policy... I mean, you know how it goes.)
(crap I just noticed I'm plagiarizing Interstellar!)
So it turns out that, though I find this whole subject weird and amusing, and don't feel particularly willing to dedicate more than half an hour to it... the reasoning seems sound, and I can't spot any relevant flaws. If I ever find myself having one of those experiences, I'd prefer to think "I'm either hallucinating, or my grandkids are playing with the time machine again".
I really can't evaluate all of your claims, but I'd personally like to see more native English speakers grasping how lucky they are (the disease seems to be worse in common law countries). See Brookings:
Actually, the report you linked blames it mostly on increases in income and housing prices.
We do find empirical evidence consistent with two hypotheses. The first is that the demand for more expensive Interstate highways increases with income, as either richer people are willing to pay for more expensive highways or in any case they can have their interests heard in the political process. The doubling in real median per capita income over the period accounts for roughly half of the increase in expenditures per mile over the period. Also consistent with this, and with the finding that the increased costs are due to increased inputs, not per unit input prices, we show that states construct more ancillary structures, such as bridges and ramps, and more wiggly routes in later years of the program. Controls for home value also account for a large proportion of the temporal increase; taken together, income and home value increases account for almost all the temporal change in costs.
And about "citizen voice":
The second hypothesis with which our data are consistent is the rise of “citizen voice” in the late 1960s and early 1970s. [...] Some of these tools, such as environmental review, were directly aimed at increasing the cost of government behavior, by requiring the government to fully internalize the negative externalities of Interstate construction. Other new tools, such as mandated public input, could yield construction of additional highway accoutrement (such as noise barriers), create delays, or increase planning costs.
I think that's a problem that countries without the rule of law don't have... but then they have other obstacles to development.
But most of all, I'm sorry, but I'm sort of confused about what exactly your point is here - e.g., whether it's about legal interference in general, or only about rights-based litigation, or about how such interference makes us lag behind those who don't have it. More precisely, I'm in doubt between something like:
i. "we shouldn't create legal interferences with tech development, they are inefficient and slow down economic development - and there will be less welfare in the long-run";
ii. "we shouldn't create legal interferences with tech development, otherwise we'll be surpassed by countries and organizations who don't mind about them";
iii. "we shouldn't create rights-based legal interference with tech development, as it increases litigaton and is more inefficient than top-down regulation, or than self-regulation".
I disagree with (i), because I think that the costs of slowing down tech development to mitigate c-risks are worth it. Perhaps you disagree with me, but then I see no point in discussing it here (I mean, my question assumes that you are willing to incur some costs to mitigate c-risks).
I sort of disagree with (ii), because I think that "hawkish arms-race" reasoning is precisely one of the main factors driving c-risks up; on the other hand, I have to reckon with the risk of playing dove and with "regulatory arbitrage": regulation is ineffective if companies can just move somewhere it doesn't apply (or if they lose market share to companies in those places), and the risks remain. But there might be ways to mitigate this problem - e.g., the EU taxing imports to prevent carbon leakage.
I feel tempted to agree with (iii); but then, I'm not sure if that's an option at all, at least for now. Quite the opposite: top-down regulation will often come after precedents recognizing some rights, and self-regulation usually aims to respond to litigation and reputational risks.
Thanks for the post. I was talking to Leo yesterday ... do you think it'd be interesting to have something like "country profiles" for EA?
So... your point is that it could lead to justices (i) curtailing AI development and (ii) risking the whole semiconductor industry worldwide by locking it in Taiwan? That's a long slippery slope (but you could say it's not much longer than climate change leading to famine...). First, I'm not sure I want Taiwan to become "replaceable" as a leading manufacturer, as that would make it more likely to be invaded (though it would decrease the odds of a confrontation between nuclear powers) - but that's beside the point. Second... yeah, I think there's a risk of abuse in inflating legal concepts. But I'm not sure courts are that powerful, nor that daring. I could imagine a judge ruling against things they can understand may lead to harm, such as gain-of-function or "murderbot" research, especially if there's an example where it has caused harm, but not against tech development in general. But ultimately, yes, I think the risk of court abuse is one of the problems in extending legal doctrines to catastrophic risk mitigation.
I partially agree. This "human rights inflation" has been a powerful critique of legal activism in jurisprudence. I'm afraid the UN should be way more specific about what could be regarded as a violation of such rights. On the other hand, if one truly believed that, e.g., nuclear weapons proliferation risks leading to a global catastrophe, then why couldn't one say that it risks violating human rights, too - just like, e.g., failing to deter torture? It's certainly not a matter of impact... is it a matter of probability?
Actually, I was just recalling that [spoiler alert] Jalaketu was willing to destroy the world in order to destroy Hell. But he eventually compromised, causing only a nuclear war and killing his kids so that he would go to Hell and destroy it.
Thanks for the post. It's a good review I plan to consult whenever I need a reference on farmed animal welfare. However, I believe that, at least for now, focusing on chicken prices / demand will likely be more effective than throwing disturbing truths at people. If you have any particular suggestions regarding this, I'd like to hear them.
Thanks for the post; I mostly agree with it, and I don't see it as "out of the box" at all. I think there's nothing more effective for farmed animal welfare than increasing the prices of animal products relative to vegetables, and the trends in beef and chicken prices and consumption in the last few years support your premises. However, I wonder if you're considering this obstacle: the beef industry has a lot more in common with the chicken industry than with vegans. My prior is that they're more likely to support legislation with incentives for animal farming in general than to encourage regulation that increases the price of chicken products. And an increase in beef production might make vegetables more expensive, precisely because beef requires so many resources. You may well end up with more than you asked for.
Thanks for this impressive investigation! Do you intend to publish it on RP's research page? I'd like to share it with (or cite it to) people working on deforestation and climate change, and I suspect it'd look more legit to non-EA people if it weren't on the Forum.
There's also something like an optics problem... at least for outsiders (by which I mean most people, including myself): when an AI developer voices concerns over AI safety / ethics and then develops an application without having solved those issues, I feel tempted to conclude that either it's a case of insincerity (and talking about AI safety is a case of ethics washing, or of attracting talent without increasing compensation)... or people are willingly courting doom.
Totally unrelated to the core of the matter, but do you intend to turn this into a frontpage post? I'm a bit inclined to say it'd be better for transparency, both to inform others about the bans and to deter potential violators... but I'm not sure; maybe you have a reason for preferring the shortform (or you'll publish periodic updates on the frontpage).
Sorry if this is a dumb question, but there are so many comments (and I'm on my phone) that I got confused: you, Lauren, removed your remark regarding spontaneous abortions because it was met by the appendix. However, while Ariel does "bite the bullet" of Ord's Scourge paper, I don't see it making any difference. I'd expect interventions to reduce miscarriages to be more tractable and scalable, and way less costly - so more effective than reducing intentional abortions. We don't have to save all of the 200m embryos, just as there's no way of saving all those lost to voluntary abortions. So, Ariel, the disclaimer that the research only focuses on voluntary abortion reduction sounds ad hoc - as if AMF said they were only focused on saving lives through bednets, which is not accurate: instead, they think that such projects are, for them, the best way to save lives. Or perhaps there's a way to rephrase and clarify the disclaimer to account for this; e.g., you're less concerned about abortion as a cause area and more about a moral constraint on projects - i.e., just as we shouldn't fund projects leading to labor abuse, we shouldn't fund projects leading to abortion. Sometimes I think that's your point. Or is there a particular moral difference between preventing a (statistically predictable) spontaneous abortion and preventing an intentional one, per se?
Second, if we bite that bullet, and you actually see abortion reduction as a cause area, there's another Scourge unmet by the appendix: discarded frozen embryos. It would be even easier to proscribe that, or (if you think that discarding per se is the issue, rather than just keeping them) to demand they be kept frozen indefinitely. What do you think about it, Ariel? Please, sorry if I'm missing something here.
Still on billionaire philanthropy, regarding Question "6. Permissible donor influence": it'd be interesting to consider not only how depending on a small, concentrated set of donors may pose a risk of undue influence, but also how it creates a problem of "few points of failure":
a) With the FTX collapse, the crypto financial crisis and low tech stock prices... EA suddenly appears to be more funding-constrained than one year ago, and needs to manage reputational risks, right after having made great plans when people thought there was a "funding overhang".
b) SBF had actually made our major sources of funding appear less concentrated - we went from "relying mostly on Open Phil" to "... also on FTX".
First, I'd like to thank you both for this instructive discussion, and Thorstad for the post and the blog. Second, I'd like to join the fray and ask for more info on what the next chapters in the climate series might be. I don't think it is a problem if you only focus on "Ord vs. Halstead", but then perhaps you should make that more explicit, or people may take it as the final word on the matter.
Also, I commend your analysis of Ord, because I've seen people take his estimate as authoritative (e.g., here), instead of as a guesstimate updated on a prior for extinction. However, to be fair to Ord, he was not running a complex scenario analysis, but basically updating from the prior for human extinction, conditioned on no major changes. That's very different from Halstead's report, so it might be proper to have a caveat emphasizing the differences in their scopes and methodologies (I mean, we can already see that in the text, but I wouldn't count on a reader's inferential capacity for this). Also, if you want to dive deeper into this (and I'd like to read it), there's already a thriving literature on climate change worst-case scenarios (particularly outside the EA space) that perhaps you'd like to check - especially on climate change as a GCR that increases the odds of other man-made risks. But it's already pretty good the way it is now.
You're right. OCB didn't say such a thing. I included a disclaimer above, instead of erasing the comment. But it's still unclear, at least to me, (a) why OCB couldn't disclose the donors' identities, and (b) why he claimed that the funds were specifically for this purchase, implying that effective altruists couldn't spend the £15m any other way.
Thanks for finally providing an answer to this, but it's still unclear why Owen Cotton-Barratt [see the edit] said the donor wanted to remain anonymous. [EDIT: OCB didn't say such a thing. But it's still unclear (a) why he couldn't disclose the donors' identities, and (b) why he claimed that the funds were specifically for this purchase, implying that effective altruists couldn't spend the £15m any other way.]
I hereby name you Effective Cassandra. You should brag as much as you like.
On the other hand, we are assuming UBMO is willing and has enough slack to come to the rescue. That may be untrue if the heads of UBMO are particularly disappointed, or momentarily without liquidity (because of tech stocks, etc.). In the latter case, nobody would broadcast that there's a problem, because it'd only make things worse, and soon it'd all be over.
Effective Cassandra
A forecasting contest (in addition to the EA criticism contest) to answer "what is the worst hazard that will happen to the EA community in the next couple of years?" (or whatever period people think is most adequate). The best responses, selected by a jury on the basis of their usefulness and justification, receive the first prize; the second prize is given two years later to the forecaster who predicted the actual outcome.
Another neglected way out is to precisify the notion of causality used in the DDA (and in ordinary language) so as to include conceptions of explanation and credit attribution, thus exempting agents from liability for random effects. MacAskill and Mogensen come close to contemplating this point in section 3.3, but then they focus on the Arms Trader example, which is close to a strawman here, and conclude:
We grant that it sometimes sounds wrong to say that you do harm to another when you initiate a causal sequence that ends with that person being harmed through the voluntary behavior of some other agent. But so far as we can see, this is entirely explained in terms of pragmatic factors like those discussed earlier: that is, in terms of conversational implicatures that typically attach to locutions associated with the ‘doing’ side of the doing/allowing distinction.
The problem with the voluntary behavior of others is not that it would necessarily exempt you from responsibility, but that it would often make your action causally irrelevant. The claim "Agent X's action a caused event e" is ambiguous between:
- (i) X's action a belongs to the causal chain that led to e; and
- (ii) in addition to (i), a increased the probability of e happening.
(i) is not a very useful notion of causality – basically every state of the world causes the next states (in the corresponding lightcone), because every event has repercussions.
Thus, when we say that carbon emissions (via climate change) caused the floods in Lisbon in the last few days, we are not stating the obvious fact that, because of the chaotic nature of long-term climate trends, any different world history would have implied distinct rain patterns. We are rather saying that carbon emissions (and global warming) made such extreme events more likely. Also, this is not straightforwardly connected to predictability, as something might be hard to predict, but easy to explain in hindsight.
It's kind of intuitive that we normally use the more refined notion of causality in practical reason; so, though we might blame an arms trader, we don't even consider blaming the whole supply chain that made some murder possible. Thus, when we say that all of my actions will cause the identity of some future people, we are talking about (i). But the relevant notion of causality for the DDA is (ii); in this sense, I may cause the identity of some future people by making some genetic pools more likely than their alternatives (for instance, by having kids, or by working with fertilization). So my mother's school teacher didn't cause my birth in any way; my mother's marrying my father did, though.
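In rough notation (mine, not MacAskill and Mogensen's), the two readings could be put as:

$$\text{(i)}\;\; a \in \mathrm{Chain}(e) \qquad\qquad \text{(ii)}\;\; a \in \mathrm{Chain}(e)\ \text{and}\ P(e \mid a) > P(e \mid \lnot a)$$

where $\mathrm{Chain}(e)$ is the set of events in e's causal history. On this view, the teacher satisfies (i) but not (ii) with respect to my birth, while my parents' marriage satisfies both - which is what the DDA should track.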
One's modus ponens is someone else's modus tollens.
Michael Huemer wrote something very similar in In Praise of Passivity ten years ago, but he bit the deontologist bullet: (unless you are acting inside the space defined by explicit rights and duties) if you are uncertain of the outcomes of your action, you are doing wrong.
Also, if you read the comments and other recent content, you might notice that some people are downplaying the gravity of the SBF case, or remarking that it was an utterly unpredictable Black Swan. And some people are signaling virtue by praising the community, or expressing unconditional allegiance to it rather than to its ideals and principles. I think we both agree this is a wasted opportunity to learn a lesson.
Perhaps you could describe this as a type of evaporative cooling, but of a different kind.
My suggestion right now is some sort of forecasting competition about what is the worst hazard that will come to EA in the next couple of years.
What are the changes that you think should be made that have the strongest case?
- The next red-teaming competition should include a forecasting contest: "What is the worst thing to happen to EA in 2023?" First, two winners would be selected for the best entries ex ante. Then, in January, we see if anyone actually predicted the worst hazard that happened.
- Give this person a prize.
So we're waiting 1.5 months to see if Anthropic was a bad idea? On the other hand: Wytham Abbey was purchased by EV / CEA and made the news. Anthropic is a private venture. If Anthropic shows up in an argument, I can just say I don't have anything to do with that. But if someone mentioned that Wytham Abbey was bought by the guy who wrote the basics of how we evaluate areas and projects... I still don't know what to say.
I was thinking about the EA criticism contest... did anyone submit something like "FTX"? Then give that person a prize! Forecaster of the year! And second place for the best entries on accountability and governance. If not... then maybe it's worth highlighting: all of those "critiques" didn't foresee the main risks that materialized in the community this year. Maybe if we had framed it as a forecasting contest instead... And yet, we have many remarkable forecasters around, and apparently none of them suggested it was dangerous to place so much faith in one person. Or maybe it's just a matter of attention. So I ask: what is the most impactful negative event that will happen to the EA community in 2023?
It's hard to feel this way, and I'm sorry you're going through this. I hope you feel better soon; perhaps it helps to remember that this is not the most productive emotion, and that you may think about this in other ways. The people you know loathe... their opinions can't touch you, unless you allow it; they probably do not hate you specifically - they don't know you, and they are probably confused about things, which is hardly their fault. So there's not much you can gain by blaming them. And sorry if I dare to pretend to preach or give you advice, but I hope you forgive an old fool who can't resist an opportunity to cite Marcus Aurelius. Also, many of them would perhaps agree with you about reciprocity-based ethics, and there's a lot to be said for this approach to moral philosophy - especially if you enlarge the scope of your relations to include counterfactual Rawlsian compacts, or large communities (in the limit, Stoic philosophers talked about the Cosmopolis, which encompasses all sentient beings). But if you want to remain attached to this specific community... well, we are effective altruists, and our projects and goals aim to make the world a better place for all; we don't use this forum or go to events because it's fun for us, but because it serves that end. If you truly want to cooperate with us, to reciprocate whatever happiness we might bring to you, I'm afraid you ultimately have to help benefit others, including those who might now be disturbing you.
Perhaps someone misunderstood your "let's make EA easier to critique" request
Thanks for the post, but I strongly disagree that this is the problem we're going through. Here are some things I think might be relevant that are not accounted for in this diagnosis:
First, there are some strong divides inside the movement (or among people who identify as EAs): longtermists vs. people focused on global poverty, wild animal suffering vs. effective environmentalism, opinions on climate change as a GCR, etc.
Second, I don't think the problem here is just about "optics"... I was imagining that, next time I tell someone (as someone told me 5 years ago, thus getting me interested in effective giving) that maybe they should reconsider donating to their alma mater (because donations to universities are usually not neglected) and instead use an ITN framework to evaluate causes and projects, I might hear a reply like "Oh, you mean the ITN framework consolidated by Owen Cotton-Barratt in 2014... same guy who decided to pay £15m for a manor-not-castle conference centre in 2022." And how can I respond to that? I'm pretty confident that OCB had good reasons, but I cannot provide them; thus the other person may just add "Oh, I trust the dean has reasons to act that way as well." End of discussion.
Third, and probably my main issue here: we are beginning to sound a bit neurotic. I'm kind of tired of reading / arguing about EA-the-community. Some months ago, people were commenting on Scott Alexander's remarks about EA's "criticism fetish" - but I think the problem might be deeper: the EA Forum is flooded with self-reference, meta-debates about how the community is or should be. I long for those days when the forum was full of thriving intellectual discussions on cost-benefit analysis, "nuka zaria", population ethics... you'd post a high-quality, well-researched and informative text, and be super glad if it received 20 karma points... Now it's about the community, identity, self-care, etc. I'm not saying these things are not important - but, well, that's not why we're here, right? It's not that I don't appreciate something very well-written like We must be very clear: fraud in the service of effective altruism is unacceptable, or think it doesn't deserve a lot of attention... but the very fact that we got to a point where quite obvious things like that have to be said - that we have to say that we are against specific types of felonies - and argued for aloud, and that it gets 17x more attention than, e.g., a relevant discussion on altruism and development by David Nash... I don't know how to conclude this sentence, sorry.
And of course I realize my comment is another instance of this... it reminds me of one of those horrible "relationship arguments" where a couple starts arguing about something and then the relationship itself becomes the main topic - and they just can't conclude the discussion in a satisfying way.
A call for papers on longtermism in the journal Moral Philosophy and Politics: https://www.mopp-journal.org/go-to-main-page/calls-for-papers/
Like the others, I loved this post. Let's be friends ;)
I wonder how much of this "decline of friendship" phenomenon is related to culture, region, income, education and generation. My tentative hypothesis is that people bond by doing things together in person; this has become rarer for educated millennial Westerners.
The number of ultra-high-net-worth individuals worldwide fell by 6% this year, according to Wealth-X - after steady increases in the previous years. Thus, I'm afraid the lack of funding from SBF may be the beginning of a trend - at least for community building and longtermism.