We hereby announce a new meta-EA institution - "Naming What We Can".
We believe in a world where every EA organization and any project has a beautifully crafted name. We believe in a world where great minds are free from the shackles of the agonizing need to name their own projects.
To name and rename every EA organization, project, thing, or person. To alleviate any suffering caused by name-selection decision paralysis.
Using our superior humor and language articulation prowess, we will come up with names for stuff.
We are a bunch of revolutionaries who believe in the power of correct naming. We translated over a quintillion distinct words from English to Hebrew. Some of us have read all of Unsong. One of us even read the whole bible. We spent countless fortnights debating the ins and outs of our own org's title - we Name What We Can.
What Do We Do?
We're here for the service of the EA community. Whatever you need to rename - we can name. Although we also rename whatever we can. Even if you didn't ask.
As a demonstration, here are some examples where NWWC's suggested name is much better than the one currently used.
80,000 Hours => 64,620 Hours. Better fits the data and is more equal toward women, two important EA virtues.
Charity Entrepreneurship => Charity Initiatives. (We don't know anyone who can spell entrepreneurship on their first try. Alternatively, own all of the variations: Charity Enterpeneurship, Charity Entreprenreurshrip, Charity Entrepenurship, Charity Entepenoorship, …)
Global Priorities Institute => Glomar Priorities Institute. We suggest including the dimension of time, making our globe a glome.
OpenPhil => Doing Right Philanthropy. Going by Dr. Phil would give a lot more clicks.
EA Israel => זולתנים יעילים בארץ הקודש ("Effective Altruists in the Holy Land")
ProbablyGood => CrediblyGood. Because in EA we usually use credence rather than probability.
EA Hotel => Centre for Enabling EA Learning & Research.
Giving What We Can => Guilting Whoever We Can. Because people give more when they are feeling guilty about being rich.
Cause Prioritization => Toby Ordering.
Max Dalton => Max Delta. This represents the endless EA effort to maximize our ever-marginal utility.
Will MacAskill => will McAskill. Evidently the more common usage.
Peter Singer & Steven Pinker should be the same person, to avoid confusion.
OpenAI => ProprietaryAI. Followed by ClosedAI, UnalignedAI, MisalignedAI, and MalignantAI.
FHI => Bostrom's Squad.
GiveWell => Don'tGivePlayPumps. We feel that the message could be stronger this way.
Doing Good Better => Doing Right Right.
Electronic Arts, also known as EA, should change its name to Effective Altruism. They should also change all of their activities to Effective Altruism activities.
Overall, we think the impact of the project will be net negative in expectation (see our Guesstimate model). That is because we think the impact is likely to be somewhat positive, but there is a really small tail risk that we will cause the termination of the EA movement. However, as we are risk-averse, we can mostly ignore high tails in our impact assessment, so there is no need to worry.
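The expected-value reasoning above can be sketched in a few lines. All numbers here are hypothetical illustrations, not taken from the actual Guesstimate model: the point is only that a likely small positive impact plus a tiny tail risk of a catastrophic outcome can be net negative in expectation even though the typical outcome is positive.

```python
# Hypothetical numbers for illustration only.
p_tail = 1e-4             # assumed chance of terminating the EA movement
typical_impact = 0.02     # assumed QALYs gained in the typical (good) case
tail_impact = -1_000_000  # assumed QALYs lost if the movement ends

# Expected impact is a probability-weighted average of the two outcomes.
expected_impact = (1 - p_tail) * typical_impact + p_tail * tail_impact

print(f"typical outcome: {typical_impact} QALYs")
print(f"expected impact: {expected_impact:.2f} QALYs")
```

The typical outcome is positive, but the expectation is dominated by the tail term, which is exactly why risk-averse evaluators who "ignore high tails" reach a different verdict than the expected-value calculation.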
Call to action
As a first step, we offer our services freely here on this very post! This is done to test the fit of the EA community to us. All you need to do is to comment on this post and ask us to name or rename whatever you desire.
Additionally, we hold a public recruitment process here on this very post! If you want to apply to NWWC as a member, comment on this post with a name suggestion of your choosing! Due to our current lack of diversity in our team, we particularly encourage women, people of color, LGBTQ+, people from the social sciences, people who are not mathematicians, people over 40, conservatives, conservationists, non-consequentialists, people who have never heard of EA, and Rob Wiblin to apply.
If this experiment works well, we have plans for expanding our efforts, which we don't currently publish due to potential infohazards that have yet to pass all stages of approval by the Infohazard Church.
Or, you could change your name to Wise Julia. This will also allow you to signify your intellectual superiority.
Tail risk: if EA ends up voting for a top leader, and you get elected, this could sound pretty culty. If that risk seems significant to you, I would advise avoiding the obvious choice here - Julia the Wise - which is even worse.
I think the phrases "Research Institute", and particularly "...Existential Risk Institute", are a best practice and should be used much more frequently.
Centre for Effective Altruism -> Effective Altruism Research Institute (EARI)
Open Philanthropy -> Funding Effective Research Institute (FERI)
GiveWell -> Short-termist Effective Funding Research Institute (SEFRI)
80,000 Hours -> Careers that are Effective Research Institute (CERI)
Charity Entrepreneurship -> Charity Entrepreneurship Research Institute (CERI 2)
Rethink Priorities -> General Effective Research Institute (GERI)
Center for Human-Compatible Artificial Intelligence -> Berkeley University AI Research Institute (BUARI)
CSER -> Cambridge Existential Risk Institute (CERI 3)
LessWrong -> Blogging for Existential Risk Institute (BERI 2)
Alignment Forum -> Blogging for AI Risk Institute (BARI)
SSC -> Scott Alexander's Research Institute (SARI)
Thanks for your valuable critique! I've updated our model accordingly.
I must say I should have been more skeptical when my calculation resulted in a post that's worth 0.4 QALYs. Now, after also raising our estimate of total Karma (wow!), we estimate our impact at 0.018 QALYs, which makes more sense.
Perfect! In the end the impact will of course be orders of magnitude higher, as a slightly better name of any particular organization will affect tens if not hundreds of thousands of people in the long run. And there may even be a tail chance of better names increasing the community's stability and thus preventing collapse scenarios.
I think overall you really undersold your project with that Guesstimate model focusing only on this post, as if that was all there is to it.
I suggest Measuring Everything with units Of Wellbeing, or, in short, Meow. This might support the new field of increasing global welfare through kitten distribution, as has been proposed before.
While I'm writing, I'll mention I seriously proposed calling HLI the Bentham Institute for Global Happiness (BIGHAP), but it was put to an internal vote and I, tragically, lost. I am fairly confident not calling it BIGHAP will be my biggest deathbed regret.
I think that the forum itself is nothing without the people and the community within. We, the users, are the ones who upvote or downvote posts. From this emerges a collective intelligence that deems what is worthy for the EA community and what should be strongly downvoted to oblivion, which in turn shapes what content gets written.
I propose calling this collective intelligence the Karma Police.