I’m working on a new version of the Effective Altruism Handbook. The project is still in its initial stages, and this is the first of what I hope will be several updates as the book passes through different stages in its development.
While the underlying structure hasn’t yet been finalized, it seems likely that the new handbook will be quite different from the previous version. Notably, rather than trying to summarize EA through a series of full articles, it will (probably) contain excerpts from a much wider set of articles, taking a mix-and-match approach to summarizing key ideas.
I hope that this will make the handbook more flexible and easier to update as new ideas — and better takes on old ideas — emerge. (Currently, I plan to first publish this content as a series of Forum articles, and later compile them into a single document; I expect discussion and feedback from Forum users to substantially improve the material.)
This also means that I’m conducting a massive search for EA content, so that I can find the best explanations, quotes, and so on for every idea in the book.
My question for you: What should be in the new handbook?
What the handbook is for
My primary goal for the handbook is that it will be a good introduction to a variety of EA concepts. Someone who has no background knowledge whatsoever could, in theory, start at the beginning and read straight through, while someone who has some background could skip to whatever seems new.
(Realistically, there will be a lot of content, and most people will read it in sections, follow links away from the main text to get more detail on what most interests them, etc.)
Sample use cases:
Someone wants to introduce EA to a friend who asks them about their strange charity hobby. They send them the first section of the handbook, which details the most basic/fundamental concepts and (perhaps more importantly) why “effective altruism” exists as a concept at all.
An employee from a global development NGO is going to EA Global. They don’t know much about other cause areas, but they want to understand the conversations that will be happening around them. They prepare for the weekend by reading the handbook.
An excitable undergraduate finds their university’s EA group at an activities fair. They’re blown away by the initial presentation, and say: “I want to know all about this — what should I do?” The group’s president recommends starting with the handbook.
I don't at all expect that I’ll be able to produce the most compelling available introduction to every major EA concept at once — or even to any single concept.
However, I hope that blending together a lot of good content from various sources, and adding a few of my own explanations to bridge gaps, will allow me to give readers a more cohesive experience than they’d have exploring EA’s vast archives on their own. I’ll also be responsible for making sure the handbook’s content stays up-to-date in a way that most archival material does not.
What to send me
In priority order, I’m looking for answers to the following sub-questions:
What do you think just about everyone getting into EA ought to read?
What should just about everyone interested in (EA subtopic) be reading?
What obscure, out-of-the-way content maybe shouldn’t be read by everyone, but should still be cited in the handbook somewhere because it is very useful to a specific kind of person new to EA?
This should be in the form of links to existing content. There’s a faint possibility that I might commission original content if there’s a tricky gap I need to fill, but I think that existing content plus a bit of my own scaffolding will account for the vast majority of the book.
Some specific types of content that would be useful:
Introducing effective altruism:
What are the core ideas?
What makes EA different from other ideas/fields/intellectual movements?
Why might you want to follow the core ideas of EA? Why is it important to help others (as much as possible)?
What are some good things that have happened because of EA?
What are good things people did outside of EA that exemplify EA ideals?
How do we figure out which problems to work on?
How do we figure out which interventions to work on within a problem?
Helping people (near term):
If you want to help people in the present day, what should you be doing, and why? (This isn’t just global health, but also causes like “prison reform”)
Helping people (long term):
If you want to help people over the long term, what should you be doing, and why? (This isn’t just AI, but any cause that seems to fit into “longtermism”)
Helping animals:
If you want to help animals (in the near term or the long term), what should you be doing, and why?
What are some other ways to help that don’t fit into the above categories?
Living EA values:
What are some ways that people act on the core ideas of EA?
What are some ways that people work together in communities to act on the core ideas of EA?
What are some common criticisms of EA? How valid/important are they?
What are some common misconceptions about EA? What’s the truth?
You can leave your answers as a comment, or PM me if you’d prefer not to share your suggestions/reasoning in public.
Notes on making suggestions
Please don’t worry about making “obvious” suggestions. Even if you’re sure I know about an article, it still helps me to know (a) that you like it, and (b) which parts of it you found most valuable (again, I expect that I’ll mostly be using excerpts).
Please don’t worry about making “weak” suggestions. Even if I don’t like something very much after I read it, I’ll still appreciate the suggestion! And even if an article isn’t great as a whole, it may contain a single perfect gem-like sentence that more than justifies the time I spent to read it.
Pictured: Me guarding my collection of perfect gem-like sentences as they slowly merge into a single handbook.
I value both substance and style. The handbook’s tone will be fairly professional, but I also want to show off the variety of styles and outlooks that characterize effective altruism. If you know an article that’s really fun to read, even if it isn’t the best take on any particular concept, I’d like to know about it.
I’m open to a wide variety of sources. In theory, anything that can be cited is fair game, including videos, books, and even social media posts — because I’m using excerpts, the source in question doesn’t necessarily even have to be EA-centric. However, the more unusual a source is, the higher the bar will be for its inclusion; if it’s a Tweet, it had better be a really dang good Tweet.
There isn’t yet a firm timeline for when the book will be released. If you have other questions about the project, you are welcome to ask, though I may not be able to give very specific answers yet.
Roles outside explicitly EA organisations are most people’s best career options.
Sometimes these roles aren’t as visible to the community, including to 80,000 Hours, but that doesn’t mean they aren’t highly impactful.
Many especially impactful roles require specific skills. If none of these roles are currently a great fit for you, but one could be if you developed the right skills, it can be worth it to take substantial time to do so.
You should use 80,000 Hours to figure out what your best career is and how to get there, not what “the” best careers are.
I'd also add the different ways of having impact and how they generally compare, since people often ask why EA doesn't do much in one of the following: career, donations, volunteering, influence/voting, and personal consumption.
There are also some articles that I've shared quite regularly with people newer to EA.
I can't say I've read previous handbooks, so I don't know whether this has been done before, but I'd like to see several open questions in the EA community addressed, with the main arguments on each side included.
Of course, there are open questions around how much animals and future people matter, but I'd also like to see other questions:
-Should EA be 'small and weird' or should it seek to grow and become 'mainstream'?
-Relatedly, should EA focus on those who have the potential to influence major amounts of power or money (people who are disproportionately white, male, from privileged backgrounds, and educated at competitive universities in Western countries)? Or should EA be inclusive and diverse, seeking to elevate the voices of people who could do the most good but would traditionally be ignored by society?
-To what extent should/does EA support systemic change?
-To what extent should we be concerned about value drift? Some argue we shouldn't delay donating even a couple years, while others argue that our future values will likely be even better than our current ones.
1. I didn't study anything AI-related and was unsure how I could contribute in a meaningful and impactful way. I think this is a situation many new EAs face, as some of them are still studying when they hear about EA, and they probably didn't choose their field of study with EA in mind. Luckily, I found out about http://effectivethesis.com/ . There, one can find ways to contribute; the suggested topics cover various fields, from Agricultural Science and Economics through Engineering (my background) to Sociology.
2. Regarding x-risks / GCRs
Once one realizes the value of the long-term future, one is eager to work on preventing x-risks/GCRs. Most EA work aims to reduce the probability of such events happening in the first place (e.g. reducing the number of nuclear weapons). I think this is the correct way of approaching these problems, since for most of these scenarios, humanity can't afford to have them occur even once. But contributing to AI research, or to lessening the probability of a nuclear war, requires very specific skills that might not be the best fit for everyone. This can discourage new EAs.
Unfortunately, for some of these scenarios (e.g. supervolcano eruptions, asteroid impacts), the probability will never reach 0%, and therefore we need to prepare. Surviving these catastrophes is often neglected. ALLFED (Alliance to Feed the Earth in Disasters) is researching how to feed everyone no matter what, and through that, to lessen the far-future impact of otherwise existential risks. Because this area is neglected, there is a lot of low-hanging fruit for people to work on; I, for example, started working for ALLFED right after my bachelor's degree. People interested in this kind of work can find information here: [80k podcast episode] [ALLFED papers on x-risks and solutions]
Are there any plans to translate the handbook into, let's say, the 10 most popular EA languages (thinking of Spanish, German, French, Russian, ...)? If not, this should be a major part of Handbook v3.
Stepping back, one of the themes of that talk is that EA’s homogeneous demographics make it very susceptible to important biases. I hope the new handbook has content by a significantly more diverse set of authors (in terms of gender, race, age, geography, etc.) than the previous edition.
Thank you for the suggestion! Because so many more authors will be cited, I expect that the sourcing will be more diverse (at least, this is true of the list I've compiled so far). If there's any other content you think has been overlooked in past introductory materials, I'd be grateful to hear about it.