EA Handbook 3.0: What content should I include?
post by Aaron Gertler (aarongertler)
Hello, EA Forum!
I’m working on a new version of the Effective Altruism Handbook. The project is still in its initial stages, and this is the first of what I hope will be several updates as the book passes through different stages in its development.
While the underlying structure hasn’t yet been finalized, it seems likely that the new handbook will be quite different from the previous version. Notably, rather than trying to summarize EA through a series of full articles, it will (probably) contain excerpts from a much wider set of articles, taking a mix-and-match approach to summarizing key ideas.
I hope that this will make the handbook more flexible and easier to update as new ideas — and better takes on old ideas — emerge. (Currently, I plan to first publish this content as a series of Forum articles, and later compile them into a single document; I expect discussion and feedback from Forum users to substantially improve the material.)
This also means that I’m conducting a massive search for EA content, so that I can find the best explanations, quotes, and so on for every idea in the book.
My question for you: What should be in the new handbook?
What the handbook is for
My primary goal for the handbook is that it will be a good introduction to a variety of EA concepts. Someone who has no background knowledge whatsoever could, in theory, start at the beginning and read straight through, while someone who has some background could skip to whatever seems new.
(Realistically, there will be a lot of content, and most people will read it in sections, follow links away from the main text to get more detail on what most interests them, etc.)
Sample use cases:
- Someone wants to introduce EA to a friend who asks them about their strange charity hobby. They send them the first section of the handbook, which details the most basic/fundamental concepts and (perhaps more importantly) why “effective altruism” exists as a concept at all.
- An employee from a global development NGO is going to EA Global. They don’t know much about other cause areas, but they want to understand the conversations that will be happening around them. They prepare for the weekend by reading the handbook.
- An excitable undergraduate finds their university’s EA group at an activities fair. They’re blown away by the initial presentation, and say: “I want to know all about this — what should I do?” The group’s president recommends starting with the handbook.
I don't at all expect that I’ll be able to produce the most compelling available introduction to every major EA concept at once — or even to any single concept.
However, I hope that blending together a lot of good content from various sources, and adding a few of my own explanations to bridge gaps, will allow me to give readers a more cohesive experience than they’d have exploring EA’s vast archives on their own. I’ll also be responsible for making sure the handbook’s content stays up-to-date in a way that most archival material does not.
What to send me
In priority order, I’m looking for answers to the following sub-questions:
- What do you think just about everyone getting into EA ought to read?
- What should just about everyone interested in (EA subtopic) be reading?
- What obscure, out-of-the-way content maybe shouldn’t be read by everyone, but should still be cited in the handbook somewhere because it is very useful to a specific kind of person new to EA?
This should be in the form of links to existing content. There’s a faint possibility that I might commission original content if there’s a tricky gap I need to fill, but I think that existing content plus a bit of my own scaffolding will account for the vast majority of the book.
Some specific types of content that would be useful:
- Introducing effective altruism:
- What are the core ideas?
- What makes EA different from other ideas/fields/intellectual movements?
- Why might you want to follow the core ideas of EA? Why is it important to help others (as much as possible)?
- What are some good things that have happened because of EA?
- What are good things people did outside of EA that exemplify EA ideals?
- Cause selection:
- How do we figure out which problems to work on?
- How do we figure out which interventions to work on within a problem?
- Helping people (near term):
- If you want to help people in the present day, what should you be doing, and why? (This isn’t just global health, but also causes like “prison reform”)
- Helping people (long term):
- If you want to help people over the long term, what should you be doing, and why? (This isn’t just AI, but any cause that seems to fit into “longtermism”)
- Helping animals:
- If you want to help animals (in the near term or the long term), what should you be doing, and why?
- Other causes:
- What are some other ways to help that don’t fit into the above categories?
- Living EA values:
- What are some ways that people act on the core ideas of EA?
- What are some ways that people work together in communities to act on the core ideas of EA?
- Answering questions:
- What are some common criticisms of EA? How valid/important are they?
- What are some common misconceptions about EA? What’s the truth?
You can leave your answers as a comment, or PM me if you’d prefer not to share your suggestions/reasoning in public.
Notes on making suggestions
Please don’t worry about making “obvious” suggestions. Even if you’re sure I know about an article, it still helps me to know (a) that you like it, and (b) which parts of it you found most valuable (again, I expect that I’ll mostly be using excerpts).
Please don’t worry about making “weak” suggestions. Even if I don’t like something very much after I read it, I’ll still appreciate the suggestion! And even if an article isn’t great as a whole, it may contain a single perfect gem-like sentence that more than justifies the time I spent to read it.
I value both substance and style. The handbook’s tone will be fairly professional, but I also want to show off the variety of styles and outlooks that characterize effective altruism. If you know an article that’s really fun to read, even if it isn’t the best take on any particular concept, I’d like to know about it.
I’m open to a wide variety of sources. In theory, anything that can be cited is fair game, including videos, books, and even social media posts — because I’m using excerpts, the source in question doesn’t necessarily even have to be EA-centric. However, the more unusual a source is, the higher the bar will be for its inclusion; if it’s a Tweet, it had better be a really dang good Tweet.
There isn’t yet a firm timeline for when the book will be released. If you have other questions about the project, you are welcome to ask, though I may not be able to give very specific answers yet.
Thanks in advance for your suggestions!
Comments (sorted by top scores)
comment by DavidNash · 2019-11-16
There was a Facebook post on top 10 concepts for people to know in EA.
Here are some of the suggestions.
- Cause neutrality
- Scale, Neglectedness and Solvability framework
- Maximising welfare
- Moral patienthood
- Moral uncertainty
- Moral trade
- Hits-based giving
- Worldview diversification
- Comparative advantage
- Epistemic principles
- Crucial Considerations
I think some of the points in this 80,000 Hours article apply to EA in general
- We’ve been wrong before, and we’ll be wrong again
- Many of the questions we tackle are a matter of balance, and different people will benefit from considering opposing messages
- Personal fit matters, so focus more on strategies than simple answers
- There are disagreements within the community
- Treat doing good as just one of many important goals in life
Also this one - Misconceptions of 80,000 Hours research (although maybe they won't be misconceptions if it's the first thing they read)
- Roles outside explicitly EA organisations are most people’s best career options.
- Sometimes these roles aren’t as visible to the community, including to 80,000 Hours, but that doesn’t mean they aren’t highly impactful.
- Many especially impactful roles require specific skills. If none of these roles are currently a great fit for you, but one could be if you developed the right skills, it can be worth it to take substantial time to do so.
- You should use 80,000 Hours to figure out what your best career is and how to get there, not what “the” best careers are.
I'd also add the different ways of having impact and how they generally compare, as people often ask why EA doesn't do much in one of the following: career, donations, volunteering, influence/voting, and personal consumption.
Also, here are some articles that I've shared quite regularly with people newer to EA:
Why choose a cause and how to strategically choose a cause
6 tips on choosing an effective charity
Where I am donating this year
Effective altruism as question
↑ comment by Prof.Weird · 2020-11-17
I truly want effective altruism to flourish, but I am concerned that the EA handbooks, the backbone of the EA community, are too theoretical.
Donating and volunteering aside, there are driving principles of EA that are not summarised as daily practices, and so the EA community merely theorises about them. Practical principles, such as decision-making ratios between income, expenses, and donations, or the exact tenets of thinking globally and acting locally (such as being aware of which actions will or won't make a positive or negative change), are actionable practices distilled from EA theory, and they may go a long way towards helping individuals live altruism effectively and not just think about it.
The Giving Pledge, a daily practice distilled from an EA principle, is among the most galvanising and widespread ideas even beyond the EA community. In my undergraduate study of both marketing and psychology, I am learning the power of distilling theory into simplified, actionable steps. For this reason, it is no surprise that the Giving Pledge is talked about in news cycles, suggested by celebrities, and understood by the average person. On the other hand, I find it personally hard to keep up with the evolving, contrasting theory of EA's global catastrophic risks, highest priorities, socialism-vs-capitalism debates, and general philosophical questions. Reading about how nuclear weapons are an issue we should be aware of for half a dozen reasons doesn't give me any idea of what to do next. By contrast, I've sent messages to my local MPs in Australia about investing in the Adani coal mines and genuinely made a difference; I didn't need to read an article to understand why, and the practice seemed obvious, simple, and easy to do.
comment by AronM · 2019-10-11
Two key pieces of information helped me to have an impact (after I had read about EA's core ideas and values).
1. It's not only AI researchers who can do impactful work; engineers and people from other fields can too. See: http://effectivethesis.com/
2. Most EA effort focuses on preventing x-risks/GCRs, which is correct because we can't afford to have them occur even once. But work on surviving such events and lessening the far-future impact of x-risks is neglected. ALLFED (Alliance to Feed the Earth in Disasters) is working on feeding everyone in a catastrophe and has a lot of low-hanging fruit to work on across multiple disciplines. [80k podcast episode] [ALLFED papers on x-risks and solutions]
1. I didn't study anything AI-related and was unsure how I could contribute in a meaningful and impactful way. I think this is a situation many new EAs face: some of them are still studying when they hear about EA, and they probably didn't choose their field of study with EA in mind. Luckily, I found out about http://effectivethesis.com/, where one can find ways to contribute. The suggested topics cover fields ranging from Agricultural Science and Economics through Engineering (my background) to Sociology.
2. Regarding x-risks/GCRs:
Once one realizes the value of the long-term future, one is eager to work on preventing x-risks/GCRs. Most EAs work on reducing the probability of such events happening in the first place, e.g. reducing the number of nuclear weapons. I think this is the correct way of approaching these problems, since for most of these scenarios humanity can't afford to have them occur even once. But contributing to AI research, or to lessening the probability of a nuclear war, requires very specific skills that might not be the best fit for everyone. This can discourage new EAs.
Unfortunately, for some of these scenarios (e.g. a supervolcano or an asteroid impact), the probability will never reach 0%, so we need to prepare. Surviving these catastrophes is often neglected. ALLFED (Alliance to Feed the Earth in Disasters) is researching how to feed everyone no matter what, and through that, lessening the far-future impact of otherwise existential risks. Because this area is neglected, there is a lot of low-hanging fruit for people to work on; I, for example, started working for ALLFED right after my bachelor's degree. People interested in this kind of work can find information here: [80k podcast episode] [ALLFED papers on x-risks and solutions]
comment by Khorton · 2019-10-01
I can't say I've read previous handbooks, so I wouldn't know if this has been done previously, but I'd like to see several open questions in the EA community addressed, with the main arguments included.
Of course, there are open questions around how much animals and future people matter, but I'd also like to see other questions:
-Should EA be 'small and weird' or should it seek to grow and become 'mainstream'?
-In a related question, should EA focus on those who have the potential to influence major amounts of power or money (people who are disproportionately white, male, from a privileged background, and educated at a competitive university in a Western country)? Or should EA be inclusive and diverse, seeking to elevate the voices of people who could do the most good but would traditionally be ignored by society?
-To what extent should/does EA support systemic change?
-To what extent should we be concerned about value drift? Some argue we shouldn't delay donating by even a couple of years, while others argue that our future values will likely be even better than our current ones.
↑ comment by DavidNash · 2019-10-01
I don't think addressing these questions in a handbook that's meant to introduce EA would be that useful, as most of them require much more in-depth reading than a few paragraphs would allow.
It may make more sense to have an FAQ for these typical questions, or to say that lots of areas within EA are still being discussed and then list the questions.
↑ comment by vaidehi_agarwalla · 2019-10-01
A more cause-specific one: for animal welfare, addressing the difference between people who are vegetarian/vegan for environmental reasons and those motivated by animal suffering.
comment by Jemma · 2019-09-30
This may be a touch too philosophical, but I enjoyed Derek Parfit's essay 'Personal Identity', as I think it provides a brief insight into one of the central concerns of this major EA thinker.
comment by vaidehi_agarwalla · 2019-09-30
What do you think just about everyone getting into EA ought to read?
- Key approaches by main EA orgs: posts by GiveWell, OpenPhil (i.e. hits-based giving, worldview diversification) and 80K
- Writeups by other orgs that may differ in important places, or are just quite prominent
- Fairly detailed posts on longtermism/x-risk/AI safety - the more formal the better (I'm not the best person to recommend these)
comment by mschons · 2019-10-13
Are there any plans to translate the handbook into, say, the 10 most popular EA languages (thinking of Spanish, German, French, Russian, ...)? If not, this should be a major part of Handbook v3.
comment by Jon_Behar · 2019-09-30
I would love to see Neil Buddy Shah's talk Beyond Top Charities get included.
Stepping back, one of the themes of that talk is that EA's homogeneous demographics make it very susceptible to important biases. I hope the new handbook has content by a significantly more diverse set of authors (in terms of gender, race, age, geography, etc.) than the previous edition.
↑ comment by Aaron Gertler (aarongertler) · 2019-10-01
Thank you for the suggestion! Because so many more authors will be cited, I expect that the sourcing will be more diverse (at least, this is true of the list I've compiled so far). If there's any other content you think has been overlooked in past introductory materials, I'd be grateful to hear about it.