EA Handbook 3.0: What content should I include?

post by aarongertler · 2019-09-30T09:17:55.464Z · score: 42 (22 votes) · 17 comments

Contents

  What the handbook is for
  What to send me
    Some specific types of content that would be useful:
  Notes on making suggestions
17 comments

Hello, EA Forum!

I’m working on a new version of the Effective Altruism Handbook. The project is still in its initial stages, and this is the first of what I hope will be several updates as the book passes through different stages in its development.

While the underlying structure hasn’t yet been finalized, it seems likely that the new handbook will be quite different from the previous version. Notably, rather than trying to summarize EA through a series of full articles, it will (probably) contain excerpts from a much wider set of articles, taking a mix-and-match approach to summarizing key ideas.

I hope that this will make the handbook more flexible and easier to update as new ideas — and better takes on old ideas — emerge. (Currently, I plan to first publish this content as a series of Forum articles, and later compile them into a single document; I expect discussion and feedback from Forum users to substantially improve the material.)

This also means that I’m conducting a massive search for EA content, so that I can find the best explanations, quotes, and so on for every idea in the book.

My question for you: What should be in the new handbook?

What the handbook is for

My primary goal is for the handbook to be a good introduction to a variety of EA concepts. Someone with no background knowledge whatsoever could, in theory, start at the beginning and read straight through, while someone with some background could skip to whatever seems new.

(Realistically, there will be a lot of content, and most people will read it in sections, follow links away from the main text to get more detail on what most interests them, etc.)


I don't expect to produce the most compelling available introduction to every major EA concept at once, or even to any single concept.

However, I hope that blending together a lot of good content from various sources, and adding a few of my own explanations to bridge gaps, will allow me to give readers a more cohesive experience than they’d have exploring EA’s vast archives on their own. I’ll also be responsible for making sure the handbook’s content stays up-to-date in a way that most archival material does not.

What to send me

In priority order, I’m looking for answers to the following sub-questions:

1. What do you think just about everyone getting into EA ought to read?
2. What should just about everyone interested in (EA subtopic) be reading?

Answers should be in the form of links to existing content. There’s a faint possibility that I might commission original content if there’s a tricky gap I need to fill, but I think that existing content, plus a bit of my own scaffolding, will account for the vast majority of the book.

Some specific types of content that would be useful:

You can leave your answers as a comment, or PM me if you’d prefer not to share your suggestions/reasoning in public.

Notes on making suggestions

Please don’t worry about making “obvious” suggestions. Even if you’re sure I know about an article, it still helps me to know (a) that you like it, and (b) which parts of it you found most valuable (again, I expect that I’ll mostly be using excerpts).

Please don’t worry about making “weak” suggestions. Even if I don’t like something very much after I read it, I’ll still appreciate the suggestion! And even if an article isn’t great as a whole, it may contain a single perfect gem-like sentence that more than justifies the time I spend reading it.

[Image] Pictured: Me guarding my collection of perfect gem-like sentences as they slowly merge into a single handbook.


I value both substance and style. The handbook’s tone will be fairly professional, but I also want to show off the variety of styles and outlooks that characterize effective altruism. If you know an article that’s really fun to read, even if it isn’t the best take on any particular concept, I’d like to know about it.

I’m open to a wide variety of sources. In theory, anything that can be cited is fair game, including videos, books, and even social media posts — because I’m using excerpts, the source in question doesn’t necessarily even have to be EA-centric. However, the more unusual a source is, the higher the bar will be for its inclusion; if it’s a Tweet, it had better be a really dang good Tweet.


There isn’t yet a firm timeline for when the book will be released. If you have other questions about the project, you are welcome to ask, though I may not be able to give very specific answers yet.

Thanks in advance for your suggestions!

17 comments

Comments sorted by top scores.

comment by pmelchor · 2019-09-30T11:22:40.469Z · score: 15 (5 votes)

For the introduction, I liked and shared Will MacAskill's text for the Norton Introduction to Ethics:

https://drive.google.com/file/d/1xs22x9UIuvym--MfAUtQsZ-GVqTqXeEs/view

comment by richard_ngo · 2019-10-01T11:53:25.435Z · score: 12 (3 votes)

Here's my (in-progress) collation of important EA resources, organised by topic. Contributions welcome :)

comment by alexlintz · 2019-10-01T18:09:43.948Z · score: 9 (6 votes)

I always recommend Nate Soares' post 'On Caring' to motivate the need for rational analysis of problems when trying to do good. http://mindingourway.com/on-caring/


comment by pmelchor · 2019-09-30T11:31:59.442Z · score: 8 (5 votes)

For cause selection (and the INT model), I find this 80K article more accessible and explanatory than most:

https://80000hours.org/articles/problem-framework/

comment by AronM · 2019-10-11T13:51:34.429Z · score: 7 (2 votes)

Two key pieces of information helped me to have an impact (after I had read about EA and its core ideas and values).

Short version:

1. Not only AI researchers can do impactful work; engineers and people from other fields can too. See: http://effectivethesis.com/

2. Most of EA's focus is on preventing x-risks/GCRs, which is correct because we can't afford to have them occur even once. But work on surviving x-risks and lessening their far-future impact is neglected. ALLFED (the Alliance to Feed the Earth in Disasters) is working on feeding everyone in a catastrophe, and offers a lot of low-hanging fruit for people from multiple disciplines to work on. [80k podcast episode] [ALLFED papers on x-risks and solutions]

Longer version:

1. I didn't study anything AI-related and was unsure how I could contribute in a meaningful and impactful way. I think this is a situation many new EAs face: some of them are still studying when they hear about EA, and they probably didn't choose their field of study with EA in mind. Luckily, I found out about http://effectivethesis.com/, where one can find ways to contribute. The suggested topics cover various fields, from agricultural science, economics, and engineering (my background) to sociology.

2. Regarding x-risks/GCRs

Once one realizes the value of the long-term future, one is eager to work on preventing x-risks/GCRs. Most EA work goes into reducing the probability of such events happening in the first place, e.g. reducing the number of nuclear weapons. I think this is the correct way of approaching these problems, since for most of these scenarios humanity can't afford them to occur even once. But contributing to AI research or lessening the probability of a nuclear war requires very specific skills that might not be the best fit for everyone. This can discourage new EAs.

Unfortunately, for some of these scenarios (e.g. a supervolcano eruption or an asteroid impact), the probability will never reach 0%, so we also need to prepare. Surviving these catastrophes is often neglected. ALLFED (the Alliance to Feed the Earth in Disasters) is researching how to feed everyone no matter what, and through that, lessen the far-future impact of otherwise existential risks. Because this area is neglected, there is a lot of low-hanging fruit for people to work on. I, for example, started to work for ALLFED right after my bachelor's degree. People interested in this kind of work can find information here: [80k podcast episode] [ALLFED papers on x-risks and solutions]

comment by Tobias_Baumann · 2019-10-01T09:02:26.169Z · score: 7 (8 votes)

I'd like to suggest including an article on reducing s-risks (e.g. https://foundational-research.org/reducing-risks-of-astronomical-suffering-a-neglected-priority/ or http://s-risks.org/intro/) as another possible perspective on longtermism, in addition to AI alignment and x-risk reduction.

comment by vaidehi_agarwalla · 2019-09-30T18:50:13.269Z · score: 6 (4 votes)

What do you think just about everyone getting into EA ought to read?

  • Key Approaches by main EA orgs
    • Posts by GiveWell, OpenPhil (e.g. hits-based giving, worldview diversification), and 80K.
    • Writeups by other orgs that may differ in important places, or are just quite prominent
  • Fairly detailed posts on longtermism/x-risk/AI safety - the more formal the better (I'm not the best person to recommend these)

comment by Jemma · 2019-09-30T10:05:17.767Z · score: 6 (4 votes)

This may be a touch too philosophical, but I enjoyed Derek Parfit's essay 'Personal Identity', as I think it provides a brief insight into one of the central concerns of this major EA thinker.

comment by vaidehi_agarwalla · 2019-09-30T18:52:47.640Z · score: 4 (3 votes)

What should just about everyone interested in (EA subtopic) be reading?

  • This is a good starting place for intro readings on several topics: https://resources.eahub.org/learn/reading-lists/
  • For some of the cause areas, it might be good to discuss relative levels of openness of discussion and why that's the case (e.g. for AI safety or security-related topics)

comment by mschons · 2019-10-13T16:36:11.239Z · score: 3 (3 votes)

Is there a plan to make an audiobook version of the handbook? I think many people would find that useful.

comment by Khorton · 2019-10-01T10:22:11.147Z · score: 3 (5 votes)

I can't say I've read previous handbooks, so I wouldn't know if this has been done previously, but I'd like to see several open questions in the EA community addressed, with the main arguments included.

Of course, there are open questions around how much animals and future people matter, but I'd also like to see other questions:

-Should EA be 'small and weird' or should it seek to grow and become 'mainstream'?

-In a related question, should EA focus on those who have the potential to influence major amounts of power or money (people who are disproportionately white, male, from a privileged background, and educated at a competitive university in a Western country)? Or should EA be inclusive and diverse, seeking to elevate the voices of people who could do the most good but would traditionally be ignored by society?

-To what extent should/does EA support systemic change?

-To what extent should we be concerned about value drift? Some argue we shouldn't delay donating even a couple years, while others argue that our future values will likely be even better than our current ones.

comment by DavidNash · 2019-10-01T15:30:03.830Z · score: 11 (7 votes)

I don't think addressing these questions in a handbook that's meant to introduce EA would be that useful, as most of them require much more in-depth reading than a few paragraphs would allow.

It may make more sense to have an FAQ for these typical questions or to say that lots of areas within EA are still being discussed, and then list the questions.

comment by Khorton · 2019-10-01T17:05:27.159Z · score: 3 (2 votes)

I think you're right - these questions aren't right for an introductory handbook.

comment by vaidehi_agarwalla · 2019-10-01T14:10:16.662Z · score: 1 (1 votes)

A more cause-specific suggestion: for animal welfare, addressing the issue of people who are vegetarian/vegan for environmental reasons versus those motivated by animal suffering.

comment by Jon_Behar · 2019-09-30T23:44:18.999Z · score: 3 (8 votes)

I would love to see Neil Buddy Shah's talk 'Beyond Top Charities' included.

Stepping back, one of the themes of that talk is that EA’s homogeneous demographics make it very susceptible to important biases. I hope the new handbook has content by a significantly more diverse set of authors (in terms of gender, race, age, geography, etc.) than the previous edition.

comment by aarongertler · 2019-10-01T09:23:10.613Z · score: 2 (1 votes)

Thank you for the suggestion! Because so many more authors will be cited, I expect that the sourcing will be more diverse (at least, this is true of the list I've compiled so far). If there's any other content you think has been overlooked in past introductory materials, I'd be grateful to hear about it.

comment by mschons · 2019-10-13T13:06:18.090Z · score: 2 (2 votes)

Are there any plans to translate the handbook into, let's say, the 10 most popular EA languages (thinking of Spanish, German, French, Russian, ...)? If not, this should be a major part of Handbook v3.