Is there a good place to find the "what we know so far" of the EA movement?
post by Evan Rysdam
This is a question post.
My question here is a little bit broad, so I'm going to start by introducing myself so that you know what my background is.
I'm familiar with the rationalist movement — I've read about the first third of the sequences and integrated some of the lessons into my life, such as scanning my intuitive judgments for conjunction errors and avoiding making general claims unless at least a few examples spring readily to mind. I have also read the rationalist material on how to use words properly. Specifically, I understand now that even when reality doesn't segment itself cleanly into categories, you can still define words by directing somebody's attention to the similarity cluster you're talking about. From the world of rationality-adjacent material, I have read Nate Soares's Replacing Guilt series.
I'm mostly new to the EA movement. What I know about it can be summarized as "EA is a group of people that use science to figure out which charities are the most cost-effective".
Here's my question: Where can I go to get caught up on what the EA movement has "figured out so far"? Is there something like an EA equivalent of the LessWrong sequences?
Things I might expect to find include:
- Introductions to concepts that are important to the EA movement.
- Insights concerning how we should measure and think about how "effective" a charity is.
- An overview of the world's biggest problems (according to the EA movement) or maybe the problems with the best ratio of marginal improvement to marginal effort.
answer by Pablo (Pablo_Stafforini)
Hi, and welcome!
What I know about it can be summarized as "EA is a group of people that use science to figure out which charities are the most cost-effective".
This summary would describe the "effective giving" movement. EA is not restricted to cost-effective charitable donations, but extends to all ways of doing good. In other words, EA isn't only cause neutral, but also means neutral; it prejudges neither which causes are best nor which means should be pursued to promote those causes.
Where can I go to get caught up on what the EA movement has "figured out so far"? Is there something like an EA equivalent of the LessWrong sequences?
There is no equivalent of the "sequences". A good introduction is Will MacAskill's Doing Good Better (disclaimer: I helped Will with some of the research). Then you may want to take a look at 80,000 Hours' List of the most urgent global issues and follow the links to the relevant problems. In addition, at the end of this comment I list a bunch of posts that I believe exemplify some of the best writing of the EA blogosphere. Of course, this is just my own opinion, and others may question some of the inclusions or omissions. [Edit: You may also want to check out the EA Handbook. I didn't mention it initially because I'm only familiar with the 1st edition, and the current version has been substantially revised.]
Introductions to concepts that are important to the EA movement.
See this list of concepts put together by the Centre for Effective Altruism and this other list by Peter McIntyre. [Edit: See also 80,000 Hours' key ideas, which I hadn't noticed until both Kevin and Soren mentioned it.]
Insights concerning how we should measure and think about how "effective" a charity is.
There's a lot written on this. Perhaps see 80,000 Hours' How to compare different global problems in terms of impact. Note, again, that this is not restricted to charities, but is about problems/causes.
An overview of the world's biggest problems (according to the EA movement) or maybe the problems with the best ratio of marginal improvement to marginal effort.
A while ago I compiled a master list of all existing lists of important problems; you can find it here.
Some recommended blog posts
Scott Alexander Ethics offsets
Scott Alexander Nobody is perfect, everything is commensurable
Scott Alexander No time like the present for AI safety work
Nick Beckstead A proposed adjustment to the astronomical waste argument
Nick Bostrom 3 ways to advance science
Paul Christiano An estimate of the expected influence of becoming a politician
Paul Christiano Astronomical waste
Paul Christiano Hyperbolic growth
Paul Christiano Influencing the far future
Paul Christiano Neglectedness and impact
Paul Christiano On redistribution
Paul Christiano Replaceability
Paul Christiano The best reason to give later
Paul Christiano The efficiency of modern philanthropy
Paul Christiano Three impacts of machine intelligence
Owen Cotton-Barratt How valuable is movement growth?
Holly Elmore The remembering self needs to get real about the experiencing self
Holly Elmore Humility
Ben Garfinkel How sure are we about this AI stuff?
Katja Grace Cause Prioritization Research
Katja Grace Estimation Is the Best We Have
Robin Hanson Marginal charity
Robin Hanson Parable of the multiplier hole
Holden Karnofsky Hits-Based Giving
Holden Karnofsky Passive vs. rational vs. quantified
Holden Karnofsky Sequence thinking vs. cluster thinking
Holden Karnofsky Your Dollar Goes Further Overseas
Holden Karnofsky Worldview diversification
Jeff Kaufman Altruism isn’t about sacrifice
Jeff Kaufman The Unintuitive Power Laws of Giving
Greg Lewis Beware Surprising and Suspicious Convergence
Will MacAskill Are we living at the most influential time in history?
Richard Ngo Disentangling arguments for the importance of AI safety
Toby Ord The Moral Imperative Towards Cost-Effectiveness
Carl Shulman Are pain and pleasure equally energy efficient?
Carl Shulman How hard is it to become Prime Minister of the United Kingdom?
Carl Shulman Flow-through effects of saving a life through the ages on life-years lived
Carl Shulman & Nick Beckstead A Long-run Perspective on Strategic Cause Selection and Philanthropy
Jonah Sinick Many Weak Arguments vs. One Relatively Strong Argument
Scott Siskind Dead children currency
Scott Siskind Efficient charity
Brian Tomasik Charity Cost Effectiveness in an Uncertain World
Brian Tomasik Risks of Astronomical Future Suffering
Julia Wise Cheerfully
↑ comment by Evan Rysdam · 2019-09-29
Holy crap, this is even more than I'd dared to hope for! I'm particularly excited to see your list of lists of important problems, both because I was dreading trying to figure out which problems were important myself and because the existence of all those lists is a good sign about the number of altruistic people in the world.
I'll read through these posts over the next couple of days/weeks. Thank you so much!