What posts are you planning on writing?

post by vaidehi_agarwalla, JamesFaville · 2019-07-24T05:12:23.069Z · EA · GW · 99 comments

This is a question post.

James Faville and I think it would be valuable for people to get feedback on posts they are planning to write, and in particular to get a sense of what others would be most excited to read.

We think this will accomplish a few things:

1. Encourage people to publish the posts

2. Help them prioritize between post ideas based on community feedback

3. Direct them to useful readings/resources

4. (For everyone) Get a sense of what the community is working on

Edit: If you'd like community feedback on a post, there is an EA Editing and Review Facebook group.


answer by Aaron Gertler (aarongertler) · 2019-07-24T06:23:51.261Z · EA(p) · GW(p)

"Examples of good EA hiring practices":

A list of good things I've seen various EA orgs do in their hiring processes (in the process of applying to at least seven of them). Meant as inspiration for other organizations; I'd hope that it would get lots of additional material from commenters who have also applied for EA jobs.

answer by Aaron Gertler (aarongertler) · 2019-07-24T06:26:01.892Z · EA(p) · GW(p)

"The EA Doldrums: Drifting for no good reason"

A piece exploring why it took me so long to go from "leader of moderately successful student group" to "actually applying for jobs in EA", and speculating that there may be a lot of other people who aren't aware of how qualified they actually are for direct work (with reference to at least one more anecdotal example of someone who was in the "doldrums" for a while). Includes thoughts on what kinds of prompting might actually get people in these positions to take EA jobs seriously.

comment by saulius · 2019-07-25T10:50:43.902Z · EA(p) · GW(p)

I feel I should note that there is an opposite problem happening as well. Robert Wiblin once wrote:

It's a problem for 80,000 Hours that people range from wildly overconfident in themselves to wildly under-confident in themselves. The extent of people's inaccurate self-assessments has surprised me and might surprise you too.

As a result, almost anything we say to help people figure out whether they can plausibly pursue a given career path will still lead to some combination of confident but unsuitable people pushing ahead, and under-confident but suitable people not even bothering to try. Both of these are significant costs.

The ideal is to give objective measures like test scores, but i) many roles have no such clear entry criteria, ii) even those that do usually also require some softer skills that are harder to measure, iii) most people won't have done the test, so we're back to people's guesses about how well they would do, and iv) some people have such strong positive and negative convictions about themselves that even this wouldn't help.

Anyway, the bottom line is that if you could all go and achieve perfect self-knowledge it would make my job slightly easier, thank you.

Replies from: aarongertler, Stefan_Schubert
comment by Aaron Gertler (aarongertler) · 2019-07-26T05:00:38.620Z · EA(p) · GW(p)

There are certainly people on both ends of the (confidence / ability) spectrum. I suspect that "skilled people deciding not to try entering EA work" is a bigger problem than "people trying to push ahead when they shouldn't".


  • From an individual's perspective, "wasting time trying to enter a field" doesn't seem much worse than "missing your chance to enter a field where you'd have had a much higher impact than you did otherwise".
  • From an org's perspective, it's much more costly to miss out on a great employee than to say "no" to one more person.

But there are a lot of other ways you could look at the issue, and this is just my first impression.

comment by Stefan_Schubert · 2019-07-26T13:39:00.502Z · EA(p) · GW(p)

Generally, I would expect more people to overestimate themselves (illusory superiority) than underestimate themselves. I also expect that there is a social desirability bias at play here: it's more socially acceptable to point out that people underestimate themselves, than that they overestimate themselves.

comment by tommycrow · 2020-02-06T23:13:31.866Z · EA(p) · GW(p)

Did you ever write this? I'd love to read it.

Unsolicited advice-seeking (respond to all, some, or none, as your schedule and interests permit): Is being the "leader of a moderately successful student group" in itself a useful qualification for getting EA jobs? And if so, where do you find openings where it's relevant? (I'm the leader of a moderately successful student group! :D) I just finished a bachelor's in economics, and my very preliminary search of EA-adjacent job postings has turned up a lot of opportunities for grad students, PhDs, or programmers, of which I am none. (Fwiw I might actually be overestimating my qualifications, given that I can't code and my only significant paid job experience is tutoring.)

Replies from: Linch, aarongertler
comment by Linch · 2020-02-07T03:45:33.752Z · EA(p) · GW(p)

FWIW, I think tutoring EAs can be a valuable intervention, though maybe won't ever be big enough for an org (or possibly even a single person) to work on this full-time.

Replies from: jpaddison
comment by JP Addison (jpaddison) · 2020-02-07T19:08:57.284Z · EA(p) · GW(p)

Now on a massive tangent, but maybe you could offer to subsidize people buying tutoring from Wyzant?

comment by Aaron Gertler (aarongertler) · 2020-02-07T10:13:35.331Z · EA(p) · GW(p)

Didn't write it, but have two-thirds of a draft lying around to finish someday.

Leading a group is a good signal, but for most jobs, I think other qualifications will also be important (though these could include "having a strong application and doing well on work tests"). If you're trying to do something that makes use of your econ knowledge (rather than your ops/organizing ability or general research skills), competing with PhDs will be tough.

I'm an unusual case, because I went to a one-off retreat for people interested in ops work at a time lots of orgs were hiring at once -- it was a bit like a "job fair". Had I not gone there, I'd have just kept checking the 80K job board, the "Effective Altruism Job Postings" Facebook group, and the websites of a few orgs I liked (if I'd seen that their jobs weren't being added to the board).

answer by vaidehi_agarwalla · 2019-07-23T00:42:36.484Z · EA(p) · GW(p)

(in no particular order)

1. The application of social movement theory to EA group building

a. The tensions between a member-organising movement (grassroots) and a centrally organised (top down) movement (early draft)

b. Historical case studies of movement building to learn from (brainstorming - environmental movement)

2. Ideas to improve the presence of EA in developing countries and non-EA Hubs (editing stage)

3. Climate Change and EA

a. A research agenda for EA and climate change (early draft)

b. How to make room for climate change research in the EA movement (editing stage)

4. Career Change Resources in the EA Community Research project (research stage)

comment by casebash · 2019-07-24T08:59:07.339Z · EA(p) · GW(p)

Wow, they all sound so fascinating!

comment by Aaron Gertler (aarongertler) · 2020-03-24T06:07:32.589Z · EA(p) · GW(p)

Checking back on this thread now that everyone's spending more time cooped up inside :-/

Have you made progress on any of these ideas? I'd be happy to help [EA · GW]!

Replies from: vaidehi_agarwalla
comment by vaidehi_agarwalla · 2020-03-24T11:35:16.434Z · EA(p) · GW(p)

Thanks for checking Aaron! I've been meaning to update this thread.

1a) I came very close to publishing this in November, but realised it needed a lot more work and ended up splitting it into 3 posts to make it more readable. I've been prioritising other projects, but aim to publish by April 2020. 1b) I have a bunch of interesting papers collected but haven't made progress yet. Will likely start after 1a).

  2) I wrote and never published this because:
  • I think it was too generalized and overly simplistic
  • I think some of the things I wrote were likely wrong/inaccurate
  • I felt the most effective way to help develop an EA presence was assisting existing projects [EA · GW] and direct work.
  • Why writing the post was still valuable:
    • Helped me clarify my own theories of movement building
    • Ended up writing a few other posts to explain some of my assumptions
    • I've shared it with others trying to answer these questions

3a) This became a much more ambitious and comprehensive volunteer project, but it also means that progress has been slow and incremental. I plan on writing a post about how the project failed and lessons learnt (but I'm experimenting with some new ways to make progress on this and want to see the results first).

b) This post is written, but I didn't see the value of posting another call for climate change research on the forum since, as with 2), I updated towards doing direct work to make progress in this space. (I'd be curious to hear if you think there's still value in posting it.)

We now have an Effective Environmentalism directory and have started weekly calls on different EE related topics on facebook. Would be curious to hear your thoughts on this.

  • I created an (almost) comprehensive Effective Environmentalism Resources page. Some of us are now working on a more user-friendly introductory resource for non-EAs.
Replies from: MichaelA
comment by MichaelA · 2020-04-06T07:25:44.386Z · EA(p) · GW(p)

My two cents: I can understand why you'd want to not post 2, if you believe it had those issues. But it seems like, if 3b is already written, it might as well be posted, unless you think it's fundamentally mistaken. If you just think that EA climate change research is a less valuable approach than you used to think, then maybe you could slap some extra caveats and updates at the top. It could still potentially serve as some useful thoughts for people who do pursue that approach, or serve as an explanation of why you think that approach isn't that valuable, or that sort of thing.

I'm not personally very focused on climate change, and don't think I'd personally read the post. But I have a general sense that posts that are just "maybe not very novel or useful" still might as well be posted, once the effort has gone into writing them. It seems like they may at least be appreciated in some way by some niche audience, or suggest to others that that topic isn't worth them writing about. And worst case scenario is usually just they don't get read much, or slightly waste a few people's time.

This doesn't apply to posts that are so incorrect they'd leave people with worse beliefs, or posts that pose information hazards [LW · GW], but it didn't sound like you thought those things were true of 3b?

answer by Derek · 2019-07-24T19:48:57.457Z · EA(p) · GW(p)

"Health and happiness: some open research topics"

This has been 90% complete for >6 months but finishing it has never seemed the top priority. The draft summary is below, and I can share the drafts with interested people, e.g. those looking for a thesis topic.


While studying health economics and working on the 2019 Global Happiness and Wellbeing Policy Report, I accumulated a list of research gaps within these fields. Most are related to the use of subjective wellbeing (SWB) as the measure of utility in the evaluation of health interventions and the quantification of the burden of disease, but many are relevant to cause prioritisation more generally.

This series of posts outlines some of these topics, and discusses ways they could be tackled. Some of them could potentially be addressed by non-profits, but the majority are probably a better fit for academia. In particular, many would be suitable for undergraduate or master's theses in health economics, public health, psychology and maybe straight economics – and some could easily fill up an entire PhD, or even constitute a new research programme.

The topics are divided into three broad themes, each of which receives its own post.

Part 1: Theory

The first part focuses on three fundamental issues that must be addressed before the quality-adjusted life-year (QALY) and the disability-adjusted life-year (DALY) can be derived from SWB measures, which would effectively create a wellbeing-adjusted life-year (WELBY).

Topic 1: Reweighting the QALY and DALY using SWB

Topic 2: Anchoring SWB measures to the QALY/DALY scale

Topic 3: Valuing states 'worse than dead’
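Topics 1–3 can be illustrated together with a minimal sketch. Assuming, purely for illustration, a 0–10 life-satisfaction scale, a linear anchoring, and a "neutral point" placed at 2/10 (all contested modelling choices, not claims from this series), a WELBY-style weight might look like:

```python
def swb_to_welby(score, neutral_point=2.0, max_score=10.0):
    """Linearly rescale a 0-10 life-satisfaction score to a QALY/DALY-style
    scale: 1.0 at the top of the scale, 0.0 at the neutral point ('as good
    as dead'). Scores below the neutral point come out negative, i.e.
    states 'worse than dead' (Topic 3)."""
    return (score - neutral_point) / (max_score - neutral_point)
```

Under these assumptions, a respondent at 0/10 gets a weight of -0.25, a state worse than dead. Where to place the neutral point, and whether the rescaling should be linear at all, are exactly the open questions raised in Topics 2 and 3.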

Part 2: Application

Assuming the technical and theoretical hurdles can be overcome, this section considers four potential applications of a WELBY-style metric.

Topic 4: Re-estimating the global burden of disease based on SWB

Topic 5: Re-estimating disease control priorities based on SWB

Topic 6: Estimating SWB-based cost-effectiveness thresholds

Topic 7: Comparing human and animal wellbeing

Parts 1 and 2 include a brief assessment of each topic in terms of importance, tractability and neglectedness. I'm pretty sceptical of the ITN framework, especially as applied to solutions rather than problems, and I haven't tried to give numerical scores to each criterion, but I found it useful for highlighting caveats. Overall, I'm fairly confident that these topics are neglected, but I'm not making any great claims about their tractability, importance or overall priority relative to other areas of global health/development, let alone compared to issues in other cause areas. It would take much more time than I have at the moment to make that kind of judgement.

Part 3: Challenges

The final section highlights some additional questions that require answering before the case for a wellbeing approach can be considered proven. These are not discussed in as much detail and no ITN assessment is provided (the Roman numerals reinforce their distinction from the main topics addressed in Parts 1 and 2).

(i) Don’t QALYs and DALYs have to be derived from preferences?

(ii) In any case, shouldn’t we focus on improving preference-based methods?

(iii) Should the priority be reforming the QALY rather than the DALY?

(iv) Are answers to SWB questions really interpersonally comparable?

(v) Which SWB self-report measure is best?

(vi) Whose wellbeing is actually measured by self-reported SWB scales?

(vii) Whose wellbeing should be measured?

(viii) How feasible is it to obtain the required data?

(ix) Are more objective measures of SWB viable yet?

Part 3 also concludes the series by considering the general pros and cons of working on outcome metrics.

comment by Aaron Gertler (aarongertler) · 2020-03-24T06:10:33.918Z · EA(p) · GW(p)

I know it's been a while since you posted this, but if you still hope to post it someday, and if there's anything I can do to help with the last 10%, please let me know [EA · GW]!

(With everyone cooped up inside, I figured this might be a good chance for folks to get to the writing projects they thought they'd never have time for, though of course not everyone has become less busy as a result of the pandemic.)

Replies from: Derek
comment by Derek · 2020-03-26T18:39:32.875Z · EA(p) · GW(p)

Hah! I was working on them before getting sidelined with covid stuff.

I can send you the drafts if you send me a PM. The content is >80% done (I've decided to add more, so the % complete has dropped) but they need reorganising into ~10 manageable posts rather than 3 massive ones.

comment by bfinn · 2020-03-26T15:49:53.737Z · EA(p) · GW(p)

These are important topics IMO.

answer by Lukas_Gloor · 2019-07-24T12:00:06.605Z · EA(p) · GW(p)

A sequence on moral anti-realism and its implications

I published the first post [EA · GW], "What is moral realism? [EA · GW]", last year and have about five half-finished drafts stored somewhere, but then I got sidetracked massively. Tentative titles were:

1. What is moral realism? [published]

2. Against irreducible normativity

3. Is there a wager for moral realism?

4. Metaethical fanaticism (dialogue about the strange implications of an infinite "moral realism wager")

5. [Untitled – something about "People aren't born consequentialists; people live their lives in different modes; vocations are not just discovered but also chosen"]

6. Introspection-based moral realism

7. Why I'm a moral anti-realist (sequence summary)

8. Anti-realism is not nihilistic

9. Anti-realism: What changes?

  • Less bullet biting?
  • Treating peer disagreements about values differently
  • Moral uncertainty vs. moral underdetermination

I might find some time later this year to finish more of the posts, but I'm not sure I still want to do the entire sequence. I considered just skipping to posts 7. - 9. because that used to be my original plan, but then the project somehow took on a much larger scale. I'd be curious to what degree there's interest in the following topics:

(a) What are the arguments against (various angles of) moral realism?

(b) What is it that people are even doing when they do moral philosophy?

(c) What do anti-realists think they're doing; why do they care?

(d) Implications for moral reasoning if anti-realism is correct

comment by Aaron Gertler (aarongertler) · 2020-03-24T06:14:32.403Z · EA(p) · GW(p)

What's the status of this project? Even if you no longer plan to publish most of these posts, I suspect that some people would be interested in seeing even very rough versions of the material, and I'd be happy to look over [EA · GW] anything you weren't sure about posting!

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2020-03-24T14:23:17.939Z · EA(p) · GW(p)

I started working on them in December. The virus infected my attention, but I'm back working on the posts now. I have two new ones fully finished. I will publish them once I have four new ones. (If anyone is particularly curious about the topic and would like to give feedback on drafts, feel free to get in touch!)

Replies from: MichaelA
comment by MichaelA · 2020-04-06T07:44:17.075Z · EA(p) · GW(p)

Great to hear you're still planning to write these!

I currently assign very high credence to anti-realism, but:

  • I don't really know what I mean by that
  • I (at least believe I) basically act as if moral realism is true, due to:
    • "wager"-style reasoning (but I don't know if it makes sense to do that)
    • not feeling I get why to care if anti-realism is "correct"
  • I don't really know if I'd actually act differently if I decided to "act as if an antirealist"

So all the tentative titles and four topics you listed sound very interesting to me, and like things I've wanted to write about but doubt I'll get around to (partly because I lack the relevant background).

answer by saulius · 2019-07-25T14:59:45.185Z · EA(p) · GW(p)

I included links to my working drafts to help understand the projects better, but please keep in mind that they contain statements that I will change my mind on after further research or contemplation. Also, they are not very tidy.

Year-by-year analysis of corporate campaigns (~50% done, draft)

This is basically an appendix to my cost-effectiveness estimate of corporate cage-free and broiler campaigns [EA · GW]. Will contain graphs that will show how many animals were affected by campaigns each year, how cost-effectiveness has changed, and why we shouldn’t overreact to the analysis.

Numbers of animals slaughtered (~40% done, draft)

A collection of estimates of how many animals are kept in captivity for various purposes. E.g., meat, fur, wool, experiments, zoos, fish stocking, silk, etc.

Numbers of wild animals affected by humans in various ways (~30% done, draft)

Another collection of estimates. E.g. how many wild fish we catch, how many animals are killed by domestic cats, how many birds die after colliding with man-made objects, etc.

Surveys about veg*ism in the U.S. (not started)

I previously examined surveys about veganism and vegetarianism in the U.S. here. Results were conflicting. Now I want to conduct my own surveys to try to figure out what’s happening. This SSC post provides a hypothesis about why 2-6% of people claim to be vegetarians in surveys but then >60% of them report eating meat on at least one of the two days for which they were asked to fill in a dietary recall survey. I want to test it by seeing how many people will claim that they eat a breatharian diet (eat no solids at all). I think that ~3% of people will claim that they do because they answer questions without reading, or purposefully answer incorrectly, or misunderstand the question. This would explain why surveys that simply ask people “Are you a vegan?” find such unreasonably high percentages. I also want to test other survey designs in a similar way and then make a better survey on the subject.
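The careless-responder hypothesis can be illustrated with a minimal simulation sketch (the 1% true vegetarian rate and 3% careless rate below are illustrative assumptions, not survey results):

```python
import random

random.seed(0)

N = 100_000
TRUE_VEG_RATE = 0.01   # assumed true prevalence of vegetarians
CARELESS_RATE = 0.03   # assumed share who answer "yes" regardless of the question

observed_yes = 0            # respondents who self-report as vegetarian
meat_eaters_among_yes = 0   # self-reported vegetarians who actually eat meat
for _ in range(N):
    truly_veg = random.random() < TRUE_VEG_RATE
    careless = random.random() < CARELESS_RATE
    if truly_veg or careless:
        observed_yes += 1
        if not truly_veg:
            meat_eaters_among_yes += 1

print(f"Self-reported vegetarian: {observed_yes / N:.1%}")
print(f"Of those, actually eat meat: {meat_eaters_among_yes / observed_yes:.1%}")
```

Under these assumptions the simulation reproduces the pattern in question: a few percent self-report as vegetarian, and well over half of them eat meat. A breatharian item works as a direct probe of the careless rate, since the true prevalence of that diet is effectively zero.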

Trends of vegetarianism and veganism in the UK (not started)

Similar to what I wrote for the U.S. (link) but for the UK. I want to see if there will be similar patterns.

comment by saulius · 2019-07-25T15:02:55.264Z · EA(p) · GW(p)

Relatedly, I put some of my posts that I decided were not good enough for the EA Forum on a WordPress site here (I’ve never advertised this website before).

Replies from: aarongertler, anonymous_ea
comment by Aaron Gertler (aarongertler) · 2019-07-26T04:57:05.304Z · EA(p) · GW(p)

I strongly recommend you add more of these posts to the Forum -- in particular, I really like the post on ways that cost-effectiveness estimates can be misleading.

Replies from: saulius, Milan_Griffes
comment by saulius · 2019-07-26T15:40:16.972Z · EA(p) · GW(p)

Thanks. I think I'm afraid to publish posts if I'm unsure they are good/useful. But I will consider publishing some of these, especially the one on ways that cost-effectiveness estimates can be misleading.

Replies from: MichaelA
comment by MichaelA · 2020-04-06T07:48:46.231Z · EA(p) · GW(p)

[8 months on] ...well, that went very well, haha. I believe it's now got the 8th most karma on the forum.

Has this updated you to being more willing to post on the forum?

Also, for ones that you're still not sure are worth posting, have you considered posting them as shortforms?

Replies from: saulius
comment by saulius · 2020-04-06T18:34:31.670Z · EA(p) · GW(p)

Yes, it made me a bit more willing to post here. But I put another week of work into that post before publishing, and I spent 2 more days on the post I published a couple of days ago, which is also from my blog. I'm sure some other posts from that blog are worth publishing after I put more work into them, but I'm unsure if this is what I should be spending my time on. E.g., I don't want to post Cost-effectiveness of trap-neuter-return programs for cats on the EA Forum without doing more to make sure it's correct (e.g. reading recent related research by other EAs). I'm unsure if I want to post Should you donate to a fund-raising meta-charity? without looking into the current situation of these charities (e.g. whether there is room for more funding) and just generally thinking more about the topic. I guess it would be fine to post it with a disclaimer, but I would be afraid of giving people the wrong advice and also hurting my credibility. And I don't think posting it as a shortform would make much impact, but I'd still care about saying the right things, so I don't want to bother with that.

comment by Milan_Griffes · 2019-07-26T06:46:46.727Z · EA(p) · GW(p)

cf. The Optimizer's Curse & Wrong-Way Reductions [EA · GW]

Replies from: aarongertler
comment by Aaron Gertler (aarongertler) · 2019-07-26T07:03:37.131Z · EA(p) · GW(p)

I found saulius' post useful in different ways than Chris Smith's. I especially like that it covers mistakes that seem more "basic" and easier to avoid/correct for. But "The Optimizer's Curse" is also worth looking at.

comment by anonymous_ea · 2019-07-25T15:19:47.828Z · EA(p) · GW(p)

I just skimmed some of the recent posts on your website and liked them! What makes you think that they're not good enough to be posted here? They definitely seem less comprehensive than some of your (very comprehensive) posts here, but still more than good enough to post here.

comment by Aaron Gertler (aarongertler) · 2020-03-24T06:12:14.411Z · EA(p) · GW(p)

It's been cool to see some of these go up on the Forum since you posted this!

I'd be interested to see the veg*ism survey if you still think you might work on it at some point. And of course, I'm happy to look over drafts [EA · GW] of anything you write if you want feedback.

answer by anonymous_ea · 2019-07-24T21:40:05.486Z · EA(p) · GW(p)

I have two drafts saved with only a few links or a couple of paragraphs written:

1. How do we respond to criticism of EA on the forum?

Several commentators on the forum have recently casually expressed theories of how effective altruists respond to criticism of EA on the forum. Some have expressed skepticism of the idea that EAs can respond positively to criticism of EA. I aim to look at several notable comments and posts on the forum over at least the past several months to see how criticism is practically received on the forum.

My tentative theory, without having properly researched this, is that EAs are generally too eager to read and upvote any nicely written criticism by an intelligent person that sounds non-threatening enough. Criticism of this sort, while often praised, is often not deeply engaged with. On the rare occasion that criticism seems threatening enough to EA, there's deeper engagement with the actual arguments, rather than responses mostly trying to signal-boost the criticism. There's also one instance of a threatening criticism on a particularly political topic that attracted significantly lower-quality comments in my opinion.

The posts I've casually collected so far are:

Benjamin Hoffman's Drowning Children are Rare [EA · GW]

Jeff Kaufman's There's Lots More To Do [EA · GW]

beth's Three Biases That Made Me Believe in AI Risk [EA · GW]

Fods12's Effective Altruism is an Ideology, not (just) a Question [EA · GW]

EAs for Inclusion's Making discussions in EA groups inclusive [EA · GW]

Jessica Taylor's The AI Timelines Scam [EA · GW] (maybe?)

Jessica Taylor's The Act of Charity [EA · GW]

Benjamin Hoffman's Effective Altruism is Self-recommending [EA · GW]

Alexey Guzey's William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better" [EA · GW]

Chris Smith's The Optimizer's Curse & Wrong-Way Reductions [EA · GW]

Milan Griffes' Cash prizes for the best arguments against psychedelics being an EA cause area [EA · GW] (maybe?)

2. Will I be accepted in EA if I'm not prodigiously successful professionally?

The EA community contains a tremendous number of extremely talented and accomplished people. I worry that unless I also achieve a lot of professional success, other EAs won't particularly respect me, like me, or particularly want to interact with me. While some of this is definitely related to my own issues around social acceptance, I think there's a decent chance that many other people feel this way too. My aim is to explore my feelings and what it is about EA that makes me feel this way, and to encourage others to express how they feel about their place in the community as well. At a meta level, I hope to at least explore how a different, more feelings-focused article might fit on this forum. I don't want to give any specific solutions, imply that this is a problem of any particular magnitude, or even imply that this is necessarily a problem on net for EA.

comment by vaidehi_agarwalla · 2019-07-25T01:28:35.863Z · EA(p) · GW(p)

Both of these posts sound great! I would especially like to see the second one, because there is a lot of outward emphasis on being successful and doing things that signal success (like attending Ivy Leagues).

Replies from: anonymous_ea, Milan_Griffes
comment by anonymous_ea · 2019-07-25T15:05:31.098Z · EA(p) · GW(p)

Thank you! Do you happen to have any advice or feedback on how I'm planning to write it? I'm tempted to make it fairly short and open it up for other people to comment on with their own experiences, but I'm worried that a short, feelings-focused post won't get a lot of engagement. Trying to make it more comprehensive by e.g. compiling some of the ways the EA community ends up signaling that it really wants highly talented people (and almost never signals the opposite) might make it more engaging, but would also decrease the likelihood that I'll publish this anytime soon.

I could also pose it slightly differently as a question post on how people feel about their place in the community.

Replies from: vaidehi_agarwalla, Khorton
comment by vaidehi_agarwalla · 2019-07-26T11:36:38.855Z · EA(p) · GW(p)

I agree with Khorton - it really depends on your goal with the post. If you want to offer support to others who feel the same way, a feelings post is good (including specific examples would be great, and you could point at broader issues without needing to explicitly research them).

If you want to make a broader point, you could even just make those thoughts in a question and encourage people to share their experiences (I would love to see this!).

Then it could be an informal resource for others feeling that way, and might give you some ideas if you (or someone else) want to write the comprehensive version of the post.

comment by Khorton · 2019-07-25T16:18:06.485Z · EA(p) · GW(p)

You could ask it as a question if your response would be ~3 paragraphs. I think that would work well, but I'm not sure if that would give you enough space to express your feelings.

comment by Milan_Griffes · 2019-07-25T06:25:09.689Z · EA(p) · GW(p)

Thanks for collating these "criticism of EA" posts.

…is that EAs are generally too eager to read and upvote any nicely written criticism by an intelligent person that sounds non-threatening enough.

Reminds me a bit of sealioning, though I think what you're pointing to is not exactly that.

Replies from: anonymous_ea, evelynciara
comment by anonymous_ea · 2019-07-25T14:58:04.655Z · EA(p) · GW(p)

Interesting. I hadn't heard of sealioning before. You're right about the thing I'm pointing to being somewhat different. I think EAs want to encourage good criticisms of EA and want to be the kind of people and movement where criticisms are received positively. I think this often leads to EAs being overly generous with criticism posts on an object level, although I don't know whether this is positive or negative on aggregate.

comment by evelynciara · 2020-01-16T17:01:44.349Z · EA(p) · GW(p)

Concern trolling?

answer by poliboni · 2019-07-24T20:00:15.130Z · EA(p) · GW(p)

1. "Survey of arguments for focusing on suffering reduction"
-I'm particularly interested in arguments from and for the nonexistence of positive mental states.

2. "The case for studying abroad at Oxford"
-Argue, based on personal experience, that students across the world who are interested in EA should seriously consider studying abroad at Oxford and provide advice on how to make the most of that experience.

3. "The case for recruiting for AI safety research in Brazil"
-Lay out the reasons for thinking Brazil is low-hanging fruit for recruiting in AI safety research

comment by SiebeRozendal · 2019-07-27T09:29:09.010Z · EA(p) · GW(p)

Re: 2. I hope you're not going to ignore that it is really hard to get into Oxford? There's also a general tendency in EA to glorify Ivy League education, which makes a lot of people feel inadequate/excluded.

comment by Aaron Gertler (aarongertler) · 2019-08-13T04:47:19.197Z · EA(p) · GW(p)

I'm especially curious about (2) if you include "spending time in the city of Oxford" and not just "getting into Oxford" (which, as noted below, is hard). I've been looking for posts about what it's like to be part of EA culture in the cities where it is most present (I now live in one of those, but I'm guessing that Oxford differs from Berkeley in many ways).

comment by vaidehi_agarwalla · 2019-07-24T20:16:12.805Z · EA(p) · GW(p)

I would be really interested in hearing the case for 3)!

answer by Aaron Gertler (aarongertler) · 2019-07-24T06:22:22.486Z · EA(p) · GW(p)

"List of public donation logs":

A list of people who have made their donations public. Meant as inspiration for people who might consider doing the same, or information for people who want more perspective on causes they might consider supporting.

comment by saulius · 2019-07-25T11:18:31.709Z · EA(p) · GW(p)

Is it a list of blog posts that explain why people made the donations they made? Or just a list of donors and their donations similar to this?

Replies from: aarongertler
comment by Aaron Gertler (aarongertler) · 2019-07-26T04:55:34.324Z · EA(p) · GW(p)

Closer to Vipul's list. I've spoken to him already as I drafted the idea, and I think it would be helpful to have a more focused list of specifically people who've created their own web pages/spreadsheets to share the information.

My goal is to use the post to show people that it isn't totally unusual to make these things public, and to nudge people closer to making donations public if they were interested but worried about seeming "weird". Part of that is showing others doing it, and part of it is showing different strategies for making these disclosures.

Replies from: MichaelA
comment by MichaelA · 2020-04-06T08:03:41.657Z · EA(p) · GW(p)

1) Data point: Until reading these comments just now, I’d seen that some people had spreadsheets/webpages like these, and I think I vaguely felt that that was good, but I also think I simply hadn’t even considered for a second the idea of doing so myself. (I'd considered writing blogposts about specific or annual planned or prior donations, where I also discussed some of my rationale, but hadn't considered a comprehensive, public spreadsheet/webpage.)

I'm now very likely to do this, as a result of these comments.

2) Do you still plan to make a post collecting these lists?

3) Do you think it would be possible and/or good for there to just be a button on the EA Pledge dashboard for people to opt into making their reported donations from there publicly visible? This may increase the number of people who do this, as it'd be easier and might seem a bit closer to "sort-of a default" than "strange thing these 3 people somewhere have done". I guess one downside would be that, if that button was displayed prominently, it could make the dashboard as a whole seem "weird".

4) Somewhat separately, I like Claire Zabel's statement [EA · GW] that:

EAs often post records of where they donated, sometimes with their reasoning attached. That's great, but it would be better if people posted before they donated, were explicit about whether they wanted feedback, and told others if their writing persuaded us to donate differently. In general, EAs should be vastly more interactive about their donation decision-making process.

(I tried to contribute to this norm with this recent post [EA · GW].)

Do you have any thoughts on how that spreadsheet/webpage approach (or a post about it) could also contribute to or tie in with that norm?

(A fair response to 3 and 4 could be "Hey Michael, why don't you spend at least 2 minutes of your own damn time thinking about it?" :D)

Replies from: aarongertler
comment by Aaron Gertler (aarongertler) · 2020-04-06T08:23:46.398Z · EA(p) · GW(p)
  1. Wonderful!
  2. Yes, I do plan to do this at some point -- in fact, I've added it as something to do this week thanks to your comment, thanks for the push.
  3. That's an interesting idea. I'll pass it along to CEA's tech team, though I'd guess it wouldn't be something that would happen soon (no guaranteed demand, unlikely to increase people's use of the platform, some risk that people accidentally expose sensitive information).
  4. I'm a fan of Claire's suggestion. Not likely to do it myself, because my reasons for donating are pretty quirky and difficult to explain, but I've liked all the posts of this kind that I've seen from others on the Forum.
answer by Aaron Gertler (aarongertler) · 2019-07-24T06:21:22.793Z · EA(p) · GW(p)

I'm going to list my answers separately for easier upvoting/commentary.

"Effective Altruism 2050: The Grand Story", which explores how people might think about EA in the future, and especially how "credit" might be allocated for whatever we've accomplished.

The thesis of the piece is that most of our current concerns about which kinds of work are high-status or not may fade away over time, to be replaced by a general sense that everyone who did EA-adjacent things was part of the same "story", trying to do their best under conditions of extreme uncertainty.

comment by SiebeRozendal · 2019-07-27T09:24:49.370Z · EA(p) · GW(p)

Hmm.. I would like to see this with caveats or something: EA is far from being sure of success and there are a number of failure modes I can imagine. The risk of this article might be that it would paint an overly optimistic picture of EA. Although I would love to see the description of a best-case scenario!

Replies from: aarongertler
comment by Aaron Gertler (aarongertler) · 2019-07-28T13:24:47.704Z · EA(p) · GW(p)

It seems more likely than not (at least to me) that EA will make only a small dent in history, if it is remembered at all. The post explores what might happen in the timelines where we succeed.

Replies from: SiebeRozendal
comment by SiebeRozendal · 2019-07-30T16:55:43.167Z · EA(p) · GW(p)

Alright that seems cool! I look forward to it. I think plenty of people have dreamed of a best case scenario, but it's definitely good to write that up :)

comment by vaidehi_agarwalla · 2019-07-24T12:26:03.065Z · EA(p) · GW(p)

I would be really interested in seeing this written up. I have many thoughts related to the idea of getting credit (probably not directly related to your post).

I have been thinking a lot about how much of a role high status plays in influencing people's decisions, and whether this is always a good thing. For example, many interventions are highly uncertain, but ones endorsed by the community come with a sense of security: even if the intervention doesn't pan out, the person has the community's reassurance that they did the best thing. Another cause or intervention might be avoided because it lacks that support.

I also wonder to what extent people take into account being given 'credit' for contributing to a cause or intervention, be it consciously or unconsciously.

Finally, I think the post could also raise some interesting questions about the long-term sustainability of EA and how it is perceived by non-EAs, and suggest that tracking as much EA activity as possible now is important if we need to convince people that we made an impact (to combat arguments like "these positive things would have happened anyway/were inevitable").

(Also, perhaps I should have also separated my answers! For next time)

answer by iwaltruist · 2020-02-11T23:33:50.378Z · EA(p) · GW(p)

"EAs within non-EA charities"

A post to explore the following and put more detailed thought, based on my own professional experience of trying this for a few months or so, into how it could work...

I work in a large charity in the UK and although I think the work we do is important, it doesn't fit into the highly valuable cause areas commonly accepted by the EA community.

Still, there are lots of reasons that someone like me might continue to work in a less effective job. For example:

  • It's a good employer in your area and you need to stay living around there for caring/family reasons
  • You're building up your skills in an early or new career position
  • You've worked there for ages and only recently discovered EA principles

So skipping past the "go work on a more effective cause" answer, what can people who support EA ideas do in a non-EA charity?

I think there might be crossover with the kind of recommendations you might give to someone working in government, especially when you consider how bound up a lot of UK charities are with public work (Alzheimer's Society, Citizens Advice, Church of England, Trussell Trust).

Apart from that I would have thought you could bring over EA principles and play a sort of activist role to make a positive impact when it comes to:

  • prioritising research and product development
  • raising awareness of good impact based decision making within the organisation
  • encouraging a more enlightened view of career development within the organisation
  • sharing and collaborating more generously with the wider social sector
  • in the case of large organisations, doing more to shape the market in terms of what funders aim for when they award grants or commission work

That's all I've got for now but I've actually been able to put some of this into effect, in a fairly modest way, where I work. I wondered if this seems like an interesting topic to explore in more detail?

In particular, assuming that there are people who will stay in a non-EA role but still have some capacity and interest in doing a bit more good by using EA principles, what are the methods/tools/guidelines they can use?

comment by Aaron Gertler (aarongertler) · 2020-03-24T06:34:06.153Z · EA(p) · GW(p)

I'm enthusiastic about seeing EAs do good work in a variety of fields [EA · GW], including those unrelated to standard EA cause areas. I'd be really interested to see you work on this post, and I'd be happy to read over a draft [EA · GW] if you want feedback before you publish.

answer by Milan_Griffes · 2019-07-24T20:26:46.163Z · EA(p) · GW(p)

PSA: the EA Editing and Review facebook group is intended for this use-case. It has 650 members; feedback on posted drafts is generally good.

comment by vaidehi_agarwalla · 2019-07-24T20:42:44.829Z · EA(p) · GW(p)

Thanks! edited the post to include a link to the group.

answer by Aaron Gertler (aarongertler) · 2019-07-24T06:28:02.283Z · EA(p) · GW(p)

"My EA Origin Story":

An attempt to answer the question "why did I become part of the EA movement" in excruciating detail. Would examine every factor I can think of, from the circumstances of my birth to movies I liked as a teenager to the specific set of classes I took in my freshman year of college.

The goal: Get other people to think about what really got them into EA -- not just what happened right before the transition, but all the factors that led to their being ready to accept the ideas. I'd hope to see other people write similar stories (maybe in less detail) after reading mine.

comment by vaidehi_agarwalla · 2019-07-24T15:11:05.792Z · EA(p) · GW(p)

Have you seen this post [EA · GW]? It seems to have done something very similar to what you proposed.

answer by Jess Kinchen Smith · 2019-10-19T16:16:22.335Z · EA(p) · GW(p)

"Possible Edge Cases in Dietary Effects on Animal Welfare"

When I do consume meat, it's 'humanely raised' (grass-fed etc. etc.) or wild-caught. I think the state of the art on the ethics and evidence around these food sources (vs. plausible substitutes) is muddy, and I want to publish my thoughts so someone can help me see things more clearly.

comment by sky · 2019-11-12T04:05:24.334Z · EA(p) · GW(p)

I would personally find this very useful!

answer by Khorton · 2020-02-12T08:45:49.777Z · EA(p) · GW(p)

Thinking of writing a shallow cause profile on lobbying for country-to-country debt relief

comment by Aaron Gertler (aarongertler) · 2020-03-24T06:17:44.501Z · EA(p) · GW(p)

I'd be really interested to see this! It's one of those causes that pops up from time to time in writing by EA-adjacent organizations, but I don't have a sense for what the core numbers even look like (e.g. what debt relief allows countries to accomplish that isn't feasible without debt relief, what the actual cost of relief is to countries that hold debt).

Replies from: Khorton
comment by Khorton · 2020-03-24T08:53:56.827Z · EA(p) · GW(p)

Thanks for commenting! I actually forgot I was meaning to do this... Maybe I'll find some time over the next few weeks!

comment by MichaelA · 2020-04-06T08:34:14.429Z · EA(p) · GW(p)

In case this data point is useful when thinking about what knowledge/views some readers may come to the table with: Pretty much all I currently know about debt relief is some half-remembered arguments from The Dictator's Handbook for why debt relief might be actively bad.

(Not saying these arguments are correct. Also not sure if "country-to-country debt relief" differs in important ways from the type of "debt relief" which that book critiqued.)

answer by Linch · 2020-02-11T01:01:32.722Z · EA(p) · GW(p)

1. Framing issues with the unilateralist's curse.

I'd like to expand this shortform comment [EA(p) · GW(p)] into a more detailed post with slightly better examples, some tentative conclusions, and a clear takeaway for what types of future research would be desirable.

2. A Post on Power Law distributions

Two possible posts here:

A. Power Law Distributions? It's less likely than you think.

a. Basically, lots of EAs argue that the distribution over {charitable organizations, interventions, people, causes} is ~power law.

b. I claim that this is unlikely. The distribution over most things that matter seems to be heavy-tailed, but less extreme than a power law.

c. outline here: https://docs.google.com/document/d/17n27ygtUloGrFGqJyOV0Q-yUdGrK5HQoEI-de8lXTy0/edit

d. Unfortunately understanding this well involves some mathematical machinery and a lot of real-world stats that's been somewhat hard for me to make progress on (happy to hand it off to somebody else!)

B. What to do if we live in a power law world

The alternative post is to argue for why if were to take the power law hypothesis about EA-relevant things seriously, we should change our actions dramatically in key ways. I think it might be helpful to start a conversation about this.
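For readers unfamiliar with the distinction in 2A, here is a toy numerical sketch (all parameters invented for illustration, not taken from any real cause data) of how a true power law differs from a less extreme heavy tail such as a lognormal:

```python
import math

# Toy comparison (parameters invented for illustration): a Pareto (a true
# power law) and a lognormal are both "heavy-tailed", but the lognormal's
# tail thins out much faster, so extreme outliers are far rarer under it.

def pareto_survival(x, alpha=1.5, x_min=1.0):
    """P(X > x) for a Pareto distribution with shape alpha and scale x_min."""
    return (x_min / x) ** alpha

def lognormal_survival(x, mu=0.0, sigma=1.0):
    """P(X > x) for a lognormal, via the complementary error function."""
    z = (math.log(x) - mu) / (sigma * math.sqrt(2))
    return 0.5 * math.erfc(z)

for x in (10, 100, 1000):
    ratio = pareto_survival(x) / lognormal_survival(x)
    print(f"x={x}: Pareto tail is {ratio:,.0f}x heavier than lognormal")
```

The ratio grows without bound as x increases, which is why the choice matters for EA: under a power law, the single best opportunity can dominate everything else combined, whereas under a less extreme heavy tail the best opportunity is only moderately better than the merely very good ones.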

3. Thoughts on South Bay EA

I cofounded and co-organized South Bay EA, and had a pretty comprehensive write-up about what futures we should be planning for. My co-organizers and I are still debating between whether to anonymize and share the write-up to benefit future organizers.

4. EA SF tentative plan

Similarly, I've vaguely been thinking of having a public write-up about plans for EA San Francisco so it's easier to a) get feedback through external criticism and b) find collaborators/potential co-organizers online rather than entirely through my network.

comment by vaidehi_agarwalla · 2020-02-11T08:49:40.229Z · EA(p) · GW(p)

I'd be really excited to see 2A written up! Also 3 and 4 (in that order)

comment by MichaelA · 2020-04-06T09:09:34.219Z · EA(p) · GW(p)

I think I'd be interested in 1. Also, I recently collected [EA(p) · GW(p)] all prior work I'd found that seemed substantially relevant to the unilateralist's curse; unfortunately it wasn't much, and you may have seen it all already, but just thought I'd mention it in case it could help you with that post idea.

(I've also added your shortform comment to that list now.)

answer by ishaan · 2019-07-25T22:33:14.417Z · EA(p) · GW(p)

Here's some stuff which I may consider writing when I have more time. The posts are currently too low on the priorities list to work on, but if anyone thinks one of these is especially interesting or valuable, I might prioritize it higher, or work on it a little when I need a break from my current main project. For the most part I'm unlikely to prioritize writing in the near future though because I suspect my opinions are going to rapidly change on a lot of these topics soon (or my view on their usefulness / importance / relevance).

1) Where Does EA take root? The characteristics of geographic regions which have unusually high numbers of effective altruists, with an eye towards guessing which areas might be fertile places to attempt more growth. (Priority 4/10, mostly because I already have the data due to working on another thing, but I'm not sure to what extent growth is a priority.)

2) Systemic Change - What does it mean in concrete terms? How would you accomplish it within an EA framework? How might you begin attempting to quantify your impact? Zooming out from the impact-analysis side of things a bit to look at the power structures creating the current conditions, and understanding the "replaceability" issues for people who work within the system. (Priority 3/10; may move up the priorities list later because I anticipate more data and relevant experience becoming available soon.)

3) A (as far as I know novel) thought experiment meant to complicate utilitarianism, which has produced some very divergent responses when I pose it conversation so far. The intention is to call into question what exactly it is that we suppose ought to be maximized. (priority 3/10)

4) How to turn philosophical intuitions about "happiness", "suffering", "preference", 'hedons" and other subjective phenomenological experiences into something which can be understood within a science/math framework, at least for the purposes of making moral decisions. (priority 3/10)

5) Applying information in posts (3) and (4) to make practical decisions about some moral "edge cases". Edge cases include things like: non-human life, computer algorithms, babies and fetuses, coma, dementia, severe brain damage and congenital abnormalities. (priority 3/10)

6) How are human moral and epistemic foundations formed? If you understand the "No Universally Compelling Arguments" set of concepts, this post is basically helping people apply that principle in practical terms referencing real human minds and cultures, integrating various cultural anthropology and post modernist works. (priority 2/10)

comment by Peter Wildeford (Peter_Hurford) · 2019-07-25T22:44:17.124Z · EA(p) · GW(p)

Where Does EA take root?

You may have seen that we analyzed this a bit as part of the EA Survey [EA · GW]. I'm curious what data source you have?

Replies from: ishaan
comment by ishaan · 2019-07-26T07:57:32.301Z · EA(p) · GW(p)

That very EA survey data, combined with Florida et al.'s The Rise of the Megaregion data, which characterizes the academic/intellectual/economic output of each region. It would be a brief post; the main takeaway is that EA geographic concentration seems associated with a region's prominence in academia, whereas things like economic prominence, population size, etc. don't seem to matter much.

comment by Halffull · 2019-07-26T17:25:03.073Z · EA(p) · GW(p)
Systemic Change - What does it mean in concrete terms? How would you accomplish it within an EA framework? How might you begin attempting to quantify your impact? Zooming out from the impact analysis side of things a bit to look at the power structures creating the current conditions, and understanding the "replaceabilty" issues for people who work within the system. (priority 3/10, may move up the priorities list later because I anticipate having more data and relevant experience becoming available soon).

Would be highly interested in this, and a case study showing how to rigorously think about systemic change using systems modeling, root cause analysis, and the like.

answer by Tetraspace Grouping · 2019-07-24T19:24:34.181Z · EA(p) · GW(p)

"How targeted should donation recommendations be" (sorta)

I've noticed that Givewell targets specific programs (e.g. their recommendation), ACE targets whole organisations, and among far future charities you just kinda get promising-sounding cause areas.

I'm interested in what kind of differences between cause areas lead to this, and also whether anything can be done to make more fine-grained evaluations more desirable in practice.

answer by [deleted] · 2021-01-18T16:09:30.124Z · EA(p) · GW(p)

I'm thinking of writing a post about my experience doing an economics PhD with EA motivations. I think this might be interesting to people considering a career in research and especially in social science research, given that this is a career path 80k hours has discussed in the past (e.g. "Economics PhD the only one worth getting?"). I don't have an overarching thesis, so this would be more of a collection of observations -- what it's like, what's good about it, what's bad about it.

answer by Michael Huang · 2021-05-01T16:58:10.952Z · EA(p) · GW(p)

"Genome editing and the replacement, reduction and relief of pain as a cause area"

  • A few individuals lead near-normal lives with the complete absence of pain due to natural genetic variations.
  • Genome editing has the potential to replicate these genetic variations in all animals and people.
  • The problem with eliminating pain is its important role in the detection and avoidance of injury.
  • The challenge is to remove pain while retaining this function. Options include these 3Rs (inspired by the 3Rs of animal testing):
    • Replace pain with a painless sensory system. Complete absence of pain while retaining the detection and avoidance of injury.
    • Reduce the maximum level of pain from 10 to a 1 or 2 on the pain scale. Keep pain but reduce its severity.
    • Relieve pain for those who, out of choice or necessity, have not replaced or reduced pain.
answer by Felix Rudfeldt · 2021-04-30T13:54:41.235Z · EA(p) · GW(p)


I've recently started to write a post about how our education system could be structured to nurture the full spectrum of health that an individual has (physical, emotional, psychological, social, spiritual). I'm thinking about drawing from many different fields of science, such as neuroscience, psychology, sports science, sociology, public health (my own field), and education management.

As you may know, cardiovascular disease and mental health problems are on the rise in the West and are becoming pressing problems for our society, which may compound in the future if nothing is done to alter the course.

Let me know what you think and if this is the right place for such a post. 


comment by vaidehi_agarwalla · 2021-04-30T23:02:54.705Z · EA(p) · GW(p)

Hi Felix, I'd personally be very interested in reading such a post!

Things I think might make this more interesting / that are typically missing from such evaluations:

  • What country are you talking about? Why that country?
  • What kind of positive effects for society would these changes produce? On what timescale?
  • Which solutions create the most value or could be prioritised above others? (If we would need to implement multiple changes, why?)
  • Are any of these solutions cost-effective? I'd be especially curious about the cost of advocacy, not just implementation costs, especially if the solution requires policies to change.
  • Who are the current actors in this space, and what is their track record? If they haven't been very successful, why?
  • Is this relevant to any other regions or countries?

These kind of questions would help people understand the scope of the problem, and be more action-relevant if they are interested in the topic!

Replies from: Felix Rudfeldt
comment by Felix Rudfeldt · 2021-05-01T05:37:48.629Z · EA(p) · GW(p)

Hey, Vaidehi! Thanks for your feedback, I hadn't considered these questions before and they are of great help. Do you have any idea on where to find more information on the cost of advocacy and implementation costs? I feel like this is outside my current knowledge.

Replies from: vaidehi_agarwalla
comment by vaidehi_agarwalla · 2021-05-01T15:37:47.732Z · EA(p) · GW(p)

That's a good question. I am not sure of specific resources on advocacy in particular, but I highly recommend checking out Charity Entrepreneurship's resources on their idea evaluation process and how they evaluate different interventions. Some of their research reports also cover interventions that include advocacy (e.g. they previously looked into tobacco policy).

It might also be interesting to see how ACE (Animal Charity Evaluators) evaluates their top recommendations, because most of them do advocacy work.

Sorry about lack of links, I'm on mobile. But you can just Google the names of the orgs. If you have any trouble finding info or these aren't that useful let me know!

Replies from: Felix Rudfeldt
comment by Felix Rudfeldt · 2021-05-02T08:47:54.842Z · EA(p) · GW(p)

No worries! I'll be sure to check them out and see if they're relevant to the post I'm thinking about writing. I bet I could also just google something like "advocacy work cost" to see what comes up. Thanks for your help man! :)

answer by Matt_Lerner · 2020-02-02T20:26:54.715Z · EA(p) · GW(p)

I'm doing a lit review on the effectiveness of lobbying and on some of the relevant theoretical background that I'm planning on posting when I'm done. I feel like this is potentially very relevant but I'm not sure if people will be interested.

comment by weeatquince · 2020-02-03T10:04:59.828Z · EA(p) · GW(p)

Hi, I'd be interested and have been thinking about similar stuff (measuring the impact of lobbying, etc.) from a UK policy perspective.

If helpful, happy to chat and share thoughts. Feel free to get in touch at: sam [at] appgfuturegenerations.com

comment by Aaron Gertler (aarongertler) · 2020-03-24T06:18:26.284Z · EA(p) · GW(p)

I'll throw my hat in as someone who would be interested to read this!

comment by MichaelStJules · 2020-02-02T20:59:19.570Z · EA(p) · GW(p)

Consider reaching out to Rethink Priorities, Charity Entrepreneurship and Good Policies (a CE-incubated charity). I think they'd be very interested, given that they're doing similar research (RP on ballot initiatives, CE did some on lobbying for animal welfare and has had interest in lobbying for tobacco taxation). Open Philanthropy Project and the managers of the EA Funds would also probably be interested in your findings.

Replies from: MichaelA
comment by MichaelA · 2020-04-06T08:39:22.980Z · EA(p) · GW(p)

I don't follow their work closely, but I believe the Good Food Institute interact with policymakers on the matter of regulation/labelling of alternative proteins, so perhaps they'd also be interested/have interesting thoughts.

answer by Charlotte (CharlotteSiegmann) · 2020-02-02T19:39:09.688Z · EA(p) · GW(p)

I am planning on writing a post summarizing the existing discussion of information cascades in EA, the different forms they take, and the options for doing something about them. Lastly, I discuss why the concept of the information cascade might be disadvantageous. I would be interested in comments on the draft.

answer by evelynciara · 2020-01-28T02:03:55.485Z · EA(p) · GW(p)

I'm writing a post about how our discussions of emerging technologies could apply technological determinism or social construction theory more rigorously. For example, we often talk about AI in a way that suggests that it is likely to advance towards superintelligence (technological determinism), but then assert that society has the power to shape the development of AI (social constructivism), given that superintelligence will emerge (determinism again). I think this reasoning is muddled, but I am not suggesting that we must choose either-or between determinism and constructivism.

answer by Nathan Young · 2019-10-16T10:15:00.948Z · EA(p) · GW(p)

An AMA. I honestly don't think I'm a particularly good person to write one, but I think it would be good to have more on here.

I think if you're in an EA job I'd love to see an AMA from you.

answer by manix0011 · 2020-05-14T12:34:24.128Z · EA(p) · GW(p)

I just want to write about "Do plants really feel pain?" I think it might be a great topic to share here.

answer by alexrjl · 2020-02-07T11:26:52.883Z · EA(p) · GW(p)

Importance, Tractability and Neglectedness should not have equal weight.

TL;DR: Neglectedness is a useful tiebreaker and gives you information about tractability, but the relatively common matrix approach of scoring possible ideas on ITN and then ranking based on the sum of the scores overweights it.

comment by MichaelStJules · 2020-03-27T00:59:43.705Z · EA(p) · GW(p)

If you're using the formal mathematical definitions of the terms from this section of the 80,000 Hours article, then their product (before taking logs) has an interpretation in natural units, as good done / extra person or $, so if you reweight, this interpretation for the product will be lost. Are you interpreting the ITN terms differently?

Replies from: alexrjl
comment by alexrjl · 2020-03-27T06:29:26.894Z · EA(p) · GW(p)

Yes, or at least I think the way they are often interpreted is different. I actually have no issue with 80k's formal definition, but qualitative use in practice (not by 80k) has often put both of 80k's last two points in the tractability metric; then there's this other nebulous factor called 'neglectedness' which ends up being counted again. The key metric is how much good can be done by one marginal extra person or dollar, and I've seen a few cases of people estimating that (which will clearly be affected by diminishing marginal returns), then adding a neglectedness score on as well, which seems wrong.

I haven't written this up yet as I don't think it's hugely important: it's typically a feature of naïve/rough work, and there's definitely a chance that some of this kind of work is actually using a framework modelled on 80k's but just not exposing it well. Most high-quality research is just done via an actual cost-effectiveness analysis rather than the ITN framework, so there's obviously no issue there.
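To make the double-counting worry concrete, here is a minimal numerical sketch (all scores invented for illustration, not drawn from any real cause evaluation):

```python
# Minimal sketch of the double-counting worry (all scores invented).
# In the 80k framing, summing log-scores is equivalent to multiplying the
# raw ITN factors, and that product has a natural-units reading: good done
# per extra person or dollar.

causes = {
    "A": {"importance": 8, "tractability": 4, "neglectedness": 2},
    "B": {"importance": 2, "tractability": 4, "neglectedness": 8},
}

def itn_product(c):
    return c["importance"] * c["tractability"] * c["neglectedness"]

def double_counted(c):
    # If the tractability score was already an estimate of *marginal*
    # cost-effectiveness (diminishing returns baked in), adding a separate
    # neglectedness score counts crowdedness twice.
    return itn_product(c) * c["neglectedness"]

print({name: itn_product(c) for name, c in causes.items()})     # A and B tie at 64
print({name: double_counted(c) for name, c in causes.items()})  # B now wins 512 to 128
```

With the plain product, the two hypothetical causes are equally promising; once neglectedness sneaks in a second time, the ranking flips decisively toward the more neglected cause for no substantive reason.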

Replies from: MichaelStJules
comment by MichaelStJules · 2020-03-27T22:38:19.741Z · EA(p) · GW(p)

Ok, makes sense!

In case you haven't seen it, this [EA · GW] might be helpful to see what other critiques are out there already.



comment by JP Addison (jpaddison) · 2020-02-11T23:27:27.112Z · EA(p) · GW(p)

This post should win an award for how long it's had active comments coming in

comment by asemakula · 2019-12-02T05:45:29.220Z · EA(p) · GW(p)

This is a really nice approach, as I'm stuck and need some help on an article/project I'm working on. Here it is:

I have experienced, and seen in others, the frustration grassroots change-makers go through before they get disoriented and let the ills in society and the environment go on unabated.

I started thinking of a better model to push funds to change-makers so they can concentrate more on grassroots impact and less on pulling in funds.

The first approach that came to mind was reducing the 'cost of sacrifice' to zero, so that millions of altruists become philanthropists through purchase decisions where a small x% goes to a grassroots cause they care about - and the latest consumer research supports this model ("91% would switch brands for one championing a cause," Deloitte Global Millennial Survey 2019).

But before I could test it, as I thought and read further, I discovered that society and the planet actually invest in value creation, but a bug in the markets makes sure they get a bounced check when the wealth is shared. And we can use technology, AI and other tools to rally consumers to reclaim the planet's and society's wealth shares. Then the socially interested AI can fund change at a thrilling scale.

The article I'm drafting argues:

Wealth cannot be created without investment from the planet, society, government and businesses, yet the planet and society hardly get to share the wealth. The PlaSo Diversion bypasses social middlemen (philanthropy & development aid) to ensure 'planet and society' get their just share from the wealth-creation process right at the counter.


GOVERNMENTS: provide physical, economic, political, and legal infrastructure.

BUSINESSES: spot, innovate, and invest time/money into a need for a product/service.

PLANET: every consumer product/service has to use some component of earth.

SOCIETY: from the goldmine of knowledge generated over 5,000 years of global cooperation and cultural civilizations, to markets, etc.; without the global society, businesses would have to start from a prohibitively costly vacuum.


At this point a bug in the markets makes buyers/sellers believe that the business is the only creator of the product/service. It's so established that even the staunchest inequality activists continue to consume billionaire-owned products/services even as they shower slurs on them.


Businesses and governments receive their shares and keep the planet's and society's shares. And when they need it less, they create philanthropic foundations and aid agencies to distribute the remains to the planet and society.

That we give businesses wealth incentives to innovate, governments wealth incentive to govern, and deny society wealth incentives to cooperate and planet wealth incentives to sustain us is the mother of all injustices in the world. The current monetization system delivers full ownership and control of wealth to self-interested businesses and governments.

To solve this, we can create decentralised autonomous AI agents running on blockchain, whose self-interest is social-interest to divert planet/society’s shares at the point of monetization (PlaSo-Diversion). The AI agents distribute the wealth to the most urgent, neglected and solvable social/environmental problems. If we can make this the norm, we won’t need philanthropy and aid.

"Philanthropy is commendable, but it should not allow the philanthropists to overlook the very injustice which makes philanthropy necessary." Martin Luther