Posts

Open Thread: August 2021 2021-08-02T10:04:16.302Z
EA Organization Updates: July 2021 2021-07-31T12:20:09.509Z
EA Forum Prize: Winners for April 2021 2021-07-29T01:12:38.937Z
Writing about my job: Content Specialist, CEA 2021-07-19T01:56:14.645Z
You should write about your job 2021-07-19T01:26:59.345Z
Lant Pritchett on the futility of "smart buys" in developing-world education 2021-07-18T23:00:26.556Z
The Effective Altruism Handbook 2021-07-16T21:31:26.921Z
Effective Altruism Polls: A resource that exists 2021-07-10T06:15:12.561Z
The most successful EA podcast of all time: Sam Harris and Will MacAskill (2020) 2021-07-03T21:47:28.540Z
Open Thread: July 2021 2021-07-01T09:16:20.679Z
New Roles in Global Health and Wellbeing (Open Philanthropy) 2021-06-29T19:48:59.625Z
EA Organization Updates: June 2021 2021-06-26T00:37:38.598Z
What are some examples of successful social change? 2021-06-22T22:51:19.955Z
Forum update: New features (June 2021) 2021-06-17T05:01:31.723Z
New? Start here! (Useful links) 2021-06-14T08:07:29.970Z
What are some high-impact paths for a young person in the developing world? 2021-06-14T05:45:15.673Z
What is an example of recent, tangible progress in AI safety research? 2021-06-14T05:29:22.031Z
Open Thread: June 2021 2021-06-03T00:43:21.010Z
Editing Festival: Results and Prizes 2021-05-29T23:16:26.703Z
EA Organization Updates: May 2021 2021-05-26T22:53:12.337Z
[Podcast] Having a successful career with anxiety, depression, and imposter syndrome 2021-05-24T18:39:34.132Z
AMA: Working at the Centre for Effective Altruism 2021-05-22T15:12:58.381Z
EA Forum Prize: Winners for March 2021 2021-05-22T04:34:38.662Z
AMA: Tim Ferriss, Michael Pollan, and Dr. Matthew W. Johnson on psychedelics research and philanthropy 2021-05-13T17:48:50.819Z
Open Thread: May 2021 2021-05-04T08:26:37.503Z
EA Organization Updates: April 2021 2021-04-28T10:52:57.297Z
EA Forum Prize: Winners for February 2021 2021-04-27T09:32:08.732Z
(Closed) Seeking (paid) volunteers to test introductory EA content 2021-04-22T12:10:23.833Z
What material should we cross-post for the Forum's archives? 2021-04-15T10:06:34.186Z
Your World, Better: Global progress for middle schoolers 2021-04-11T21:57:58.572Z
Reality has a surprising amount of detail 2021-04-11T21:41:00.387Z
The EA Forum Editing Festival has begun! 2021-04-07T10:28:10.155Z
Open Thread: April 2021 2021-04-06T19:42:42.886Z
EA Forum Prize: Winners for January 2021 2021-04-02T02:58:43.260Z
EA Organization Updates: March 2021 2021-04-01T02:42:52.558Z
Opportunity for EA orgs: $5k/year in ETH (tech setup required) 2021-03-15T04:11:41.553Z
Progress Open Thread: March 2021 2021-03-03T08:24:27.482Z
Open and Welcome Thread: March 2021 2021-03-03T08:23:53.275Z
Our plans for hosting an EA wiki on the Forum 2021-03-02T12:45:15.469Z
EA Organization Updates: January 2021 2021-02-28T21:30:04.557Z
Running an AMA on the EA Forum 2021-02-18T01:44:31.077Z
EA Forum Prize: Winners for December 2020 2021-02-16T08:46:30.444Z
Allocating Global Aid to Maximize Utility 2021-02-15T06:42:22.410Z
Many (many!) charities are too small to measure their own impact 2021-02-15T06:39:10.320Z
Progress Open Thread: February 2021 2021-02-02T11:46:47.104Z
Open and Welcome Thread: February 2021 2021-02-02T11:46:08.726Z
80,000 Hours: Where's the best place to volunteer? 2021-01-25T06:57:38.678Z
No, it's not the incentives — it's you 2021-01-25T06:50:01.504Z
EA Organization Updates: December 2020 2021-01-22T11:09:46.097Z
Global Priorities Institute: Research Agenda 2021-01-20T20:09:48.199Z

Comments

Comment by Aaron Gertler (aarongertler) on EA Forum feature suggestion thread · 2021-08-05T00:17:17.136Z · EA · GW

Oy vey, thanks for the notice. Definitely a bug, and one LessWrong is now looking into.

Comment by Aaron Gertler (aarongertler) on Mushroom Thoughts on Existential Risk. No Magic. · 2021-08-05T00:12:21.720Z · EA · GW

Improvements in yield may not ensure food supply longterm.

Modern agriculture has a large carbon footprint and chemical fertilisers are destroying the health of our soils (at the rate of a fertile area of 30 soccer fields/minute). Despite a 700-fold increase in the use of pesticides during the second half of the 20th century, yield remains constant but 20-40% of domesticated plants are still lost to pathogens.

[...]

From a perspective of a longtermist, this case study is an emblematic story of how we often choose to increase yield in the short-term, in exchange for long-term benefits.

Do you have one or two sources you'd recommend that make a strong case that crop yields will plausibly decline on any particular timescale? Particularly if those sources also incorporate factors like potential gains from genetic engineering, potential inefficiencies from increased meat production, etc.?

I see many references to soil health as an ongoing crisis, but I haven't seen any forecasts about actual outcomes / tipping points that stick in my memory. Our World in Data finds gradually increasing yields since mid-century; the trends seem to have flattened a bit recently, but it's hard to tell how much of that is the end of easy gains from the Green Revolution vs. active negative trends beginning to take effect.

The EA longtermist community is tiny, and it's easy to imagine even very strong evidence of long-term food supply risk being missed. But even ALLFED, which works deeply on related topics, seems to focus almost entirely on the risk of crop loss from major disasters rather than from ongoing climate change. I'd be interested to know about ALLFED work I've missed, or other good resources that people at places like Open Phil should think about.

Comment by Aaron Gertler (aarongertler) on (Video) How to be a less crappy person · 2021-08-04T23:45:39.351Z · EA · GW

My experience talking to people within animal advocacy is that PETA tends to be seen as more embarrassing than effective — a mishmash of campaigns that end up making the animal movement seem gimmicky, without much in the way of clear impact.

Can a person going with a nice-guy approach really have the same impact as someone being controversial?

Yes, easily!

There are lots of human feelings you can successfully reach aside from "anger" or "(eats popcorn)". Controversial content sometimes sells, but so does other content! 

  • An Inconvenient Truth is a movie about graphs, and it was one of the most successful documentaries of all time. 
  • People like Hans Rosling and Bill Gates have reached enormous audiences with positive messages about the opportunities we have to improve the world. 
  • The most unexpectedly successful EA content ever was a conversation between Sam Harris and Will MacAskill that was deeply sincere and focused on what it means to live a better life (not on scolding people who hadn't taken steps to donate yet).

If you want examples of people who've done a lot for EA-adjacent areas despite not starting with fame or a big platform, and despite being "agreeable" in their styles, you get Tim Urban, Scott Alexander, and Kelsey Piper, among others. Scott's essays are occasionally controversial, but his influence among well-known scientists and entrepreneurs seems more closely linked to his more sober, data-driven work. (His "Fear and Loathing at EA Global" is a great example of how to write about EA as a big, revolutionary concept without having to place it in opposition to anything.)

Comment by Aaron Gertler (aarongertler) on Collective intelligence as infrastructure for reducing broad existential risks · 2021-08-04T22:06:09.675Z · EA · GW

It's not clear to me how usable these sorts of findings of collective intelligence are. Are there many cases of them being incorporated by corporations or similar, and experiencing large gains? Have people in the field of collective intelligence themselves used these ideas to have much more intelligence?

This was my top question after reading the post, as well.

Comment by Aaron Gertler (aarongertler) on Collective intelligence as infrastructure for reducing broad existential risks · 2021-08-04T21:57:39.022Z · EA · GW

The section I found most interesting was on group performance. I notice that the problems mentioned were mostly pretty small:

When asking small groups to perform a wide range of tasks, including brainstorming, sudoku, and unscrambling words, the performance on a subset of the tasks gives a good out-of-sample prediction.

Do you know of any studies where groups were asked to tackle more complex problems or tasks? This is obviously much harder to study, but also seems more relevant to a range of real-world use cases.

*****

Many of the most successful collectives in recent history began as startups (small groups of people running an enterprise together). Discussions of these organizations often highlight the intelligence of individual members, and the literature on startup hiring often emphasizes looking for the smartest/most impressive people. Social perceptiveness gets less attention, but is also harder to see; it's easier to say "Mark Zuckerberg is smart" than to study a bunch of early Facebook meetings. 

On the one hand, I wonder whether this leads to social perceptiveness being underrated. On the other hand, I wonder whether the greater difficulty of studying work on harder/larger-scale problems weighs against social perceptiveness — e.g. if perceptiveness matters more for something like "allocating the group's work between small, simple tasks" than for "determining how to approach problems too difficult for any one member to solve alone".

(I haven't read the cited studies yet, so maybe these questions would have obvious answers if I did.)

Comment by Aaron Gertler (aarongertler) on Utilitarianism Symbol Design Competition · 2021-08-04T21:05:52.760Z · EA · GW

I see that Wikipedia already has a utilitarian flag created by a philosopher.

Have you spoken to this person about replacing their flag? It seems like yours would be just as unofficial as theirs, so if they care at all about theirs, we may just end up with dueling flags on Wikipedia. Or are you collaborating with people affiliated with other utilitarian projects, like the team at utilitarianism.net?

Comment by Aaron Gertler (aarongertler) on Utilitarianism Symbol Design Competition · 2021-08-04T21:02:12.217Z · EA · GW

I'm confused about why this has (probably) multiple downvotes, and am interested to hear from downvoters.

The same thing happened on the longtermist flag thread, but in that case there was an original design people didn't like. This is just a submission form.

I like the idea of people running contests on the Forum and didn't see a problem with this post.

Did anyone downvote for any of the following reasons?

  1. Utilitarianism is a different thing from EA and I don't like conflating them
  2. Design/art posts don't seem like good Forum content
  3. Contests like this don't seem like good Forum content
  4. Making official symbols for philosophies seems too tribal / identity-driven
  5. I'm confused about who Dan is and why he's taken charge of the official symbol of utilitarianism

Or was it something else?

Comment by Aaron Gertler (aarongertler) on EA Forum feature suggestion thread · 2021-08-04T20:43:30.964Z · EA · GW

It sounds like the "placeholder post" you're seeing is a draft that should be invisible to you, which indicates a different bug.

Is the title you're seeing "Sequence Placeholder Draft", or something else?

Comment by Aaron Gertler (aarongertler) on Please Welcome and Congratulate Our Incoming UC Berkeley EA Chapter President · 2021-08-03T07:48:22.485Z · EA · GW

Please give my best to Dylan!

(I've moved this post to "personal blog" because it seems a bit too local to me; I do the same thing with e.g. announcements for small in-person group events with no virtual component.)

Comment by Aaron Gertler (aarongertler) on Effective altruism quotes · 2021-08-02T18:17:46.250Z · EA · GW

I love these nature shows. I'll watch any kind of nature show. And it's amazing how you can always relate to whatever they're talking about. You know, like, you're watching the African dung beetle and you're going, "Boy, his life is a lot like mine." And you always root for whichever animal is the star of the show that week. Like, if it's the antelope, and there's a lion chasing the antelope, you'll go, "Run, antelope, run! Use your speed. Get away." Then next week, it's the lion and then you're going, "Get the antelope. Eat him. Bite his ass! Trap him. Don't let him use his speed."

Jerry Seinfeld

Comment by Aaron Gertler (aarongertler) on Open Thread: July 2021 · 2021-08-02T10:03:39.870Z · EA · GW

A belated welcome to the Forum! 

You might be interested in Open Philanthropy's grants to organizations working on science (a few of the "Thematic Areas" here seem relevant).

This campaign has also won some support from donors in the community. That page links to some of Let's Fund's other work on improving science (not sure how much will be new to you).

Comment by Aaron Gertler (aarongertler) on The Effective Altruism Handbook · 2021-08-02T09:29:17.994Z · EA · GW

At some point, I'd love to have an ebook version of certain content from the Handbook. Right now, it's very much "under construction" (I'm still getting feedback on the content from many people), so that's not an immediate priority. But perhaps creating a PDF with a few of the most important essays would make sense sooner (as we did with the very first edition). Thanks for giving me something to consider!

Comment by Aaron Gertler (aarongertler) on Workshops to Improve Institutional Decision-Making in Government · 2021-08-02T09:17:48.774Z · EA · GW

If you find that some of the people you work with actually apply these lessons in their own work, this may be one of the most exciting active projects in the community! I hope you'll keep sharing updates on the Forum.

Comment by Aaron Gertler (aarongertler) on How to reach out to orgs en masse? · 2021-07-29T01:57:57.013Z · EA · GW

I agree with Kevin's comment about creating a post where you explain your experience, and what you want to help with, in more detail. Lots of charity staffers read this website, and many other readers might know of charities they'd want to share your post with.

People who've done this in the past include Simon Panrucker and JSWinchell.

Comment by Aaron Gertler (aarongertler) on vaidehi_agarwalla's Shortform · 2021-07-29T01:47:29.703Z · EA · GW

I think Josh was claiming that 75% was "too low", as in the total share of unpaid hours is more like 90% or something.

When I applied to a bunch of jobs, I was paid for ~30 of the ~80 hours I spent (not counting a long CEA work trial — if you include that, it's more like 80 out of 130 hours). If you average Josh and me, maybe you get back to an average of 75%?

*****

This isn't part of your calculation, but I wonder what fraction of unique applicants to EA jobs have any connection to the EA community beyond applying for a single job.

In my experience trying to hire for one role with ~200 applicants, ~1/3 of applicants neither showed any connection to EA in their resumes nor said anything in their applications about what drew them to EA. This doesn't mean there wasn't some connection, but a lot of people just seemed to be looking for any job they could find. (The role was more generic than some and required no prior EA experience, so it may have drawn a higher fraction of outside applicants.)

Someone having no other connection to the EA community doesn't mean we should ignore the value of their time, and the people who apply to the most jobs are likely to have the strongest connections, so this factor may not be too important. Still, it could bear consideration in a more in-depth analysis.

Comment by Aaron Gertler (aarongertler) on Database dumps of the EA Forum · 2021-07-29T01:32:04.590Z · EA · GW

We have a GraphQL interface that people can use to scrape Forum data.
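
For example, here's a minimal sketch in Python of what a query might look like. The endpoint path and the query shape are assumptions on my part (the field names below are hypothetical); check the real schema via the in-browser GraphiQL explorer, if available, before relying on them.

    import requests

    # Hypothetical query: fetch the five most recent posts.
    # Field names are assumptions; verify them against the live schema.
    QUERY = """
    {
      posts(input: {terms: {limit: 5}}) {
        results {
          title
          postedAt
        }
      }
    }
    """

    # A GraphQL request is just a single POST with a JSON-encoded query.
    response = requests.post(
        "https://forum.effectivealtruism.org/graphql",
        json={"query": QUERY},
        timeout=30,
    )
    response.raise_for_status()
    for post in response.json()["data"]["posts"]["results"]:
        print(post["postedAt"], post["title"])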

We block automated web crawlers from the All Posts page so that they don't click "Load More" a thousand times and slow down the site. But you can run your own crawler on that page if you're willing to have it click "Load More" a lot.

Let me know if you have more questions, and I'll make sure they get answered (that is, I'll ask the tech team).

Comment by Aaron Gertler (aarongertler) on Research into people's willingness to change cause *areas*? · 2021-07-28T22:56:55.767Z · EA · GW

I recommend making this comment into a full post, so that more people will see it and have a chance to share feedback!

Comment by Aaron Gertler (aarongertler) on Narration: The case against “EA cause areas” · 2021-07-28T22:50:45.243Z · EA · GW

I'll be the third person to say this: the narrations are nice, but they're starting to clutter the front page.

I'd recommend having one big post where you list all the narrations you've done, with links to the appropriate posts or comments. That post can have the "audio" tag so people find it when they look for audio, and it's a handy way for you to link to the full set of recordings at once if you want people to know about the resource.

Comment by Aaron Gertler (aarongertler) on EA Forum feature suggestion thread · 2021-07-28T21:26:54.917Z · EA · GW

Yes, a tag is removed when its score drops to zero. As long as multiple people haven't all voted for the job-listings tag on a post, the author's downvote is enough to remove it. And in a pinch, any admin's strong vote will suffice to drop a tag below zero even if it has 2-3 votes.

Comment by Aaron Gertler (aarongertler) on Propose and vote on potential EA Wiki entries · 2021-07-28T21:21:15.708Z · EA · GW

The direct democracy tag is meant for investments in creating specific kinds of change through the democratic process. But people are using it for other things now anyway; it's probably good to have a "ballot initiatives" tag and rename this tag to "democracy" or something else. Good catch!

Comment by Aaron Gertler (aarongertler) on Is impersonal benevolence a virtue? · 2021-07-23T07:59:11.243Z · EA · GW

Thanks for the discussion! I realize that I was mostly explaining my own instincts rather than engaging with Hursthouse, but that's because I find her claims difficult to understand in the context of how to actually live one's life.

Comment by Aaron Gertler (aarongertler) on Open Thread: July 2021 · 2021-07-23T05:58:22.703Z · EA · GW

Hello, Willa! I've activated the Markdown Editor for you — not sure why it wasn't working. (I'm the lead moderator/admin here.)

There's a lot of stuff on the Forum. Perhaps the best way to start out is to browse the top posts of all time and read whatever looks interesting. I don't love all the posts on that list, but they're a reasonable sample of the topics that get a lot of interest here.

The natural book recommendation would be Toby Ord's The Precipice if you haven't read it yet; I've liked the bits I've read, and reviews from outside of EA have been solid.

Comment by Aaron Gertler (aarongertler) on Open Thread: July 2021 · 2021-07-23T05:52:19.951Z · EA · GW

Welcome! And well done for donating 50% — I only know a few people with ordinary jobs who've done this, and they're all among my favorite individuals. You're doing incredible good.

I feel that the Effective Altruism movement overvalues animal well-being vs human well-being.

The EA movement doesn't really have its own values, aside from a few baseline principles — it's a collection of individuals who agree on the principles but differ on many other things. If you were to ask something like "how valuable is saving a chicken from a year of constant suffering?", people in the movement would give you a vast range of answers.

If you think that a particular estimate you've seen from some EA-aligned organization is wrong, the EA Forum is a great place to make that argument!

 I also feel it ignores that improving human welfare is an avenue to improving animal welfare (people who are struggling don't have room to think about whether their chickens are free range).

It seems unlikely on its face that spending money on human welfare will do much for animals, relative to the incredible efficiency of e.g. cage-free campaigns. I don't think animal advocates ignore these side effects (I think almost everyone would agree that there's a link between economic prosperity and moral circle expansion). But I do think that they'd judge the side effects as very minor in the grand scheme of things. 

If you think the side effects aren't minor... sounds like another potential Forum post!

Note that there's been some conversation about the direct opposite idea — that wealthier people eat more animal products, which means that improving human welfare might lead to additional animal suffering (meat consumption has skyrocketed around the world in recent decades).

I haven't seen people actually use this as a reason not to support human-focused charities — again, this "side effect" is very small — but I think it illustrates how difficult and complicated these questions can be.

Comment by Aaron Gertler (aarongertler) on Open Thread: July 2021 · 2021-07-23T05:18:59.253Z · EA · GW

Your instinct that there isn't a "go-to" place for all data is correct. Not sure about GDPR barriers, but it seems likely that a lot of things became unavailable (or were never available) because the people running those projects just got caught up in other things.

Fortunately, we have mechanisms for funding useful projects that no one currently maintains. If you're really interested in sorting through everything and making it available, that might be a good candidate for the EA Infrastructure Fund. And if you're busy with other things, you can even propose that they pay someone else to do this!

Comment by Aaron Gertler (aarongertler) on Is impersonal benevolence a virtue? · 2021-07-23T05:15:16.054Z · EA · GW

The best answer here, the one that actually lets us try to live our lives by reasonable ethical principles, seems to me like "morality isn't conflict-free and humans aren't perfectly consistent". The whole point of EA is that standard "ethical" systems often fail to provide useful advice on how to live a good life. No one can be perfectly virtuous or benevolent; all we can do is act well given our circumstances and the options in front of us.

How does this interface with the question of objective morality? You can either say "morality is objective and people are bound to fall short of it", or "morality is subjective and I'm going to do what seems best to me". Either way, as a subjectivist who judges other people through the lens of my own moral opinions, I'm going to judge you by how your actions affect others, rather than by whether they all hang together in a rigorous system.

Comment by Aaron Gertler (aarongertler) on Lant Pritchett on the futility of "smart buys" in developing-world education · 2021-07-23T00:44:42.233Z · EA · GW

Do you know of any studies showing that people in low-income countries regard their own education as a major source of intrinsic value, apart from its effects on other life outcomes?

I ask because I think most people in the developed world value education primarily because it will help them "succeed in life" (or "get a good job", "move up in the world", etc.). If you gave people in the U.S. a choice between e.g. the experience of being in school for 3 extra years and an extra $5,000/year in salary, I'd expect almost everyone to choose the higher salary. And I would expect sentiments to be similar in the developing world, if not stronger.

I don't have access to data on this, and am generalizing from how I've seen people behave in my own life and in various nonfiction books and articles I've read. I'd be really curious to see data, and you seem like you might be familiar with relevant literature.

Education is one of many things you can do with your time; I don't see why we'd necessarily privilege it over "spending time with your family", "playing with your friends", or other ways to spend time — apart from its effects on economic welfare, health, and so on.

See GiveWell:

There are a limited number of experimental studies providing direct evidence that education interventions improve the outcomes that we consider most important, such as earnings, health, and rates of marriage and fertility among teenage girls. 

They certainly seem to differ from UNDP if UNDP considers education an intrinsic good, absent other effects on welfare. But I'm willing to bite the bullet and ask whether UNDP is actually right. Is education for its own sake important enough to justify pursuing interventions that provide more education, but less health or money, than other interventions?

Comment by Aaron Gertler (aarongertler) on Lant Pritchett on the futility of "smart buys" in developing-world education · 2021-07-23T00:35:28.204Z · EA · GW

Fair comment; I've edited the title and the introduction.

Comment by Aaron Gertler (aarongertler) on How do you communicate that you like to optimise processes without people assuming you like tricks / hacks / shortcuts? · 2021-07-22T23:45:19.983Z · EA · GW

Yes, I recommend both of those things for... well, almost all communication, and this isn't an exception.

Comment by Aaron Gertler (aarongertler) on Writing about my job: pharmaceutical chemist · 2021-07-22T08:34:42.650Z · EA · GW

Thanks for posting this! 

I don't think I saw this mentioned, but do you think you might end up using these skills in a role with a more explicit connection to EA, if an opportunity comes along? I'm no chemist, but I can imagine this kind of expertise being useful for vaccine production (maybe?).

Not that I think this is essential — it sounds like you're living your dream, and that's an extremely good reason to have a job, EA considerations aside. Just curious if that's something you've thought about.

Comment by Aaron Gertler (aarongertler) on Propose and vote on potential EA Wiki entries · 2021-07-22T08:29:23.637Z · EA · GW

Thanks, have created this. (The "Donation writeup" tag is singular, so I felt like this one should also be, but LMK if you think it should be plural.)

Comment by Aaron Gertler (aarongertler) on Is impersonal benevolence a virtue? · 2021-07-22T06:52:49.452Z · EA · GW

Which of the four items on Hursthouse's list do you think are impossible to reject without embracing relativism? And why do you think those ideas are necessarily linked together? 

I may be confused, but I don't see why "ethical naturalism" has to be tied to virtue ethics. It seems wholly consistent to me for people to believe in objective morality, and to believe that this morality is impartial benevolence. It also seems reasonable to believe that if everyone really tried to practice impartial benevolence, we'd end up with a healthy and thriving society. Imagine a small village where all children are communally cared for by adults who love them equally — must this be impossible or unnatural?

There's a chapter of Strangers Drowning which tells the story of Julia Wise and Jeff Kaufman, who have donated hundreds of thousands of dollars to highly effective charities while raising children who seem healthy and happy (they just had their third!). This is very unusual even in rich countries, but is a perfectly reasonable strategy for people who can afford it. (I work with another family doing the same thing, and their children also seem happy and healthy.)

I think this example shows that you can strive to be far more "impersonally benevolent" than most people while still providing for your family, with the result that hundreds of other families live better, happier lives.

Comment by Aaron Gertler (aarongertler) on Lant Pritchett on the futility of "smart buys" in developing-world education · 2021-07-22T06:48:48.462Z · EA · GW

Were these increases typically driven by public demand, or driven by top-down government policy? If the latter, Pritchett's point could still stand.

Comment by Aaron Gertler (aarongertler) on You should write about your job · 2021-07-22T01:33:50.833Z · EA · GW

I'd be extremely interested in this, and it's the highest-karma comment on this thread at the moment!

Comment by Aaron Gertler (aarongertler) on Is impersonal benevolence a virtue? · 2021-07-22T01:28:18.085Z · EA · GW

(Not a philosopher, this is deliberately quick and snappy)

One response is to just deny the naturalist account (why is it required that every "good wolf" or "good person" try to do all four of those things?).

Another is to deny the claim that impersonal benevolence has to contradict being "social animals" or "nurturing young". The average impersonally benevolent person who supports AMF is helping to nurture hundreds of young people, and probably making life "socially better" for entire villages (it seems good for village life when fewer children are sick or dying). If a wolf sacrifices itself for the pack, more wolves survive. Even if Viktor Zhdanov had no children (I don't know whether this is true), he helped the human species thrive. 

Another is to note that effectively zero people actually practice "impersonal benevolence" in the way Hursthouse describes it. If no one actually follows an ethical system, critiques of that system mean little. But if we look at people who try to be unusually impersonally benevolent (however imperfectly), I expect they'll tend to be good parents and good neighbors. In my experience, the habit and practice of being kind to everyone tends to inculcate kindness toward those close to you as well.

Comment by Aaron Gertler (aarongertler) on Building my Scout Mindset: #1 · 2021-07-22T01:18:22.634Z · EA · GW

Keep in mind that some people choose not to apply our default filter (which removes "personal blog" posts from the front page). The only way to completely hide a post from the front page for everyone is to make it a Shortform post or leave it "unlisted" (only available via link).

However, anyone who chooses not to use the filter is by definition interested in personal-blog-type posts, so I think you're fine!

Comment by Aaron Gertler (aarongertler) on The case against “EA cause areas” · 2021-07-22T00:51:39.248Z · EA · GW

As technicalities noted, it's easy to see the merits of these arguments in general, but harder to see who should actually do things, and what they should do.

To summarize the below:

  • EA orgs already look at a wide range of causes, and the org with most of the money looks at perhaps the widest range of causes
  • Our community is small and well-connected; new causes can get attention and support pretty easily if someone presents a good argument, and there's a strong historical precedent for this
  • People should be welcoming and curious toward people from many different backgrounds, and attempts to do more impactful work should be celebrated, whatever kind of work it is
    • If this isn't the case now, people should be better about this

If you have suggestions for what specific orgs or funders should do, I'm interested in hearing them!

*****

To quote myself:

My comments often look like: "When you say that 'EA should do X', which people and organizations in EA are you referring to?"

Open Philanthropy does more funding and research than anyone, and they work in a broad range of areas. Maybe the concrete argument here is that they should develop more of their shallow investigations into medium-depth investigations?

Rethink Priorities probably does the second-most research among EA orgs, and they also look at a lot of different topics.

Founders Pledge is probably top-five among orgs, and... again, lots of variety.

Past those three organizations, most research orgs in EA have pretty specific areas of focus. Animal Charity Evaluators looks at animal charities. GiveWell looks at global health and development interventions with strong RCT support. If you point ACE at a promising new animal charity to fund, or GiveWell at a new paper showing a cool approach to improving health in the developing world, they'd probably be interested! But they're not likely to move into causes outside their focus areas, which seems reasonable.

After all of this, which organizations are left that actually have "too narrow" a focus? 80,000 Hours? The Future of Humanity Institute?

A possible argument here is that some new org should exist to look for totally new causes; on the other hand, Open Philanthropy already does a lot of this, and if they were willing to fund other people to do more of it, I assume they'd rather hire those people — and they have, in fact, been rapidly expanding their research team.

*****

On your example of cancer: Open Philanthropy gave a $6.5 million grant to cancer research in 2017, lists cancer as one of the areas they support on their "Human Health and Wellbeing" page, and notes it as a plausible focus area in a 2014 report. I'm guessing they've looked at other cancer research projects and found them somewhat less promising than their funding bar. 

Aside from Open Phil, I don't know which people or entities in EA are well-positioned to focus on cancer. It seems like someone would have to encourage existing bio-interested people to focus on cancer instead of biosecurity or neglected tropical diseases, which doesn't seem obviously good.

In the case of a cancer researcher looking for funding from an EA organization, there just aren't many people who have the necessary qualifications to judge their work, because EA is a tiny movement with a lot of young people and few experienced biologists. 

The best way for someone who isn't a very wealthy donor to change this would probably be to write a compelling case for cancer research on the Forum; lots of people read this website, including people with money to spend. Same goes for other causes someone thinks are neglected. 

This path has helped organizations like ALLFED and the Happier Lives Institute get more attention for their novel research agendas, and posts with the "less-discussed causes" tag do pretty well here.

As far as I can tell, we're bottlenecked on convincing arguments that other areas and interventions are worth funding, rather than willingness to consider or fund new areas and interventions for which convincing arguments exist.

*****

Fortunately, there's good historical precedent here: EA is roughly 12 years old, and has a track record of integrating new ideas at a rapid pace. Here's my rough timeline (I'd welcome corrections on this):

  • 2007: GiveWell is founded.
  • 2009: Giving What We Can is founded, launching the "EA movement" (though the term "effective altruism" didn't exist yet). The initial focus was overwhelmingly on global development.
  • 2011: The Open Philanthropy Project is founded (as GiveWell Labs). Initial shallow investigations include climate change, in-country migration, and asteroid detection (conducted between 2011 and 2013).
  • 2012: Animal Charity Evaluators is founded.
  • 2013: The Singularity Institute for Artificial Intelligence becomes MIRI.
  • 2014: The first EA Survey is run. The most popular orgs people mention as donation targets are (in order) AMF, SCI, GiveDirectly, MIRI, GiveWell, CFAR, Deworm the World, Vegan Outreach, the Humane League, and 80,000 Hours.
    • To be fair, the numbers look pretty similar for the 2019 survey, though they are dwarfed by donations from Open Phil and other large funders.

Depending on where you count the "starting point", it took between 5 and 7 years to get from "effective giving should exist" to something resembling our present distribution of causes.

In the seven years since, we've seen:

  • The launch of multiple climate-focused charity recommenders (I'd argue that the Clean Air Task Force is now as well-established an "EA charity" as most of the charities GiveWell recommends)
  • The rise of wild animal suffering and AI governance/policy as areas of concern (adding a ton of depth and variety to existing cause areas — it hasn't been that long since, when those topics came up in EA, "AI" meant MIRI's technical research and "animal advocacy" meant lobbying against factory farming)
  • The founding of the Good Food Institute (2016) and alternative protein becoming "a thing"
  • The founding of Charity Entrepreneurship and resultant founding of orgs focused on tobacco taxation, lead abatement, fish welfare, family planning, and other "unusual" causes
  • Open Philanthropy going from a few million dollars in annual grants to somewhere in the neighborhood of $200 million. Alongside "standard cause area" grants, 2021 grants include $7 million for the Centre for Pesticide Suicide Prevention, $1.5 million for Fair and Just Prosecution, and $0.6 million for Abundant Housing Massachusetts (over two years — but given that the org has a staff of one person right now, I imagine that's a good chunk of their total funding)
  • Three of the ten highest-karma Forum posts of all time (1, 2, 3) discuss cause areas with little existing financial support within EA

I'd hope that all this would also generate a better social environment for people to talk about different types of work — if not, individuals need better habits.

*****

Everyone reasonably familiar with EA knows that AI safety, pandemic preparedness, animal welfare and global poverty are considered EA cause areas, whereas feminism, LGBT rights, wildlife conservation and dental hygiene aren't.

I think that any of these causes could easily get a bunch of interest and support if someone published a single compelling Forum post arguing that putting some amount of funding into an existing organization or intervention would lead to a major increase in welfare. (Maybe not wildlife conservation, because it seems insanely hard for that to be competitive with farmed animal welfare, but I'm open to having my mind blown.)

Until that post exists (or some other resource written with EA principles in mind), there's not much for a given person in the community to do. Though I do think that individuals should generally try to read more research outside of the EA-sphere, to get a better sense for what's out there.

If someone is reading this and wants to try writing a compelling post about a new area, I'd be psyched to hear about it!

Or, if you aren't sure what area to focus on, but want to embrace the challenge of opening a new conversation, I've got plenty of suggestions for you (starting here).

*****

However, this calculus can be somewhat incomplete, as it doesn’t take into account the personal circumstances of the particular biologist debating her career. What if she’s a very promising cancer researcher (as a result of her existing track record, reputation or professional inclinations) but it’s not entirely clear how she’d do in the space of clean meat? What if she feels an intense inner drive working on cancer (since her mother died of melanoma)? These considerations should factor in when she tries to estimate her expected career-long impact.

I think that very few people in this community would disagree, at least in the example you've put forth.

*****

From my experience, a biologist choosing to spend her career doing cancer research would often feel inferior to other EAs choosing a more EA-stereotypic career such as pandemic preparedness or clean meat. When introducing herself in front of other EAs, she may start with an apology like “What I’m working on isn’t really related to EA”.

What if we tried more actively to let people feel that whatever they want to work on is really fine, and simply tried to support and help them do it better through evidence and reason?

This is where I agree with you, in that I strongly support "letting people feel that what they want to work on is fine" and "not making people feel apologetic about what they do".

But I'm not sure how many people actually feel this way, or whether the way people respond to them actually generates this kind of feeling. My experience is that when people tell me they work on something unusual, I try to say things like "Cool!" and "What's that like?" and "What do you hope to accomplish with that?" and "Have you thought about writing this up on the Forum?" (I don't always succeed, because small talk is an imperfect art, but that's the mindset.)

I'd strongly advocate for other people in social settings also saying things like this. Maybe the most concrete suggestion from here is for EA groups, and orgs that build resources for them, to encourage this more loudly than they do now? I try to be loud, here and in the EA Newsletter, but I'm one person :-(

*****

I think that the EA community should be a big tent for people who want to do a better job of measuring and increasing their impact, no matter what they work on.

I think that EA research should generally examine a wide range of options in a shallow way, before going deeper on more promising options (Open Phil's approach). But EA researchers should look at whatever seems interesting or promising to them, as long as they understand that getting funded to pursue research will probably require presenting strong evidence of impact/promise to a funder.

I think that EA funding should generally be allocated based on the best analysis we can do on the likely impact of different work. But EA funders should fund whatever seems interesting or promising to them, as long as they understand that they'll probably get less impact if they fund something that few other people in the community think is a good funding target. (Value of learning is real, and props to small funders who make grants with a goal of learning more about some area.)

I think that EA advice should try to work out what the person being advised actually wants — is it "have an impactful career in dental hygiene promotion", or "have an impactful career, full stop"? Is it "save kids from cancer", or "save kids, full stop"? 

And I think we should gently nudge people to consider the "full stop" options, because the "follow your passions wherever they go" argument seems more common in the rest of society than it ought to be. Too many people choose a cause or career based on a few random inputs ("I saw a movie about it", "I got into this lab and not that lab", "I needed to pay off my student loans ASAP") without thinking about a wide range of options first.

But in the end, there's nothing wrong with wanting to do a particular thing, and trying to have the most impact you can with the thing you do. This should be encouraged and celebrated, whether or not someone chooses to donate to it.

Comment by Aaron Gertler (aarongertler) on What would you do if you had half a million dollars? · 2021-07-21T23:02:51.701Z · EA · GW

On (1): Have you encouraged any of these people to apply for existing sources of funding within EA? Did any of them do so successfully?

On (3): The most prominent EA-run "major achievement prize" is the Future of Life Award, which has been won by people well outside of EA. That's one way to avoid bad press — and perhaps some extremely impactful people would become more interested in EA as a result of winning a prize? (Though I expect you'd want to target mid-career people, rather than people who have already done their life's work in the style of the FLA.)

Comment by Aaron Gertler (aarongertler) on EA cause areas are just areas where great interventions should be easier to find · 2021-07-21T22:54:30.213Z · EA · GW

Props for writing the post you were thinking about!

Overwhelmingly, the things you think of as "EA cause areas" translate to "areas where people have used common EA principles to evaluate opportunities". And the things you think of as "not in major EA cause areas" are overwhelmingly "areas where people have not tried very hard to evaluate opportunities".

Many of the "haven't tried hard" areas are justifiably ignored, because there are major factors implying there probably aren't great opportunities (very few people are affected, very little harm is done, or progress has been made despite enormous investment from reasonable people, etc.)

But many other areas are ignored because there just... aren't very many people in EA. Maybe 150 people whose job description is something like "full-time researcher", plus another few dozen people doing research internships or summer programs? Compare this to the scale of open questions within well-established areas, and you'll see that we are already overwhelmed. (Plus, many of these researchers aren't very flexible; if you work for Animal Charity Evaluators, Palestine isn't going to be within your purview.)

Fortunately, there's a lot of funding available for people to do impact-focused research, at least in areas with some plausible connection to long-term impact (not sure what's out there for e.g. "new approaches in global development"). It just takes time and skill to put together a good application and develop the basic case for something being promising enough to spend $10k-50k investigating.

I'll follow in your footsteps and say that I want to write a full post about this (the argument that "EA doesn't prioritize X highly enough") sometime in the next few months.

Comment by Aaron Gertler (aarongertler) on How do you communicate that you like to optimise processes without people assuming you like tricks / hacks / shortcuts? · 2021-07-21T22:35:38.919Z · EA · GW

There are terms like "operations", "management", and "logistics" that might stand in for "optimizing processes" (depending on what processes you are talking about).

It might also be helpful to mention, right away, an example of a useful project you or someone else has done, so that people don't get the wrong idea. (To give a really trivial toy example: "I recently cut down on the time I spend surfing the web by taking all the blogs I follow and putting them in an RSS reader — I like noticing options like that, where I can improve the way I do something.")

Comment by Aaron Gertler (aarongertler) on Further thoughts on charter cities and effective altruism · 2021-07-21T22:23:05.683Z · EA · GW

I'd caution against equating "lack of support from Open Philanthropy and GiveWell" with "lack of interest from people in the EA movement". There are a tiny number of people who contribute to how those organizations give out funding, and a lot of donors who might be open to a strong argument from a charity with a good track record in a different promising area.

The effective altruism (EA) movement dedicates minimal resources to studying the lessons, let alone attempting to replicate, the greatest poverty alleviation in living memory.

If someone is interested in this topic but doesn't have access to a big pile of money, what would you recommend they do? Is there a list of open problems / research questions available somewhere?

I see the list of jobs on the CCI website — more than I expected! So that's a great start, but I wanted to check if there were things you'd recommend aside from donating money and applying for a role.

Comment by Aaron Gertler (aarongertler) on Thoughts on new DAF legislation? · 2021-07-21T22:10:23.552Z · EA · GW

There's also some Facebook discussion of the Forum post.

Comment by Aaron Gertler (aarongertler) on Types of specification problems in forecasting · 2021-07-21T21:54:39.431Z · EA · GW

This is a nice reference!

When you publish a post like this (explaining a major subtopic, as "specification problems" are for the topic of "forecasting"), I recommend looking at the EA Wiki article for the topic in case you see a chance to update it. 

This could mean adding your post to the bibliography, updating the article text to reference the existence of specification problems, etc.

Comment by Aaron Gertler (aarongertler) on Khorton's Shortform · 2021-07-21T20:49:53.121Z · EA · GW

Terms that seem to have some of the good properties of "EA-aligned" without running into the "assuming your own virtue" problem:

  • "Longtermist" (obviously not synonymous with "EA-aligned", but it accurately describes a subset of orgs within the movement)
  • "Impact-driven" or something like that (indicating a focus on impact without insisting that the focus has led to more impact) 
  • "High-potential" or "promising" (indicating that they're pursuing a cause area that looks good by standard EA lights, without trying to assume success — still a bit self-promotional, though)
  • Actually referring to the literal work being done, e.g. "Malaria prevention org", "Alternative protein company"

...but when you get at the question of what links together orgs that work on malaria, alternative proteins, and longtermist research, I think "EA-aligned" is a more accurate and helpful descriptor than "high-impact".

Comment by Aaron Gertler (aarongertler) on Khorton's Shortform · 2021-07-21T20:44:53.544Z · EA · GW

I have exactly the opposite intuition (which is why I've been using the term "EA-aligned organization" throughout my writing for CEA and probably making it more popular in the process).

"EA-aligned organization" isn't supposed to mean "high-impact organization". It's supposed to mean "organization which has some connection to the EA community through its staff, or being connected to EA funding networks, etc."

This is a useful concept because it's legible in a way impact often isn't. It's easy to tell whether an org has a grant from EA Funds/Open Phil, and while this doesn't guarantee their impact, it does stand in for "some people in the community vouch for their doing interesting work related to EA goals".

I really don't like the term "high-impact organization" because it does the same sneaky work as "effective altruist" (another term I dislike). You're defining yourself as being "good" without anyone getting a chance to push back, and in many cases, there's no obvious way to check whether you're telling the truth.

There seems to be an important difference between MIRI and SCI on the one hand, and Amazon and Sunrise on the other. The first two have a long history of getting support, funding, and interest from people in the EA movement; they've given talks at EA Global. This doesn't necessarily make them more impactful than Amazon and Sunrise, but it does mean that working at one of those orgs puts you in the category of "working at an org endorsed by a bunch of people with common EA values".

*****

The fact that people can say "I do ops at an EA org" and be warmly greeted as high status even if they could do much more good outside EA rubs me the wrong way. 

I hope this doesn't happen very often; I'd prefer that we greet everyone with equal warmth and sincere interest in their work, as long as the work is interesting. Working at an EA-aligned org really shouldn't add much signaling info beyond the fact that someone has chosen to come to your EA meetup or whatever.

That said, I sympathize with theoretical objections like "how am I supposed to know whether someone would do more good in some other job?" and "I'm genuinely more interested in hearing about someone's work helping to run [insert org] than I would if they worked in finance or something, because I'm familiar with that org and I think it does cool stuff".

Comment by Aaron Gertler (aarongertler) on Lant Pritchett on the futility of "smart buys" in developing-world education · 2021-07-20T07:28:00.304Z · EA · GW

I deliberately chose not to include this as one of my excerpts, though I don't think it reveals a weakness in anything Pritchett believes — I read him as a skeptic about these "rights" who nevertheless acknowledges that other people would rather talk about rights than economic returns in discussions of education. But whether he believes in the concept or not, your objection to the concept seems correct to me.

Comment by Aaron Gertler (aarongertler) on Miranda_Zhang's Shortform · 2021-07-19T11:42:48.367Z · EA · GW

My hope is that people who see EA-relevant press will post it here (even in Shortform!). 

I also track a lot of blogs for the EA Newsletter and scan Twitter for any mention of effective altruism, which means I catch a lot of the most directly relevant media. But EA's domain is the entire world, so no one person will catch everything important. That's what the Forum is for :-)

I'm not sure whether you're picturing a project specific to stories about EA or one that covers many other topics. In the case of the former, others at CEA and I know about nearly everything (though we don't have it in a database; no one ever asks). In the case of the latter, the "database" in question would probably just be... Google? I'm having trouble picturing the scenario where an org needs to pull from a list of articles they wouldn't find otherwise. (But I'm open to being convinced!)

Comment by Aaron Gertler (aarongertler) on All Possible Views About Humanity's Future Are Wild · 2021-07-19T02:23:27.993Z · EA · GW

There are lots of places you can read about this. Two of my favorite "starter" posts are:

Comment by Aaron Gertler (aarongertler) on Propose and vote on potential EA Wiki entries · 2021-07-19T02:21:20.418Z · EA · GW

Career profiles (or maybe something like "job posts"?)

Basically, writeups of specific jobs people have, and how to get those jobs. Seems like a useful subset of the "Career Choice" tag to cover posts like "How I got an entry-level role in Congress", and all the posts that people will (hopefully) write in response to this.

Comment by Aaron Gertler (aarongertler) on What posts do you want someone to write? · 2021-07-19T02:19:28.761Z · EA · GW

I want people to write posts about their jobs, and how they got those jobs. I think this will help a lot of people, both with object-level information about getting particular jobs, and by making a meta-level statement that it's not impossible or unrealistic to get a job in EA.

Comment by Aaron Gertler (aarongertler) on You should write about your job · 2021-07-19T01:29:49.463Z · EA · GW

Suggestion: If you're not sure anyone would want to read your job post, reply to this comment and say what your job is, then see how people respond.

Also, consider that we'll have a nice tag for these posts and that, if lots of people write them, the tag will become a resource that could help hundreds of EA-aligned job-seekers.