Posts

Take 80,000 Hours' Annual Impact Survey 2019-09-18T04:26:40.144Z · score: 13 (4 votes)
Book launch: "Effective Altruism: Philosophical Issues" (Oxford) 2019-09-18T00:23:14.849Z · score: 35 (16 votes)
Forum Update: New Features (September 2019) 2019-09-17T08:43:31.904Z · score: 32 (11 votes)
EA Organization Updates: August 2019 2019-09-12T08:54:50.734Z · score: 23 (10 votes)
EA Forum Prize: Winners for July 2019 2019-08-20T07:09:17.771Z · score: 22 (9 votes)
How do you, personally, experience "EA motivation"? 2019-08-16T10:04:18.156Z · score: 31 (14 votes)
EA Organization Updates: July 2019 2019-08-07T13:27:10.778Z · score: 48 (27 votes)
The Unit of Caring: On "fringe" ideas 2019-08-02T03:56:40.650Z · score: 66 (27 votes)
The EA Holiday Calendar 2019-07-30T09:03:32.033Z · score: 21 (11 votes)
William Rathbone: 19th-century effective altruist? 2019-07-30T06:14:12.215Z · score: 15 (8 votes)
EA Forum Prize: Winners for June 2019 2019-07-25T08:36:56.099Z · score: 32 (14 votes)
Editing available for EA Forum drafts 2019-07-24T05:56:20.445Z · score: 86 (39 votes)
EA Forum Prize: Winners for May 2019 2019-07-12T01:48:57.209Z · score: 25 (9 votes)
EA Forum Prize: Winners for April 2019 2019-06-04T00:09:45.687Z · score: 29 (14 votes)
EA Forum: Footnotes are live, and other updates 2019-05-21T00:26:54.713Z · score: 24 (16 votes)
EA Forum Prize: Winners for March 2019 2019-05-07T01:36:59.748Z · score: 45 (18 votes)
Open Thread #45 2019-05-03T21:20:43.340Z · score: 10 (4 votes)
EA Forum Prize: Winners for February 2019 2019-03-29T01:53:02.491Z · score: 46 (18 votes)
Open Thread #44 2019-03-06T09:27:58.701Z · score: 10 (4 votes)
EA Forum Prize: Winners for January 2019 2019-02-22T22:27:50.161Z · score: 30 (16 votes)
The Narrowing Circle (Gwern) 2019-02-11T23:50:45.093Z · score: 36 (16 votes)
What are some lists of open questions in effective altruism? 2019-02-05T02:23:03.345Z · score: 23 (13 votes)
Are there more papers on dung beetles than human extinction? 2019-02-05T02:09:58.568Z · score: 14 (9 votes)
You Should Write a Forum Bio 2019-02-01T03:32:29.453Z · score: 21 (15 votes)
EA Forum Prize: Winners for December 2018 2019-01-30T21:05:05.254Z · score: 46 (27 votes)
The Meetup Cookbook (Fantastic Group Resource) 2019-01-24T01:28:00.600Z · score: 15 (10 votes)
The Global Priorities of the Copenhagen Consensus 2019-01-07T19:53:01.080Z · score: 43 (26 votes)
Forum Update: New Features, Seeking New Moderators 2018-12-20T22:02:46.459Z · score: 23 (13 votes)
What's going on with the new Question feature? 2018-12-20T21:01:21.607Z · score: 10 (4 votes)
EA Forum Prize: Winners for November 2018 2018-12-14T21:33:10.236Z · score: 49 (24 votes)
Literature Review: Why Do People Give Money To Charity? 2018-11-21T04:09:30.271Z · score: 24 (11 votes)
W-Risk and the Technological Wavefront (Nell Watson) 2018-11-11T23:22:24.712Z · score: 8 (8 votes)
Welcome to the New Forum! 2018-11-08T00:06:06.209Z · score: 13 (8 votes)
What's Changing With the New Forum? 2018-11-07T23:09:57.464Z · score: 17 (11 votes)
Book Review: Enlightenment Now, by Steven Pinker 2018-10-21T23:12:43.485Z · score: 18 (11 votes)
On Becoming World-Class 2018-10-19T01:35:18.898Z · score: 20 (12 votes)
EA Concepts: Share Impressions Before Credences 2018-09-18T22:47:13.721Z · score: 9 (6 votes)
EA Concepts: Inside View, Outside View 2018-09-18T22:33:08.618Z · score: 2 (1 votes)
Talking About Effective Altruism At Parties 2017-11-16T20:22:46.114Z · score: 8 (8 votes)
Meetup : Yale Effective Altruists 2014-10-07T02:59:35.605Z · score: 0 (0 votes)

Comments

Comment by aarongertler on Book launch: "Effective Altruism: Philosophical Issues" (Oxford) · 2019-09-19T23:27:00.743Z · score: 4 (2 votes) · EA · GW

Seconded! Most library books are read only a few times; libraries are generally eager to order books if they know they'll have an audience. If you're a member of a local EA group, you could mention when making a request in person that multiple people in your group are interested in the book -- I imagine that would be helpful.

Comment by aarongertler on Progress book recommendations · 2019-09-18T08:57:45.731Z · score: 3 (2 votes) · EA · GW

In case you didn't see, this post was featured on Marginal Revolution a while back! (I assume they found it on the Forum rather than your private blog, since they've featured many other Forum posts.)

Comment by aarongertler on Forum Update: New Features (September 2019) · 2019-09-18T00:14:17.439Z · score: 5 (3 votes) · EA · GW

So far, it seems as though people have been treating them like non-intensive posts (e.g. friendly commentary/pushback, no point-by-point criticism). That seems good to me, but my vision of how Shortform should work may change as the feature gets used more often.

Comment by aarongertler on Effective Altruism London Strategy 2019 · 2019-09-17T09:22:14.760Z · score: 2 (1 votes) · EA · GW

Relatedly, we've occasionally tried to think of better alternatives to "optimal world" language on CEA's website. I even tried some timed/targeted brainstorming, and nothing much came out of it. If anyone has suggestions for language that is a bit less utopian but still keeps the "get the best things done" idea, I'd love to hear them!

Comment by aarongertler on New protein alternative produced from CO2 · 2019-09-17T09:07:00.621Z · score: 2 (1 votes) · EA · GW

Stopping by to say that I find it very comforting when the people I hope would have talked to each other about their projects actually did talk to each other about their projects. Three cheers for communicating with do-gooders outside the community!

Comment by aarongertler on How big a problem is misuse of pesticides? · 2019-09-16T21:07:45.829Z · score: 3 (2 votes) · EA · GW

I think you'll get more answers to this question if you give a bit more information at the start. Some points that would be helpful:

  • What are some problems that might result from the misuse of pesticides?
  • Have you read any articles/books about the problem that you could link to?

In general, it's easier for people to respond to ideas/arguments than to come up with opinions from scratch. I may not have a good estimate for "scale of the pesticide issue," but if someone tells me they think pesticides cause X deaths per year because of reasons Y and Z, I can evaluate those claims to see how closely I agree with that person's estimate.

Comment by aarongertler on [Solved] Was my post about things you'd be reluctant to express in front of other EAs manually removed from the front page, and if so, why? · 2019-09-12T22:46:09.513Z · score: 2 (1 votes) · EA · GW

Thanks for clearing that up! I'm glad we don't have to track down a categorization bug :-)

Comment by aarongertler on [Solved] Was my post about things you'd be reluctant to express in front of other EAs manually removed from the front page, and if so, why? · 2019-09-12T19:24:48.277Z · score: 3 (2 votes) · EA · GW

Your post shouldn't have started on the front page, because all posts start out in the "personal blog" category by default, which is only viewable from "All Posts". Some posts are then classified either as "frontpage" (shows up on the front page) or "community" (shows up on the community page). Because your post was about community discussion norms, it was a better fit for the "community" category.

If posts that haven't been categorized are showing up on the front page, something unexpected is happening with our code -- do you happen to have a screenshot of this? Or is it possible that you were viewing the "All Posts" page?

(People with questions about moderation or Forum features are also welcome to PM me on the Forum or email me directly.)

Meta-note: This post will stay in the "personal blog" category, because that's what we generally use for questions with a single "correct" answer, especially if they are questions about how the Forum works. See here for more on categorization.


Comment by aarongertler on Come hang out with EA Princeton! · 2019-09-11T12:23:55.311Z · score: 0 (2 votes) · EA · GW

Note on moderation: For local events in areas outside of major cities, we generally use the "personal blog" category. This means the post will be less visible for most Forum users (for whom it won't be relevant), but still shareable with targeted audiences.

Comment by aarongertler on Movement Collapse Scenarios · 2019-08-31T23:53:29.034Z · score: 11 (7 votes) · EA · GW

I'm glad people want to look for evidence that CEA (and other orgs) is being adequately self-reflective. However, I'd like to give some additional context on Glassdoor. Of the five CEA reviews posted there:

  • Two are from people who confused CEA with other organizations (neither of which was cited in John's comment).
  • One is fairly recent and positive (also not cited).
  • One is from September 2016, at which point only three of CEA's current staff were employed by the organization (three-and-a-half if you count Owen Cotton-Barratt, who is currently a part-time advisor to CEA).
  • One is from March 2018 -- more recent, but still representing a substantial departure from CEA's current staff list, including a different executive director. A lot can change over the course of 18 months.

I'll refrain from going into too much detail, but my experience is that CEA circa late 2019 is intensely self-reflective; I'm prompted multiple times in the average week to put serious thought into ways we can improve our processes and public communication.

Comment by aarongertler on Our forthcoming AI Safety book · 2019-08-30T23:55:53.297Z · score: 2 (1 votes) · EA · GW

Can you say any more about the circumstances under which your book is being published? What kinds of books does your publisher normally release, if you are working with one? What audience do you plan to target?

Also, for other users' reference, "EPFL" refers to a bilingual French/English university in Switzerland.

Comment by aarongertler on Why I Am Not a Technocrat · 2019-08-20T07:07:40.709Z · score: 9 (5 votes) · EA · GW

When you write linkposts on the EA Forum, it's good to include a brief explanation of how the content relates to effective altruism and/or which points you thought were most important. This makes it a lot more likely that the post will create discussion. This is a good recent example of a linkpost that does this without much extra work -- just a relevant excerpt and a one-sentence explanation.

For now, I'm leaving this post in the "Personal Blog" category; if you add some more details, I'll move it to the "Community" category.

Comment by aarongertler on Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program · 2019-08-19T10:36:57.076Z · score: 2 (1 votes) · EA · GW
It's plausible that some of these are as cost-effective as the GW top charities, but perhaps not that they are as cost-effective on average, or in expectation.

I agree, for most values of "plausible". Otherwise, it would imply that TLYCS is catching many GiveWell-tier charities that GiveWell either missed or turned down, which is unlikely given TLYCS's much smaller research capacity. But all TLYCS charities are in the category "things I could imagine turning out to be worthy of support from donors in EA with particular values, if more evidence arose" (which wouldn't be the case for, say, an art museum).

Comment by aarongertler on Effective altruism and net-positive living · 2019-08-15T13:06:32.458Z · score: 6 (3 votes) · EA · GW

This is really detailed and well-written, and this comment doesn't do the essay justice, but I do want to make a few points:

Before tackling any human-specific problem, there is a more fundamental issue that all other issues need to comply with.

This is a controversial statement, and I'd have liked to see more justification for it. It sounds as though you consider "biocapacity" to be a source of catastrophic risk to humanity -- is that actually the case? If so, what does a "biocapacity catastrophe" look like? Are we doomed to run into dangerous shortages of key resources, even if we make the natural adjustments available to us (e.g. raising the price of scarce materials)? What are some experts' "timelines" for when we might see catastrophic effects from overtaxing the planet?

****

Ethical offsetting has a contentious history in EA; you may want to read that post and its comments, and see whether any critiques (there or on other posts you find) ring true to you.

For your plan in particular, donating to many different charities to offset resource use has complexity costs. You also risk not realizing when a charity's impact gets much lower for some reason (e.g. low-hanging fruit dries up, the charity's mission changes).

What are some advantages to this approach, compared to the more common "donate money to the one or two organizations I think are most effective, without regard for how they do or don't compensate for things I've done"? Do you think it's likely to be personally interesting/compelling to people who wouldn't otherwise have EA inclinations?

****

Regarding Cool Earth: you might find this critique of the organization interesting (it challenges some of the numbers you cited in your post).

Comment by aarongertler on Ask Me Anything! · 2019-08-15T12:40:51.548Z · score: 4 (3 votes) · EA · GW

Given the broad range of topics covered, it's difficult to place this post into a particular category -- some questions are more "Frontpage", others more "Community". Will's answers are likely to include a mix of content that fits both categories, but the post is probably a better fit for "Community" overall, because the kinds of questions people wound up asking were mostly related to topics for which we use that category.

Comment by aarongertler on Ask Me Anything! · 2019-08-15T12:38:47.059Z · score: 15 (7 votes) · EA · GW

Rob's FAQ is also my favorite introduction to EA, and I'll be spending some time over the next month thinking about whether there's a good way to blend the style of that introduction with the current EA.org introduction (which is due for an update).

Comment by aarongertler on [Links] Serendipity and discovery, and a tool for making progress on hard questions/problems · 2019-08-15T12:32:27.777Z · score: 4 (2 votes) · EA · GW

What does it actually mean to have "serendipity" as a model in one's toolkit? Would you be open to writing a brief summary of the "how", or do you strongly recommend just watching the video?

Comment by aarongertler on [Link] US Egg Production Data Set · 2019-08-15T12:31:28.695Z · score: 6 (3 votes) · EA · GW
Initial analysis of egg production trends shows that, at the time of publication of this report, 20.3% of all table egg layers lived in cage-free systems. This figure represents an increase of 10.2 percentage points from August 2016 to June 2019, with an increase of 17.1 percentage points over the entire sample period of 2007 to June 2019.

I was happily surprised by the magnitude of this change!

Also, meta-comment: I really like seeing useful data publicized, even if it's not the sort of thing people will naturally comment on. Your efforts are appreciated.
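
To make the quoted figures concrete, here's a quick back-of-envelope check of the baselines they imply (a sketch in Python; the percentages come from the quote above, and everything else is simple subtraction):

```python
# Implied cage-free shares, working backward from the quoted figures.
# All numbers are percentages of US table egg layers.
current_share = 20.3          # at publication (mid-2019), per the report
rise_since_2016 = 10.2        # percentage points, Aug 2016 -> Jun 2019
rise_since_2007 = 17.1        # percentage points, 2007 -> Jun 2019

share_aug_2016 = current_share - rise_since_2016   # ~10.1%
share_2007 = current_share - rise_since_2007       # ~3.2%

print(f"Implied cage-free share, Aug 2016: {share_aug_2016:.1f}%")
print(f"Implied cage-free share, 2007:     {share_2007:.1f}%")
# The cage-free share roughly doubled in under three years,
# and grew ~6x over the full 2007-2019 sample period.
```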

Comment by aarongertler on Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program · 2019-08-15T03:42:49.989Z · score: 4 (2 votes) · EA · GW
I would consider TLYCS's range very broad, but you may disagree.

TLYCS only endorses 22 charities, all of which work in the developing world on causes that are plausibly cost-effective on the level of some GiveWell interventions (even though evidence is fairly weak on some of them -- I recall GiveWell being more down on Zusha after their last review). This selection only looks broad if your point of comparison is another EA-aligned evaluator like GiveWell, ACE, or Founders Pledge.

Meanwhile, many charitable giving platforms/evaluators support/endorse a much wider range of nonprofits, most of them based in rich countries. Even looking only at Charity Navigator's perfect scores, you see 60 charities (only 1/4 of which are "international") -- and Charity Navigator's website includes hundreds of other favorable charity profiles. Another example: When I worked at Epic, employees could support more than 100 different charities with the company's money during the annual winter giving drive.

I also imagine that many corporate giving platforms would try to emphasize their vast selection/"the huge number of charities that have partnered with us" -- I'm impressed that Donational was selective from the beginning.

Comment by aarongertler on [Link] Virtue signaling annotated bibliography (Geoffrey Miller) · 2019-08-15T00:14:14.985Z · score: 2 (1 votes) · EA · GW

Moderator note: Leaving this in the "Personal Blog" category, as it is just a list of books without much additional detail.

When I first read the excerpt, I thought Miller's point was that effective altruism was bogged down in virtue signaling. But in context, it seems that he meant something like "the desire to signal virtue drives people away from some of the most effective forms of altruism".

Comment by aarongertler on What posts you are planning on writing? · 2019-08-13T04:47:19.197Z · score: 2 (1 votes) · EA · GW

I'm especially curious about (2) if you include "spending time in the city of Oxford" and not just "getting into Oxford" (which, as noted below, is hard). I've been looking for posts about what it's like to be part of EA culture in the cities where it is most present (I now live in one of those, but I'm guessing that Oxford differs from Berkeley in many ways).

Comment by aarongertler on Age-Weighted Voting · 2019-08-13T04:29:29.591Z · score: 3 (2 votes) · EA · GW

Was this study ever published anywhere? I'd love to put it up on the Forum (or see it posted with a summary from the authors, if you'd be up for it!).

Comment by aarongertler on Do Long-Lived Scientists Hold Back Their Disciplines? · 2019-08-12T18:38:43.874Z · score: 14 (8 votes) · EA · GW

Putting aside the really bad consequences of a world without life extension (people dying all the time, even when they don't want to), how might a world with life-extension technology redefine the meaning of "too long"?

The classic archetype of an "aging star scientist" shows someone getting older and "stuck in their ways", not coming up with brilliant new ideas or collaborating well with younger researchers. But if new technology increases the length of a person's academic career overall, is it not also likely to increase the length of their productive career? To increase their healthspan (intellect included), rather than only their lifespan? Getting more years out of a brilliant mind seems very valuable.

Comment by aarongertler on Could we solve this email mess if we all moved to paid emails? · 2019-08-12T06:47:13.409Z · score: 6 (4 votes) · EA · GW

Do you have any data on the extent of the "email mess", either within the community or in the general space of "people who do a lot of work through their email"? I don't have an intuitive idea for how stressed the average person is by email, much less the average person in EA (we're an unusual group in some ways).

This is the kind of thing that could be a good Effective Altruism Poll, by the way!

Comment by aarongertler on Could we solve this email mess if we all moved to paid emails? · 2019-08-12T06:43:23.291Z · score: 6 (4 votes) · EA · GW

I really like these suggestions. One thing I didn't see, which can be really helpful on the recipient side: Be ready to respond with a "pre-response response".

For example, when I do editing for EA Forum posts, rather than let something sit until I'm ready to read through it, I might respond to the sender saying something like: "I expect to get to this by Friday. If you don't have an email from me by then, you're welcome to follow up and bother me about it."

This lets the other person know I've seen their message and plan to respond. I might also say, if I'm really crunched for time/prioritization: "I don't know whether I'll have time to get to this at all; if you don't hear from me, I'd recommend X", where X might be "emailing someone else", "reading an article", "posting your piece as-is", etc.

Comment by aarongertler on Call for beta testers for the EA Pen Pals project. · 2019-08-12T06:37:14.343Z · score: 3 (2 votes) · EA · GW

If someone who has a deep background in EA wants to participate, but specifically only wants to be matched with someone who does not have a deep background, how should they make this known to the organizers?

For example, I'd be interested in taking part, but I think I'd do much more good answering questions from someone newer to EA than chatting to someone who also works at a Bay Area EA org.

Comment by aarongertler on Are there other events in the UK before/after EAG London? · 2019-08-12T06:34:23.596Z · score: 3 (2 votes) · EA · GW

Meta-comment: This is a perfectly fine question to ask here, though asking separately on Facebook will let you reach some people this post wouldn't reach: Here's the event page for London 2019.

Comment by aarongertler on Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program · 2019-08-06T12:34:24.820Z · score: 3 (2 votes) · EA · GW

Thanks for this reply! I don't have time to engage in much more detail, but I'm now a little more uncertain that my specific qualms with indirect impact are important to the project.

I don't want to make you dig through your notes just to answer my question; I more intended to make the general point that I'd have liked to have a few more concrete facts that I could use to help me weigh Rethink's judgment. (For example, if you shared some current numbers on corporate giving, I could assign my own 'max scale' parameter and check my intuition against yours.)

Knowing that Donational started out with all or almost all TLYCS charities reduces my concern a lot. The impression I had was that they'd been working with a very broad range of charities and were radically cutting back on their selection.

Comment by aarongertler on The Possibility of an Ongoing Moral Catastrophe (Summary) · 2019-08-03T01:22:37.033Z · score: 7 (5 votes) · EA · GW

We absolutely welcome summaries! People getting more ways to access good material is one of the reasons the Forum exists.

That said, did you consider copying the summary into a Forum post, rather than linking it? That's definitely more work, but my impression is that it usually leads to more discussion when people don't have to click away into another page. I don't have strong evidence to back that up, though.

Also: because long titles are cut short in some views of the Forum, I'd recommend titling summaries something like "The Possibility of an Ongoing Moral Catastrophe (Summary)".

Comment by aarongertler on Is running Folding@home / Rosetta@home beneficial? · 2019-08-02T03:44:37.211Z · score: 2 (1 votes) · EA · GW

I certainly don't endorse "always optimize"! I spend far too much time reading manga and trying to win Magic: the Gathering tournaments for that. I fully endorse analyzing things that are interesting/entertaining. But it seems bad to get stuck with something that is both low-expected-impact and low-interest. Someone who really likes Folding@Home should totally give the analysis a go; someone who doesn't care and just wants evaluation practice has many other options.

Comment by aarongertler on Is running Folding@home / Rosetta@home beneficial? · 2019-08-01T23:33:29.629Z · score: 3 (2 votes) · EA · GW

This might be the case, though if someone has the time to analyze a complicated phenomenon and wants to get practice, I think they should take a bit more time to choose a phenomenon to start with, so that they can get one with other useful characteristics. For example, they might try to find something with a larger expected magnitude of impact, positive or negative, or to choose a question that is of direct relevance to the EA community (e.g. something which is an active topic of debate, or involves some very common thing many people in EA do).

Along those lines, I like Gwern's study of melatonin, which involves a bit of self-experimentation but also expected-value calculations. Various other productivity tools/strategies could also be solid candidates.

Comment by aarongertler on Boundaries of Empathy and Their Consequences · 2019-08-01T08:15:02.741Z · score: 4 (3 votes) · EA · GW

(Meta: It's cool that you changed the post in response to feedback!)

I wouldn't recommend trying too hard to shift the views of that single coworker, unless the two of you are close enough friends that you can keep pushing every so often without annoying her too much.

On your question (a), I think the answer is "yes", in the sense that some people will naturally see much greater morally relevant differences between humans and animals than other people. If you take your co-worker at her word, she may literally "not see it that way".

To be frank, I have a very difficult time empathizing with farmed animals, and accept arguments about suffering based on biological evidence (and trust in the rest of the community) rather than innate feelings. If I weren't surrounded by people and research pushing me to care, I don't know how I'd feel now. If pushed, I can articulate reasons why I struggle to empathize despite my rational knowledge, but those reasons will sound silly, and I have to dig past my natural apathy to find them. Your friend may never have done that kind of digging.

On (b), I very much doubt there are generalizable answers. For every major film about animal welfare, there exist people whose views were transformed by the film and people who watched it and didn't change their views at all. The same is true for every relevant book, every veg*n argument, and so on. People are different along so many different axes that you rarely find a "general" path to persuasion, especially on a cause that entails such a different way of thinking about the world (and acting in the world).

This doesn't mean you have to give up on persuasion, though. My response to developing this view was to take causes I cared about (e.g. the general case for effective altruism) and develop a collection of distinct arguments/frames, so that I could try to shape my persuasion to match any person I spoke with. The same approach might work well for animal advocacy; there are arguments you could use to appeal specifically to libertarians, socialists, pet owners, environmentalists, Christians, and any number of other groups.


Comment by aarongertler on Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program · 2019-08-01T03:25:54.473Z · score: 8 (5 votes) · EA · GW

On the whole: Interesting write-up, and it certainly works as an intro to how EA forecasting and impact-estimation techniques can be applied in depth. I'd read any of these that came out in the future, and can already think of organizations I'd be curious to see evaluated in this way.


This is similar to Jonas's second comment, but it seems like concerns about the indirect harms of economic growth or "the reduction of agency" are both constants in any evaluation of any program in global poverty/development, or of anything that encourages donations to such programs.

Perhaps this indicates that your models could be filled in over time with "default scores" that apply to all projects within a certain area? For example, any program aiming to reduce poverty could get the same "indirect harm" scores as this project's anti-poverty side.

----

Something also feels off about noting the potential harms of effects which are generally very good. I'm having trouble coming up with a formal explanation for some reason, so I'll write this out informally:

If I'm considering funds for a project to reduce poverty and grow the economy, and someone tells me that doing so could increase the number of animals that get eaten... these two effects scale together. The more that poverty is reduced, and the faster the economy grows, the more animals are likely to be eaten. This is an "indirect bad outcome" that I'm actually happy to see in some sense, because the existence of the bad outcome indicates that I succeeded in my primary goal.

It's as though someone were to warn me about donating to an X-risk-reduction organization by pointing out that more humans living their lives implies that more humans will get cancer. Cancer is definitely a harm related to being alive, but it's one that I'm implicitly prepared to accept in the course of helping more humans exist. If you came back from some point in the future to tell me that the cancer rate had remained constant over time, but that ten billion humans had cancer, I'd probably be very happy to hear the news, because it would imply the existence of hundreds of billions of cancer-free humans spread out across planets or artificial interstellar habitats.

Meanwhile, if someone were to tell me that they think cancer is so bad that it makes additional years of life net-negative, I'd tell them to support promising cancer treatments rather than even considering projects that create more years of human life.

If a socialist tells me they're concerned about poor people losing autonomy as a result of charitable giving, my response would be something like: "...okay. That's going to be par for the course in this entire category of projects. By the project's very nature, it should be clear that it's not something you'll want to support if you generally oppose charity." And then I'd produce a report intended for people who do believe charity generally does more good than harm, because they're the only ones who might actually satisfy their values by donating.

----

This is oversimplified and unsophisticated, and it's easy to think of counterarguments. But I still feel as though "losing autonomy" is a very different kind of concern than, say, "Donational falls apart, and its initial corporate partners become much less likely to run effective giving programs in the future". The latter is inherent to specific features of Donational, rather than specific features of charitable giving, so it helps me decide between Donational and other charitable projects. The former doesn't help me make that choice.


On another note, I second some of Oli's concerns; I wish the section on Donational's basic strategy had been a lot longer. Things I don't think were addressed:

  • Who are Donational's competitors in the corporate-giving space?
  • How large is the total market for a product like Donational's?
  • What is Donational's pitch to COOs who have predictable objections?
    • For example, a platform that only features a tiny set of charities will naturally be less appealing to executives than a platform with a wider range, because it will annoy employees and may not appeal to executives' views on the best causes.
    • I was startled to see Donational decide to limit its selection so drastically -- it feels like an enormous product change that will seem like a regression to almost all customers. I'm surprised that the CEO found it worth doing just to appeal to our small donor community. What will actual VCs think? (Perhaps the company never intends to raise a private funding round, but VCs aside, there are still customers to consider.)

On the whole, while I understand that this is an atypical organization to evaluate in this way, I felt like I was seeing many indirect/correlational measures of success ("CEOs with these traits tend to run good organizations") and little explicit discussion of the program's strategy ("this is how they plan to find companies who might be good customers for their service"). I generally prioritize the latter ahead of the former.

Comment by aarongertler on The EA Forum is a News Feed · 2019-08-01T00:25:58.490Z · score: 4 (2 votes) · EA · GW
...an area for new people to ask questions or introduce themselves without fear of being reprimanded or wasting the time of other users.

The "Ask Question" feature is meant to give people a way to ask questions about introductory EA concepts (or other questions, of course).

When I communicate with new people, I try to emphasize that they aren't wasting anyone's time, and I hope that our About page also helps to get this message across.

It's possible that a dedicated "introduce yourself" thread could be useful (though I'd probably want to disable voting, as it's a bit weird to have your introduction voted on). Forum bios are also a reasonable way to do this without a dedicated thread.

----

Also, I've been happy to see that the Forum hasn't developed a pattern of "reprimanding new users". I wish this were easier to communicate to new users! (As I mentioned elsewhere on this thread, it's understandable to feel intimidated when you join a new community.)

Comment by aarongertler on Is running Folding@home / Rosetta@home beneficial? · 2019-07-31T22:54:08.469Z · score: 9 (6 votes) · EA · GW

Rather than get into the details, I'll make the meta-level point that the impact of your action here is likely to be very small in one direction or another.

At best, you are one more computer in a network of millions*; at worst, you've added a tiny amount of pollution to the air, which might take a few minutes to an hour off of humanity's collective lifespan, if we stick to Gwern's reasoning -- you might waste more human life in the course of spending time to install the software than you would actually running the program.

Meanwhile, the "indirect costs" are based mostly on money you could otherwise donate to charity, a consideration which could come up every time you spend money on anything (and which is generally better to ignore unless you're making a big spending decision; I wouldn't worry about $10/year).

Given the complexity of the issue (e.g. trying to calculate your computer's extra electricity usage, evaluating the expected value of papers produced through FAH), I would recommend against trying to make a serious calculation of your impact. As with many questions people ask in EA spaces, "don't worry about it" is a reasonable answer.

----

*There are only about 100,000 machines in the FAH network right now, but many of those were designed specifically for high-performance computation; I'd be unsurprised if an average home machine contributed one-millionth or less of the project's processing power.
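
For intuition, here's a rough Fermi sketch of that guess (Python; the ~100,000 machine count is from the footnote above, while the per-machine performance figures and the high-end share are illustrative assumptions, not measured values):

```python
# Back-of-envelope estimate of one home PC's share of Folding@home.
# Assumptions (illustrative, not measured):
#   - ~100,000 active machines, per the footnote above
#   - a typical home machine folds at ~0.1 TFLOPS
#   - dedicated/high-end machines average ~10 TFLOPS
#   - high-end machines make up ~10% of the network

n_machines = 100_000
home_tflops = 0.1
highend_tflops = 10.0
highend_fraction = 0.10

total_tflops = n_machines * (
    highend_fraction * highend_tflops
    + (1 - highend_fraction) * home_tflops
)

home_share = home_tflops / total_tflops
print(f"Network total: ~{total_tflops:,.0f} TFLOPS")
print(f"One home PC's share: ~1 in {1 / home_share:,.0f}")
```

Under these assumptions, a single home PC contributes on the order of one-millionth of the network's compute, consistent with the footnote.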

Comment by aarongertler on The EA Holiday Calendar · 2019-07-31T22:32:04.089Z · score: 2 (1 votes) · EA · GW

Thanks for these suggestions! The founding of GWWC might work well, as the "birthday" of a living organization rather than a living person.

Comment by aarongertler on William Rathbone: 19th-century effective altruist? · 2019-07-31T22:29:39.075Z · score: 4 (3 votes) · EA · GW

According to Wikipedia, Rathbone served in Parliament for nearly 30 years and did a lot for the nursing reform movement. I don't know of any sources for how he might have influenced other wealthy philanthropists at the time. (He was a contemporary of Carnegie and Rockefeller, but I don't know of any evidence that he influenced their giving.)

The book linked in the second quote might contain more information on this.

Comment by aarongertler on Four practices where EAs ought to course-correct · 2019-07-31T22:19:12.182Z · score: 29 (12 votes) · EA · GW

I work for CEA, but these views are my own.

Ruthlessness comment:

Short version of my long-winded response: I agree that promotion is great and that we should do more of it if we see growth slowing down, but I don't see an obvious reason why promotion requires "ruthlessness" or more engagement with criticism.

  • I'm in favor of promoting EA, and perhaps being a bit less humble than we have been in our recent public communication. I really like cases where someone politely but firmly rebukes bad criticism. McMahan 2016 is the gold standard, but Will MacAskill recently engaged in some of this himself.
  • At the same time, I've had many interactions, and heard of many more interactions, where someone with undeniable talent explained that EA came across to them as arrogant or at least insufficiently humble, to the extent that they were more reluctant to engage than they would have been otherwise.
    • Sometimes, they'd gotten the wrong idea secondhand from critics, but they were frequently able to point to specific conversations they'd had with community members; when I followed up on some of those examples, I saw a lot of arrogant, misguided, and epistemically sketchy promotion of EA ideas.
    • The latter will happen in any movement of sufficient size, but I think that a slight official move toward ruthlessness could lead to a substantial increase in the number of Twitter threads I see where someone responds to a reasonable question/critique with silly trolling rather than gentle pushback or a link to a relevant article.
  • How frequently are people who take action for their "teams" actually doing something effective? For all the apparent success of, say, extremist American political movements, they are much smaller and weaker in reality than the media paints them as. Flailing about on Twitter rarely leads to policy change; donor money often goes to ridiculous boondoggle projects rather than effective movement-building efforts. I can't think of many times I've seen two groups conflict-theorying at each other and thought "ah, yes, I see that group X is winning, and will therefore get lasting benefit from this fight".
    • So far, EA has done a pretty good job of avoiding zero-sum fights in favor of quiet persuasion on the margins, and in doing so, we've moved a lot of money/talent/attention.
    • If we start picking more fights (or at least not letting fights die down), this ties up our attention and tosses us into a very crowded market.
      • Can EA get any appreciable share of, say, anti-Trump dollars, when we are competing with the ACLU and Planned Parenthood?
      • Will presidential candidates (aside from Andrew Yang) bother to call for more aid funding or better AI policy when constituencies around issues like healthcare and diversity are still a hundred times our size?

It is likely that popular appeal can help EA achieve some of its aims; it could grow our talent pool, increase available fundraising dollars, and maybe help us push through some of our policy projects.

On the other hand, much of the appeal that EA already has is tied to the way it differs from other social movements. Being "nice" in a Bay Area/Oxford sense has helped us attract hundreds of skilled people from around the world who share that particular taste (and often wound up moving to Oxford or the Bay Area). How many of these people would leave, or never be found at all, if EA shifted in the direction of "ruthlessness"?

----

But this all feels like I'm nitpicking at one half of your point. I'm on board with this:

Every person needs to look in their own life and environment to decide for themselves what they should do to develop a more powerful EA movement, and this is going to vary person to person.

Some people are really good at taking critics apart, and more power to them. Even more power to people who can produce wildly popular pro-EA content that brings in lots of new people; Peter Singer has been doing this for decades, and people like Julia Galef and Max Roser and Kelsey Piper are major assets.

But "being proud of EA and happy to promote it" doesn't have to mean "getting into fights". Total ignorance of EA is a much larger (smaller?) bottleneck to our growth than "misguided opposition that could be reversed with enough debate".

So far, the "official"/"formal" EA approach to criticism has been a mix of "polite acknowledgement as we stay the course", "crushing responses from good writers", and "ignoring it to focus on changing the world". This seems basically fine.

What leads you to believe that the problem of "growth tapering off" is linked to "insufficient ruthlessness" rather than "insufficient cheerful promotion without reference to critics"?

Comment by aarongertler on Four practices where EAs ought to course-correct · 2019-07-31T21:48:41.060Z · score: 13 (6 votes) · EA · GW

Methodology comment:

  • I've been saying this in various comments for a long time, and I was glad to see the point laid out here in more detail.
    • My comments often look like: "When you say that 'EA should do X', which people and organizations in EA are you referring to? What should we do more/less of in order to do less/more of X? What are cases where X would clearly be useful?"
  • I'm a big fan of the two-paper rule, and will try to remember to apply it when I respond to methodology-driven posts in the future.

Regarding this claim:

EA has made exactly one major methodological step forward since its beginnings, which was identifying the optimizer's curse about eight years ago, something which had the benefit of a mathematical proof.

I appreciate that you went on to qualify this statement, but I'd still have appreciated some more justification. Namely, what are some popular ideas that many people thought were a step forward, but that you believe were not?

If methodological ideas generally haven't been popular, EA wouldn't be emphasizing methodology; if they were popular, I'd be curious to see any other writing you've done on reasons you don't think they helped. (I realize that would be a lot of work, and it may not be a good use of your time to satisfy my curiosity.)

When I look at the top ~50 Forum posts of all time (sorted by karma), I only see one that is about methodology, and it's not so much prescriptive as descriptive ("EA is biased towards some methodologies, other methodologies exist, but I'm not actively recommending any particular alternatives"). Almost all the posts are about object-level research or community work, at least as far as I understand the term "object-level".

I can only think of a few cases when established EA orgs/researchers explicitly recommended semi-novel approaches to methodology, and I'm not sure whether my examples (cluster thinking, epistemic modesty) even count. People who recommend, say, using anthropological methods in EA generally haven't gotten much attention (as far as I can recall).

Comment by aarongertler on Four practices where EAs ought to course-correct · 2019-07-31T21:27:57.509Z · score: 20 (10 votes) · EA · GW

I found this to be thought-provoking and I'm glad you posted it. With that in mind, this list of points will skew a bit critical, as I'm more interested to see responses in cases where I disagree.

Diet change comment:

  • I haven't seen much general EA advocacy for going veg*n, even from organizations focused on animal welfare (has a single person tried to make this argument on the Forum since the new version was launched?).
    • Instead, I think that most veg*n people in EA eat that way out of a personal desire to avoid action they see as morally wrong, rather than because they have overinflated estimates of how much good they are accomplishing. I made a poll to test this assumption.
    • Anecdotally, I basically agree with your numerical argument. I eat a lot of cheese, some other dairy products, and occasionally meat (maybe once or twice a month, mostly when I'm a guest at the house of a non-vegetarian cook, so that they don't have to worry about accommodating me). But I still eat less meat than I used to (by a factor of at least ten), for a variety of EA-ish reasons:
      • I feel worse about minor immoral action than I used to.
      • I'm surrounded by people who have much stronger views about animal suffering than I do and who eat veg*n diets.
      • I enjoy supporting (read: taste-testing) meat alternatives.
      • I think there's a chance that Future Me will someday look back at Past Me with Future Ethics Glasses and be annoyed about the meat I was consuming.

Comment by aarongertler on Editing available for EA Forum drafts · 2019-07-30T22:23:53.071Z · score: 3 (2 votes) · EA · GW

You could try sending a link to a Google Doc via OneTimeSecret.

Comment by aarongertler on The EA Forum is a News Feed · 2019-07-29T20:16:01.666Z · score: 13 (5 votes) · EA · GW

On the topic of "writing posts is intimidating": This can definitely be true! I still remember the first time I submitted a full post to LessWrong, checking back every few hours for comments and wondering whether certain Famous (To Me) Internet People would see it.

On the content side, I'm trying to make writing posts easier by offering review of post drafts (and ideas that haven't become drafts yet). I've only heard from a couple of people so far, and have lots of capacity to provide feedback!

Comment by aarongertler on What posts you are planning on writing? · 2019-07-28T13:24:47.704Z · score: 6 (5 votes) · EA · GW

It seems more likely than not (at least to me) that EA will make only a small dent in history, if it is remembered at all. The post explores what might happen in the timelines where we succeed.

Comment by aarongertler on EA Forum Prize: Winners for June 2019 · 2019-07-27T02:37:32.664Z · score: 2 (1 votes) · EA · GW

One direct example, from a December winner:

I was definitely motivated to invest more time and effort by the hope of winning the prize (along with the satisfaction of getting front page with a lot of karma).

Anecdotally, I've heard from a few other winners and authors that the Prize is a motivating factor to spend more time/effort on posts, but I couldn't find public statements to that effect.

Comment by aarongertler on EA Forum Prize: Winners for June 2019 · 2019-07-27T02:34:13.130Z · score: 11 (6 votes) · EA · GW
I expect the same post written by someone else would have not received much prominence and I expect would have very unlikely been selected for a prize.

I'm not sure about this. One of last month's winners, "Aligning Recommender Systems," also outlined an argument for EAs gaining experience/pursuing careers in a field that hadn't been covered much or at all by prior authors, and was highly upvoted. As far as I know, neither author works for an EA organization (though I don't know much about their background, and would appreciate someone correcting me if I'm wrong).

I think it's particularly bad for posts to get prizes that would have been impossible to write when not coming from an established organization.

How do you feel about posts which would have been almost impossible to write for authors who weren't in some other exceptional circumstance?

For example, during the first month of prize selection, one winner was Adam Gleave, who wrote a great post about deciding what to do with his winnings from the EA Donor Lottery. I'd guess that only someone with unusual financial resources would have been able to make such large donations (and get statements from ALLFED, etc.), which left me uncertain at the time whether Adam's post should have qualified.

The main difference here seems to be that he sacrificed a lot of his free time to conduct research and write a post, but I still expect that other authors with equal willingness to research and write wouldn't have gotten as much attention.

--

On another note, I think that some posts in this category are highly valuable. For example, someone working at an org might write a very detailed post on operations that they couldn't have written without experience running large-scale EA events. If this kind of post wouldn't be written in someone's spare time without incentives (which I know is a big assumption), I'd like to provide those incentives.

Comment by aarongertler on EA Forum Prize: Winners for June 2019 · 2019-07-27T02:21:09.493Z · score: 18 (7 votes) · EA · GW

Ozzie,

Thanks for this feedback! I was thinking about exactly the same issue as I counted the votes and wrote up this post.

--

Back when we were setting up initial rules for the Prize, I wasn’t sure whether to allow posts written on “org time” (that is, by employees of EA organizations who were paid by their employers for Forum work). Eventually, I decided to err on the side of making almost all posts eligible as a starting point, but to keep an eye on which types of posts were winning.

This is the first month (out of eight) that all winning posts have come from employees of EA orgs; since the Prize began in November, roughly half of the winning posts have come from Forum contributors who (as far as I know) weren't employed in direct work at the time, or were writing about subjects unrelated to their direct work. Some of the other half were written by org employees who drew on their work experience, though in those cases I'm not sure whether they were paid to do so (e.g. November's winning post on EAF's hiring process).

This doesn't indicate that posts from employees of EA orgs should necessarily remain in the same category, but I did want to note that this month was anomalous. (We certainly don’t intend to be rating “the top serious EA organization documents.”)

---

Some thoughts on ways we could address this concern:

  1. The comment prize, which we'll be starting up next month, should help us highlight contributions that didn't require as much time to make, and I could imagine scaling it up over time (in the sense of "amount awarded for comments relative to posts"). I noted this in my initial post:
We also hope that a “comment prize” will make it easier to recognize people who contribute their ideas without publishing full-fledged research posts.

  2. Some organizations have been unusually thorough in posting on the Forum, and this is something we'd like to highlight and encourage (whether through a prize or some other means). For example, researchers from Rethink Priorities have spent a lot of additional time formatting posts and responding to comments, rather than only cross-posting research from their website.

  3. It's possible that posts produced by organizations should be in a separate category, though it's tricky to define when this is the case. For example, Open Phil is a very different kind of research organization than a smaller org like ALLFED or AI Impacts, and I’m uncertain how to define people who are freelance researchers working off of a small grant or commission. It’s also hard to tell when something was or was not written on “paid time” by the employee of an EA organization.

Personally, I have a higher bar on voting for posts that come from org employees, but I’ll disclose that I did vote for each of the winning posts this month — I thought that the invertebrate sentience and nuclear risk series were especially outstanding, even by the standards of EA research organizations.

This is something I and the other judges will be discussing in future months, and if you have further thoughts, I’d appreciate hearing them!

Comment by aarongertler on 'Longtermism' · 2019-07-26T07:23:38.323Z · score: 15 (7 votes) · EA · GW

"Update my views of the post" probably wasn't the right phrase to use -- better would be "update my views of whether the post is a good thing to have on the Forum in something like its current form".

In general, I have a strong inclination that people should post content on the Forum if it is related to effective altruism and might be valuable to read for even a small fraction of users. I'm not concerned about too many posts being made (at least at the current level of activity).

I might be concerned if people seem to be putting more time/effort into posts than is warranted by the expected impact of those posts, but I have a high bar to drawing that conclusion; generally, I'd trust the author's judgment of how valuable a thing is for them to publish over my own, especially if they are an expert writing about something on which I am a non-expert.

Even if the information in this post wasn't especially new (I'm not sure which bits have and haven't been discussed elsewhere), I expect it to be helpful to EA-aligned people who find themselves trying to communicate about longtermism with people outside of EA, for some of the reasons Will outlined. I can imagine referring to it as I work on an edition of the EA Newsletter or prepare for an interview with a journalist.

--

Finally, on hyphenation:

a. On at least two occasions in the last two months, I, personally, have had to decide how to spell "longtermism" in something written for a public audience. And while I am an unusual case...

b. ...hyphenation matters! Movements look less professional when they can't decide how to express terms they often use in writing (hyphenation, capitalization, etc.). Something like this makes me say "huh?" before I even start reading the article (written by a critic of a movement in that case, but the general point stands).

These are far from the most important paragraphs ever published on the Forum, but they do take a stand on a point with two reasonable sides and argue convincingly for one of them, in a way that could change how many readers refer to a common term.

Comment by aarongertler on What posts you are planning on writing? · 2019-07-26T07:03:37.131Z · score: 2 (1 votes) · EA · GW

I found saulius' post useful in different ways than Chris Smith's. I especially like that it covers mistakes that seem more "basic" and easier to avoid/correct for. But "The Optimizer's Curse" is also worth looking at.

Comment by aarongertler on 'Longtermism' · 2019-07-26T05:10:56.609Z · score: 26 (14 votes) · EA · GW

I downvoted the above comment, because I think it is more critical than helpful in a way that mildly frustrates me (because I'm not sure quite what you meant, or how to update my views of the post in response to your critique) and seems likely to frustrate the author (for similar reasons).

What is your goal in making points about whether this information is "important to stay up-to-date about" or worth "six paragraphs"?

Do you think this post shouldn't have been published? That it should have been shorter? That it would have been good to include more justification of the content before getting into detail about these definitions?

Comment by aarongertler on 'Longtermism' · 2019-07-26T05:06:27.985Z · score: 25 (11 votes) · EA · GW

Even if the Forum isn't a "well-targeted place" for a certain piece of EA content, it still seems good for things to end up here, because "getting feedback from people who are sympathetic to your goals and have useful background knowledge" is generally a really good thing no matter where you aim to publish something eventually.

Perhaps there will come a time in the future when "longtermism" becomes enough of a buzzword to justify clarification in a mainstream opinion piece or journal article. At that point, it seems good to have a history of discussion behind the term, and ideally one meaning that people in EA already broadly agree upon. ("This hasn't been debated recently" =/= "we all have roughly the same definition that we are happy with".)