Posts

80,000 Hours: Anonymous contributors on EA movement growth 2020-02-18T00:09:58.434Z · score: 30 (12 votes)
EA Organization Updates: January 2020 2020-02-14T06:19:35.194Z · score: 23 (11 votes)
Poverty in Depression-era England: Excerpts from Orwell's "Wigan Pier" 2020-02-12T01:01:42.776Z · score: 15 (4 votes)
Anonymous contributors answer: How honest and candid should high-profile people be? 2020-02-12T00:14:34.254Z · score: 21 (6 votes)
AI Impacts: Historic trends in technological progress 2020-02-12T00:08:21.539Z · score: 52 (20 votes)
Volunteering isn't free 2020-02-04T09:04:26.152Z · score: 33 (20 votes)
80,000 Hours: Ways to be successful that people don't talk about enough 2020-01-31T09:59:02.986Z · score: 11 (5 votes)
EA Forum Prize: Winners for December 2019 2020-01-27T10:33:16.359Z · score: 29 (13 votes)
Lewis Bollard: 10 Years of Progress for Farm Animals 2020-01-24T12:47:21.432Z · score: 23 (9 votes)
EA Organization Updates: December 2019 2020-01-16T11:47:54.077Z · score: 27 (10 votes)
EA Forum Prize: Winners for November 2019 2020-01-16T00:56:19.753Z · score: 26 (8 votes)
Five GCR grants from the Global Challenges Foundation 2020-01-16T00:46:05.580Z · score: 31 (10 votes)
Notes on hiring a copyeditor for CEA 2020-01-09T12:56:37.126Z · score: 88 (44 votes)
Reddit highlight: EA and socialism 2020-01-03T13:46:40.508Z · score: 19 (8 votes)
Purchase fuzzies and utilons separately (Eliezer Yudkowsky) 2019-12-27T02:21:19.723Z · score: 37 (17 votes)
80,000 Hours: Mistakes people make when deciding what work to do 2019-12-27T02:16:46.349Z · score: 11 (6 votes)
Open Philanthropy Staff: Suggestions for Individual Donors (2019) 2019-12-26T17:13:33.894Z · score: 21 (14 votes)
EA Organization Updates: November 2019 2019-12-18T10:39:08.717Z · score: 32 (13 votes)
"Altruism-driven research" (EA meets... plant pathology?) 2019-12-18T02:35:58.886Z · score: 14 (7 votes)
80,000 Hours: Bad habits among people trying to improve the world 2019-12-12T21:04:14.949Z · score: 26 (12 votes)
EA Forum Prize: Winners for October 2019 2019-12-11T10:37:12.132Z · score: 23 (14 votes)
aarongertler's Shortform 2019-11-15T12:40:12.085Z · score: 7 (1 votes)
80,000 Hours: Advice on how to read our advice 2019-11-15T00:00:29.874Z · score: 36 (8 votes)
EA Organization Updates: October 2019 2019-11-14T10:48:54.177Z · score: 33 (15 votes)
80,000 Hours: Before committing to management consulting, consider [other options] 2019-11-12T01:22:33.478Z · score: 14 (3 votes)
EA Leaders Forum: Survey on EA priorities (data and analysis) 2019-11-12T01:14:44.040Z · score: 91 (44 votes)
Wild animal welfare in Hans Christian Andersen (Julia Wise) 2019-11-12T00:42:05.726Z · score: 20 (7 votes)
Forum update: New features (November 2019) 2019-11-09T01:00:50.805Z · score: 57 (24 votes)
EA Forum Prize: Winners for September 2019 2019-10-31T08:01:50.508Z · score: 24 (16 votes)
Reflections on EA Global London 2019 (Mrinank Sharma) 2019-10-29T23:00:11.710Z · score: 24 (9 votes)
How we think about the Forum 2019-10-15T16:24:04.447Z · score: 24 (12 votes)
Who runs the Forum? 2019-10-14T15:35:21.353Z · score: 41 (29 votes)
EA Organization Updates: September 2019 2019-10-10T10:05:47.305Z · score: 33 (12 votes)
List of EA-related email newsletters 2019-10-09T09:51:46.668Z · score: 16 (8 votes)
80,000 Hours: How useful are long-term career plans? 2019-10-09T01:03:42.279Z · score: 11 (4 votes)
EA Handbook 3.0: What content should I include? 2019-09-30T09:17:55.464Z · score: 44 (24 votes)
EA Forum Prize: Winners for August 2019 2019-09-24T03:06:08.264Z · score: 32 (14 votes)
Take 80,000 Hours' Annual Impact Survey 2019-09-18T04:26:40.144Z · score: 13 (4 votes)
Book launch: "Effective Altruism: Philosophical Issues" (Oxford) 2019-09-18T00:23:14.849Z · score: 37 (20 votes)
Forum Update: New Features (September 2019) 2019-09-17T08:43:31.904Z · score: 34 (13 votes)
EA Organization Updates: August 2019 2019-09-12T08:54:50.734Z · score: 25 (12 votes)
EA Forum Prize: Winners for July 2019 2019-08-20T07:09:17.771Z · score: 24 (10 votes)
How do you, personally, experience "EA motivation"? 2019-08-16T10:04:18.156Z · score: 31 (14 votes)
EA Organization Updates: July 2019 2019-08-07T13:27:10.778Z · score: 48 (27 votes)
The Unit of Caring: On "fringe" ideas 2019-08-02T03:56:40.650Z · score: 68 (28 votes)
The EA Holiday Calendar 2019-07-30T09:03:32.033Z · score: 24 (13 votes)
William Rathbone: 19th-century effective altruist? 2019-07-30T06:14:12.215Z · score: 15 (8 votes)
EA Forum Prize: Winners for June 2019 2019-07-25T08:36:56.099Z · score: 32 (14 votes)
Editing available for EA Forum drafts 2019-07-24T05:56:20.445Z · score: 90 (43 votes)
EA Forum Prize: Winners for May 2019 2019-07-12T01:48:57.209Z · score: 25 (9 votes)

Comments

Comment by aarongertler on Shoot Your Shot · 2020-02-20T22:42:16.976Z · score: 3 (2 votes) · EA · GW

Don't worry about "preaching to converts" in your Splash class; I very much doubt many of your students will have any familiarity with EA beyond a passing mention somewhere.

Discussing effective tactics for promoting EA would take a long time. If you want to learn about some things other folks have done, check out the EA Hub's list of resources or the top community posts on the Forum (not everything at that link will be about promotion, but if you skip around you'll find some relevant articles).

With cause prioritization (and other topics), you'll probably be fine as long as you avoid negativity. My framing is never "don't work on X"; instead, it's (to paraphrase): "what are you hoping to get by working on X? Does it seem to be working? What led you to working on X rather than other things in the same general area?" My overall message is "everyone sees the world a little differently, but for any way you see the world, there will be some strategies for helping that are likely to work out better than others. Cause prioritization is about figuring out the best thing you can be doing, according to your values."

Prioritization isn't exclusive to EA: Other entities do it all the time based on their own values (e.g. environmental agencies trying to weigh policies by how they affect the lives of citizens, but not necessarily people in other countries). EA just has fewer limits on the sorts of ideas it considers, and on which beings we care about helping.

(This is a very rough perspective, and belongs to me rather than my employer, but the point of "work with people's values, don't tell them to value other things" stands.)

Comment by aarongertler on Chloramphenicol as intervention in heart attacks · 2020-02-20T02:31:30.455Z · score: 3 (2 votes) · EA · GW

On the "20 QALY per application" figure, I have some questions:

  • What fraction of heart attack patients, if saved in this way, will not have another lethal heart attack within the next few years? 
  • What fraction of such patients are already very old and suffering from other health problems? 
  • To what extent might a nonlethal heart attack still lead to vulnerability and muscle weakness later on, making someone more susceptible to death?

I wouldn't be surprised if the true number were more like 2 QALY/application rather than 20 (still not a bad thing to try at that point if you think the other numbers fit; just wanted to call out this particular issue).

Comment by aarongertler on Chloramphenicol as intervention in heart attacks · 2020-02-20T02:27:58.788Z · score: 2 (1 votes) · EA · GW

I wish that the post had been more clear about this. It could still be promising to put together a human trial, of course, but success is far from certain.

Comment by aarongertler on Shoot Your Shot · 2020-02-20T02:25:21.866Z · score: 4 (2 votes) · EA · GW

Having taught a couple of Splash classes, and having read through SHIC's suspension post (which discusses their struggles in doing impactful work with pre-college students), I wouldn't expect the class to lead to much impact. However, it sounds like an opportunity to practice discussing EA in front of a forgiving audience, and might inspire a couple of students down the line; good luck!

While effective altruism (just like every social movement ever) has critics, it's a relatively safe thing to advocate for; almost every mainstream article/video/etc. published about it nowadays is positive, and in my experience, people almost always think it's a good idea when I present it as "trying to do things that will really help people, rather than ignoring their needs in favor of what we think will help". 

You might be interested in CEA's list of common objections to EA and how we respond to them. It's a bit out-of-date, but I still hear all these objections on Twitter, so I imagine you could hear them as well.

Comment by aarongertler on How much will local/university groups benefit from targeted EA content creation? · 2020-02-20T02:16:14.063Z · score: 3 (2 votes) · EA · GW

Meta-comment: I hope that as the Forum becomes more popular, it becomes an easy way for group organizers and other people who run events to ask questions like this (and be directed to EA Hub/other resources). The movement as a whole can save a lot of effort if we get used to thinking: "There's a good chance someone did/tried this; I'll ask a Forum question to see if anyone can help me avoid reinventing the wheel."

(Of course, Facebook groups are also great for this; I just want someone's first instinct in cases where this much time is at stake to be "huh, let's see who's done this before", whichever online communities they are a part of.)

Comment by aarongertler on Looking for Research Participants · 2020-02-19T12:09:20.125Z · score: 2 (1 votes) · EA · GW

Are you open to conducting remote interviews? If so, I'd be interested: contact me at aaron@effectivealtruism.org.

Comment by aarongertler on Biggest Biosecurity Threat? Antibiotic Resistance · 2020-02-18T23:28:40.190Z · score: 2 (3 votes) · EA · GW

Welcome to the Forum! Thanks for asking an interesting question.

I'm not aware of any EA funding going toward antibiotic resistance, though it was the subject of an Open Philanthropy shallow cause writeup (and there may be funding I don't know about). 

Also, I'd recommend you include links to the papers you are citing to make it easier for people to follow your argument (you can highlight text in the Forum's editor to get a "link" button that lets you add a URL).

Finally, while I don't know much about this topic in particular, "1 in 2 Americans don't know how to use antibiotics appropriately" plus "1 in 2 cases of resistance come about as the result of antibiotic misuse" doesn't seem to necessarily imply that education is the best way to respond to AR issues. 

For example, we could change the way doctors prescribe antibiotics to make misuse less likely without changing the way we educate patients (see this example from the UK's Behavioural Insights team). We may also wind up focusing on resistance that comes from sources other than misuse, if there are effective solutions in those areas. Sometimes, the most effective way to work on a problem doesn't involve tackling its biggest sub-problem.

Comment by aarongertler on Illegible impact is still impact · 2020-02-18T08:52:25.422Z · score: 14 (4 votes) · EA · GW

This is a really good post! I often have difficulty trying to estimate my own illegible impact or that of other people. Here are some thoughts on the situation in general:

  1. If people took more time to thank others who have helped them, it would increase the amount of legible impact in the movement. I was startled to hear someone attribute their taking a job to me more than a year after the fact; this led me to update appropriately on the value of a prior project, and other projects of that type.
  2. It would be cool if people developed a habit of asking other people about impact they think they'd had. I'd love to see EA foster a culture where Bob can ask Alice "did our conversation last month have any detectable impact on you?", and Alice can answer truthfully without hurting Bob's feelings. (80,000 Hours and CFAR both seem to do a good job of hunting for evidence of illegible impact, though I'm concerned about the incentive fundraising organizations have to interpret this evidence in a way that overestimates their impact.)
  3. Small actions matter!
    1. I really appreciate people who take the time to vote on the Forum; very few posts get more than 50 votes, and many excellent posts only get a dozen or so. The more people vote, the better our sorting algorithm performs, and the more knowledge we (CEA) have about the types of content people find valuable. We have lots of other ways of trying to understand the Forum, of course, but data is data!
    2. Likewise, I'm really happy whenever I see someone provide useful information about EA to another person on Twitter or Reddit, whether that's "you might find this concept interesting" or "this claim you made about EA doesn't seem right, here's the best source I could find". If EA-affiliated people are reliably kind and helpful in various corners of the internet, this seems likely to contribute both to movement growth and to a stronger reputation for EA among people who prefer kind, helpful communities (these are often very good people to recruit).

Comment by aarongertler on How do you feel about the main EA facebook group? · 2020-02-15T01:51:23.357Z · score: 4 (2 votes) · EA · GW

You may be missing a lot of good comments on YouTube videos (at least, if you watch entertaining content that gets a lot of upvotes). Now that comments are filtered by a sort of "magic algorithm" (which I assume is similar to the Forum's -- recency and upvotes), top comments on positive/entertaining videos are regularly very funny and occasionally provide interesting background context.

That said, I can't speak to intellectual content, and I'm sure that "controversial content" comments are still terrible, because they lead to more upvoting of negative content that one side or the other wants to support.

Comment by aarongertler on How do you feel about the main EA facebook group? · 2020-02-15T01:48:24.169Z · score: 5 (3 votes) · EA · GW

The group seems very reasonable as a default place for people to be regularly reminded of EA topics as they go about their day. 

I can't think of a single large (5000+ people) Facebook group that regularly features interesting original discussions that aren't intruded on by aggression, trolling, memes, etc. In that context, I'm glad that the Facebook group has:

  • A good selection of top-level posts, thanks to the efforts of moderators
  • A much better tone of discussion than most large groups (it's very rare to see someone openly insult someone else without a moderator stepping in, and I saw quick action the one time I reported an aggressive comment)
  • A good amount of reasonable advice being given in response to quick questions (e.g. how to best persuade a company to add a charity to their matching program). Not all questions get good answers, but few seem to get bad answers.

I think that an outside observer who knew nothing about EA would look at the group and at least think "okay, these seem like well-meaning people who run a lot of different projects". If they thought the discussion was especially aggressive or low-quality in an epistemic sense, and saw this as a reason to think poorly of EA, I'd question their ability to take Internet norms into account.

(That said, the discussion quality seems much lower than on the Forum, in smaller EA Facebook groups, or on Discord, and I understand why someone would feel dismayed at the thought of how much better it could theoretically be.)

Comment by aarongertler on How do you feel about the main EA facebook group? · 2020-02-15T01:41:20.417Z · score: 2 (1 votes) · EA · GW

By "interesting posts", do you mean original writing that hasn't been posted elsewhere first?

Comment by aarongertler on Scientists’ attitudes towards improving the welfare of animals in the wild: a qualitative study · 2020-02-15T01:05:17.443Z · score: 4 (3 votes) · EA · GW

Thanks for sharing this! I don't think we have enough knowledge about the way experts in relevant fields (with no EA experience) react to some of the more unusual causes promoted by EA-aligned organizations. Studies like this, even with limited sample size, seem useful. 

If anyone seeing this comment read the summary and found it at least mildly interesting, I recommend at least skimming through the full paper so that you can see quotes from the qualitative interviews. Those quotes capture elements of the interviewees' thinking that are difficult to summarize.

Comment by aarongertler on Clean cookstoves may be competitive with GiveWell-recommended charities · 2020-02-15T00:54:18.190Z · score: 7 (4 votes) · EA · GW

Thanks for posting this shallow review! I strong-upvoted because I think it's really good for us to get more data on the Forum, even if it's shallow and flawed, as long as the author makes an effort to identify the flaws.

If you had a decent researcher who was willing to devote an extra week of work to the project (say, 25 focused hours plus check-ins with you), what are the questions you'd want them to cover?

Comment by aarongertler on Short-Term AI Alignment as a Priority Cause · 2020-02-15T00:25:43.635Z · score: 2 (1 votes) · EA · GW

I didn't see any mentions of existing organizations that work on recommender alignment (even if they don't use the "short-term aligned AI" framing). It sounds as though many of the goals/benefits you discuss here could come from tweaks to existing algorithms that needn't be connected to AI alignment (if Facebook wanted to focus on making users healthier, would it need "alignment" to do so?).

What do you think of the goals of existing "recommender alignment" organizations, like the Center for Humane Technology? They are annoyingly vague about their goals, but this suggestion sheet lays out some of what they care about: Users being able to focus, not being stressed, etc.

Comment by aarongertler on Founders Pledge Climate & Lifestyle Report · 2020-02-15T00:03:34.728Z · score: 4 (2 votes) · EA · GW

Thank you for sharing this to the Forum! I especially appreciate the "what we are not saying" section, which covers all the most common concerns I've seen around discussion of the topic. The frame of "expanding actions, rather than negating responsibility" is one I can imagine using when people ask about (EA + climate change) in the future.

Comment by aarongertler on EA Organization Updates: December 2019 · 2020-02-13T18:15:56.926Z · score: 2 (1 votes) · EA · GW

The Google Doc changes every month as orgs overwrite their old updates. I just copy-and-paste when it's done.  ¯\_(ツ)_/¯ 

Hopefully, the heading just remains unchanged from now on!

Comment by aarongertler on Prioritizing among the Sustainable Development Goals · 2020-02-12T21:45:53.727Z · score: 3 (2 votes) · EA · GW

Thanks for sharing this! I appreciate seeing perspectives on cause prioritization from people who know the global development space well, even if the models/principles they use to set priorities differ from those most commonly used in EA. (See also the Copenhagen Consensus.)

Are you aware of any detailed responses from individual experts on how they actually chose their priority rankings?

Comment by aarongertler on Prioritizing among the Sustainable Development Goals · 2020-02-12T21:45:04.347Z · score: 4 (2 votes) · EA · GW

Note that goals around "reducing poverty" and "eliminating extreme poverty" are ranked much more highly than "boosting per capita GDP." Many who promote GDP growth would argue that such growth is highly correlated with reductions in poverty.

Comment by aarongertler on Poll - what research questions do you want me to investigate while I'm in Africa? · 2020-02-12T21:40:16.459Z · score: 7 (4 votes) · EA · GW

Have you read through GiveWell's site visit reports to get a sense for how they've done similar work before?

Also, interviewing people about difficult parts of their lives seems like it could be a negative experience for both parties without some amount of training; do you have experience in a relevant role (therapy, social work, etc.)?

Comment by aarongertler on Seeking a CEO for new x-risk funding charity in the UK · 2020-02-12T21:36:57.136Z · score: 4 (2 votes) · EA · GW

This seems common for lower-level roles, but I don't know that I've seen it for CEO-type roles. When I think about SF tech companies and the amount they pay to CEOs, the idea of a "referral bonus" of the usual size seems "of the wrong scale": "Thanks for helping us find this friend of yours who was worth millions to us. Here's $5,000 for you, friend of an elite executive." 

(Compare to the standard "your old roommate actually was good at programming! Here's a small bonus," which is how I picture internal recruiting processes [e.g. Google asking employees to help them find new developers].)

But of course, these firms will also shell out tens of thousands of dollars to recruiting firms for executive searches, so a referral bonus doesn't seem like an unreasonable expense in that context.

Comment by aarongertler on The Intellectual and Moral Decline in Academic Research · 2020-02-12T20:42:15.059Z · score: 3 (2 votes) · EA · GW

I agree with this comment and retracted my upvote for the same reason, though I thought the rest of Tom's comment was quite reasonable (see Alexey Guzey for some examples of quiet scientific progress).

Comment by aarongertler on EA Organization Updates: December 2019 · 2020-02-12T20:21:43.777Z · score: 3 (2 votes) · EA · GW

Thanks! Organizations submit their own names, so I hadn't realized this was a mistake, but I'm glad to have the proper title.

Comment by aarongertler on Poverty in Depression-era England: Excerpts from Orwell's "Wigan Pier" · 2020-02-12T01:50:32.667Z · score: 3 (2 votes) · EA · GW

Added a note, thanks!

Comment by aarongertler on “The Vulnerable World Hypothesis” (Nick Bostrom’s new paper) · 2020-02-11T04:52:10.954Z · score: 4 (2 votes) · EA · GW

Your link is broken, but it looks like the paper came out in September 2019, well after my comment (though my reservations still apply if those sections of the paper were unchanged).

Thanks for the update on media reporting! Vox also did a long piece on the working-paper version in Future Perfect, but with the nuance and understanding of EA that one would expect from Kelsey Piper.

Comment by aarongertler on What posts you are planning on writing? · 2020-02-07T10:13:35.331Z · score: 2 (1 votes) · EA · GW

Didn't write it, but have two-thirds of a draft lying around to finish someday.

Leading a group is a good signal, but for most jobs, I think other qualifications will also be important (though these could include "having a strong application and doing well on work tests"). If you're trying to do something that makes use of your econ knowledge (rather than your ops/organizing ability or general research skills), competing with PhDs will be tough.

I'm an unusual case, because I went to a one-off retreat for people interested in ops work at a time lots of orgs were hiring at once -- it was a bit like a "job fair". Had I not gone there, I'd have just kept checking the 80K job board, the "Effective Altruism Job Postings" Facebook group, and the websites of a few orgs I liked (if I'd seen that their jobs weren't being added to the board).

Comment by aarongertler on aarongertler's Shortform · 2020-02-06T12:43:20.043Z · score: 2 (1 votes) · EA · GW

The deliberate anonymity point is a good one. The ideal would be a distinct anonymous username the person doesn't use elsewhere, but this particular issue isn't very important in any case.

Comment by aarongertler on aarongertler's Shortform · 2020-02-06T02:59:31.310Z · score: 2 (1 votes) · EA · GW

This feels like the next-worst option to me. I think I'd find it easier to remember whether "fluttershy_forever" or "UtilityMonster" said something than to remember whether "AnonymousDog" or "AnonymousMouse" said it.

Comment by aarongertler on aarongertler's Shortform · 2020-02-06T02:58:19.005Z · score: 8 (6 votes) · EA · GW

Another brief note on usernames:

Epistemic status: Moderately confident that this is mildly valuable

It's totally fine to use a pseudonym on the Forum. 

However, if you chose a pseudonym for a reason other than "I actively want to not be identifiable" (e.g. "I copied over my Reddit username without giving it too much thought"), I recommend using your real name on the Forum.

If you want to change your name, just PM or email me (aaron.gertler@centreforeffectivealtruism.org) with your current username and the one you'd like to use.

Reasons to do this:

  • Real names make it easier for someone to track your writing/ideas across multiple platforms ("where have I seen this name before? Oh, yeah! I had a good Facebook exchange with them last year.")
  • There's a higher chance that people will recognize you at meetups, conferences, etc. This leads to more good conversations!
  • Aesthetically, I think it's nice if the Forum feels like an extension of the real world where people discuss ways to improve that world. Real names help with that. 
    • "Joe, Sarah, and Vijay are discussing how to run a good conference" has a different feel than "fluttershy_forever, UtilityMonster, and AnonymousEA64 are discussing how to run a good conference".

Some of these reasons won't apply if you have a well-known pseudonym you've used for a while, but I still think using a real name is worth considering.

Comment by aarongertler on aarongertler's Shortform · 2020-02-06T02:54:15.227Z · score: 3 (2 votes) · EA · GW

Brief note on usernames:

Epistemic status: Kidding around, but also serious

If you want to create an account without using your name, I recommend choosing a distinctive username that people can easily refer to, rather than some variant on "anonymous_user".

Among usernames with 50+ karma on the Forum, we have:

  • AnonymousEAForumAccount
  • anonymous_ea
  • anonymousthrowaway
  • anonymoose

I'm pretty sure I've seen at least one comment back-and-forth between two accounts with this kind of name. It's a bit much :-P

Comment by aarongertler on Linch's Shortform · 2020-02-04T06:03:02.574Z · score: 2 (1 votes) · EA · GW

Makes sense, thanks! The use of "doubling GDP is so massive that..." made me think that you were taking that as given in this example, but worrying that bad things could result from GDP-doubling that justified conservatism. That was certainly only one of a few possible interpretations; I jumped too easily to conclusions.

Comment by aarongertler on When to post here, vs to LessWrong, vs to both? · 2020-02-03T12:52:19.768Z · score: 6 (4 votes) · EA · GW

I'm not totally familiar with LW's content rules, but as for the Forum: You can post anything that follows our rules (don't be mean, don't hurt people, promote good discourse). 

At present, we don't categorize posts as "Frontpage" or "Community" unless they have some clear relevance to EA, but that can be as easy as taking your post about "how to think good" and adding a few sentences at the beginning to explain its relevance to one or more issues/areas/open questions within EA.

Comment by aarongertler on When to post here, vs to LessWrong, vs to both? · 2020-02-03T12:48:38.177Z · score: 5 (3 votes) · EA · GW

Moral uncertainty material definitely fits the EA Forum, and so do posts about applying general decision-making practices to altruism (we have lots of those on the Forum already). We even have a good number of posts written by people in the EA-sphere that would be equally at home on the Forum or LW (one example).

Comment by aarongertler on Ramiro's Shortform · 2020-01-31T10:02:30.815Z · score: 3 (2 votes) · EA · GW

This suggestion is worth posting in other places. You could consider emailing places like Forethought or FHI that have a lot of philosophers, or posting in FB groups like "EA Fundamental Research" or "EA Volunteering".

Comment by aarongertler on EA Forum Prize: Winners for November 2019 · 2020-01-31T05:48:10.875Z · score: 3 (2 votes) · EA · GW

We currently have an "All Comments" page, but it can't yet be filtered or sorted. To my knowledge, we aren't planning to build those features for now, but I've passed your feedback along to our tech team as a potential future priority.

Comment by aarongertler on It's OK to feed stray cats · 2020-01-30T14:06:17.137Z · score: 4 (3 votes) · EA · GW

If you think a community has a "local kindness gap" that you can fill, and that gap seems to be reducing how well that community is doing at achieving its goals, it's reasonable to think that being a kind person in that community will end up doing more good than you'd expect to do if you were being kind in a random other community.

That said, there are also downsides to strengthening bubbles, and I'd expect (quick thoughts, haven't pondered this much) that a "locally kind person with EA inclinations" would be most effective in a place that has a small/new EA community, where the marginal value of extra (dinner hosting/event organizing/grabbing coffee with new arrivals) seems higher than in a place where there are already lots of events and chances for new folks to get involved.

Comment by aarongertler on Linch's Shortform · 2020-01-30T13:48:53.569Z · score: 2 (1 votes) · EA · GW

If you email this to him, maybe adding a bit more polish, I'd give ~40% odds he'll reply on his blog, given how much he loves to respond to critics who take his work seriously.

> It's not hard for me to imagine situations bad enough to be worse than doubling GDP is good

I actually find this very difficult without envisioning extreme scenarios (e.g. a dark-Hansonian world of productive-but-dissatisfied ems). Almost any situation with enough disutility to counter GDP doubling seems like it would, paradoxically, involve conditions that would reduce GDP (war, large-scale civil unrest, huge tax increases to support a bigger welfare state).

Could you give an example or two of situations that would fit your statement here?

Comment by aarongertler on evelynciara's Shortform · 2020-01-30T13:43:13.275Z · score: 3 (2 votes) · EA · GW

Interesting op-ed! I wonder to what extent these issues are present in work being done by EA-endorsed global health charities; my impression is that almost all of their work happens outside of the conflict zones where some of these privacy concerns are especially potent. It also seems like these charities are very interested in reaching high levels of usage/local acceptance, and would be unlikely to adopt policies that deter recipients unless fraud concerns were very strong. But I don't know all the Top Charities well enough to be confident of their policies in this area.

This would be a question worth asking on one of GiveWell's occasional Open Threads. And if you ask it on Rob Mather's AMA, you'll learn how AMF thinks about these things (given Rob's response times, possibly within a day).

Comment by aarongertler on Seeking Advice: Arab EA · 2020-01-30T13:06:05.765Z · score: 20 (7 votes) · EA · GW

Due to concerns noted by other commenters, I've temporarily changed the author's name. (I think it's likely that the name was a pseudonym that would leave them difficult to identify, but I wanted to be careful nonetheless.)

I've contacted the author to figure out how we should proceed from here.

(Update: I've changed the name to a new pseudonym the author requested.)

Comment by aarongertler on RyanCarey's Shortform · 2020-01-29T13:13:35.696Z · score: 2 (1 votes) · EA · GW

I agree with this philosophy, but remain unsure about the extent to which strong material appears on various platforms (I sometimes do reach out to people who have written good blog posts or Facebook posts to send my regards and invite them to cross-post; this is a big part of why Ben Kuhn's recent posts have appeared on the Forum, and one of those did win a prize). 

Aside from 1000-person-plus groups like "Effective Altruism" and "EA Hangout", are there any Facebook groups that you think regularly feature strong contributions? (I've seen plenty of good posts come out of smaller groups, but given the sheer number of groups, I doubt that the list of those I check includes everything it should.)

*****

I follow all the Twitter accounts you mentioned. While I can't think of recent top-level Tweets from those accounts that feel like good Prize candidates, I think the Tom Inglesby thread is great!

One benefit of the Forum Prize is that it (ideally) incentivizes people to come and post things on the Forum, and to put more effort into producing really strong posts. It also reaches people who deliberately worked to contribute to the community. If someone like Tom Inglesby were suddenly offered, say, $200 for writing a great Twitter thread, it's very unclear to me whether this would lead to any change in his behavior (and it might come across as very odd). Perhaps skipping the money, and simply cross-posting the thread and granting some kind of honorary award, would be better.

Another benefit: The Forum is centralized, and it's easy for judges to see every post. If someone wants to Tweet about EA and they aren't already a central figure, we might have a hard time finding their material (and we're much more likely to spot, by happenstance, posts made by people who have lots of followers).

That said, there's merit to thinking about ways we can reach out to send strong complimentary signals to people who produce EA-relevant things even if they're unaware of the movement's existence. Thanks for these suggestions!

Comment by aarongertler on Ramiro's Shortform · 2020-01-29T10:34:10.133Z · score: 2 (1 votes) · EA · GW

I'm not familiar with academic philosophy/how Philpapers is typically used. Can you say more about what you'd expect the positive outcome(s) to be if EAs volunteer to help out? I can imagine that this might improve the quality of papers on EA-adjacent topics, but your mention of volunteers always being up-to-date on the literature makes me wonder if you're also thinking of beneficial learning for the volunteers themselves.

Comment by aarongertler on RyanCarey's Shortform · 2020-01-29T10:23:40.918Z · score: 2 (1 votes) · EA · GW

I read every Tweet that uses the phrase "effective altruism" or "#effectivealtruism". I don't think there are many EA-themed Tweets that make novel points, rather than linking to existing material. I could easily be missing Tweets that don't have these keywords, though. Are there any EA-themed Tweets you're thinking of that really stood out as being good?

Comment by aarongertler on AMA: We are Jon and Kathryn. We work with The Life You Can Save. Ask us anything! · 2020-01-27T10:49:29.130Z · score: 4 (2 votes) · EA · GW

How does TLYCS evaluate its own impact as a "meta" charity? If you imagine the sentence "we expect that giving $10 to us generates $X in giving across our portfolio of charities/other EA-aligned charities", what would X be, and how did you come up with that number?

Sub-questions related to this (no need to address them all!):

  • Could you describe how you decide whether to attribute giving/other positive events to your own counterfactual influence?
  • How (if at all) do you track later giving by participants in your Giving Games?
Comment by aarongertler on Doing good is as good as it ever was · 2020-01-27T08:31:53.963Z · score: 6 (5 votes) · EA · GW

Thanks for sharing more details on your perspective.

For context, I've been following GiveWell since 2012 and took the Giving What We Can pledge + started Yale's EA group in 2014. But I wasn't often in touch with people who worked at EA orgs until 2017.

My job puts me in touch with a lot of new people (e.g. first-time Forum posters, people looking to get into EA work), and I find them to be roughly as enthusiastic as the student group members I've worked with. But that's often tempered by a kind of conservatism that seems to come from EA messaging -- they're more concerned about portraying ideas poorly, accidentally causing harm through their work, etc. 

This may apply less to more experienced people, though I wonder how much of that feeling of "insufficiency" is really a deeper uncertainty about whether the person in question is focusing on the right things, given the number of new causes and ways of thinking about EV that have become popular since the early years.

Overall, I think you're better-positioned to make this evaluation than I am, and I'm really glad that this post was written.

Comment by aarongertler on Khorton's Shortform · 2020-01-25T06:29:58.524Z · score: 4 (2 votes) · EA · GW

I don't think there's anything necessary or inevitable about it! My sentiments reflect things I've seen other people say (e.g. "I don't know if I count as an 'effective altruist', I'm new here/don't have belief X"), but how people feel about this and other identity questions is (of course) all over the map. And as I said, I have no problem with anyone referring to themselves as an effective altruist -- I just don't have a problem with the opposite, either.

To use the church analogy: If some people at a church call themselves "Christians", others "Southern Baptists", others "religious seekers", others "spiritual", and still others "agnostic/uncertain", I wouldn't expect that to make things less comfortable for newcomers. (Though attending Unitarian church as a kid might have left me biased in this area!) 

I agree that there are many reasons someone might feel uncomfortable at a conference or community event, and I think we both see the particular question of when to use "effective altruist" as just one tiny facet of community cohesion.

Comment by aarongertler on Is fundraising through my hobbies an effective use of my time? · 2020-01-24T00:14:24.107Z · score: 3 (2 votes) · EA · GW

As someone who returned to a game from childhood and then began to stream it, I recommend the strategy of "start and see how you feel". Feeling obligated to do something regularly can suck the joy out of it if you aren't careful, and it would be really sad for that to happen with, of all things, video games.

 

Comment by aarongertler on Khorton's Shortform · 2020-01-23T10:46:22.867Z · score: 7 (5 votes) · EA · GW

My perspective (which may not differ too much from yours -- just thinking out loud, Shortform-style): 

I try to avoid using "effective altruist" as a noun for what I think of as "members of the EA community" or "people interested in effective giving/work", because I want the movement to feel very open to people who aren't ready to label themselves in that way.*

For example:

  • I like thinking of EA Global as "a conference for people who share a small set of common principles and do a wide variety of different things that they believe to be aligned with those principles", rather than "a conference for people who think of themselves as effective altruists". If you come to our conference regularly, I default to seeing you as a member of our community unless you tell me otherwise, but I don't default to seeing you as an "effective altruist". 
  • If you have strong and well-researched views on global health and development, I'd love to have you at my EA meetup even if you're not very interested in the EA movement.

I support anyone who wants to identify themselves as an effective altruist, and I'm comfortable referring to myself as such, but I don't feel any desire to push people toward adopting that term if their inclination is to answer "are you an EA?" by talking about their values and goals, rather than their group affiliation.

*There's also the tricky bit where calling oneself "effective" could be taken to indicate that you're relatively confident that you're having a lot of impact compared to your available resources, which many people in the community aren't, especially if they focus on more exploratory work/cause areas.

Comment by aarongertler on Doing good is as good as it ever was · 2020-01-23T10:35:39.692Z · score: 20 (8 votes) · EA · GW

Unfortunately, some people in the EA community don’t feel as happy about the amount of good they can do as they did in the past. This is true even when the amount of good they are doing or can expect to do hasn’t decreased [...] I am not sure how to revert this adaptation on a community wide level. 

What makes you think that this feeling has become at least somewhat prevalent within the community, beyond one or two people? Just personal experience?

Ordinarily, I'd expect to see a "baseline" where some people feel happier/more motivated over time, others feel less happy/motivated, and the end result is something of a wash. I read a lot of EA material and talk to a lot of people, and I haven't gotten the sense that people are less motivated now, but my impressions could differ from yours for many reasons, including:

  • We're just talking to different people
  • I'm getting the wrong impression of people I talk to
  • I didn't see data from a survey or something like that
  • There are confounding effects that disguise "feeling less happy about one's potential to do good" (e.g. "feeling more happy about being part of the EA community as it grows and matures")
Comment by aarongertler on Is fundraising through my hobbies an effective use of my time? · 2020-01-23T10:28:02.602Z · score: 2 (1 votes) · EA · GW

Making money from Twitch or blogging is very difficult. I think you'll enjoy the process of blogging/streaming a lot more if you aren't doing it with revenue in mind, at least until you reach a reasonable following.

For perspective: Twitch streamers make ~$3.50/month/subscriber. If your job pays, say, $50/hour, you'd need 100 Twitch subscribers (which I'd guess would take months to accomplish even if your stream is polished and highly watchable from the beginning) to pull in revenue equivalent to working seven more hours each month. 

(Of course, you may not be able to "add hours" in that fashion, but I still find the comparison helpful -- and you may be able to do some freelance consulting or something like that.)
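The comparison above can be sketched as a quick back-of-the-envelope calculation. All numbers here are the illustrative assumptions from the comment (a rough ~$3.50/month streamer cut per subscriber, a hypothetical $50/hour job), not real Twitch payout data:

```python
# Back-of-the-envelope: Twitch subscription revenue vs. equivalent work hours.
# All inputs are illustrative assumptions, not actual payout figures.
revenue_per_sub = 3.50   # USD per subscriber per month (rough streamer cut)
hourly_wage = 50.0       # hypothetical day-job rate, USD/hour
subscribers = 100

monthly_revenue = subscribers * revenue_per_sub   # 350.0
equivalent_hours = monthly_revenue / hourly_wage  # 7.0

print(f"{subscribers} subs -> ${monthly_revenue:.0f}/month, "
      f"about {equivalent_hours:.1f} hours of paid work")
```

In other words, even a fairly successful stream of ~100 subscribers only replaces about seven hours of work per month at that wage, which is the crux of the comparison.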

This seems like a classic case of it being okay to have more than one goal. Your hobbies can be hobbies without having to be impactful; your work, if it is your primary source of impact, seems like a better place to focus if you want to boost that impact.

*****

All that aside, if you do build a strong following on a blog or stream, that can be a good opportunity to advertise effective giving, and it probably won't hurt donations if you mention that they go toward really good charities.* I've considered doing this if I ever open my own Twitch stream up for donations.

*...although, come to think of it, many people probably donate to streamers/bloggers in order to support their work/the cool person behind it. So perhaps the charity angle would hurt more than it helped?

Comment by aarongertler on Should EAs be more welcoming to thoughtful and aligned Republicans? · 2020-01-23T10:17:59.296Z · score: 4 (3 votes) · EA · GW

What leads you to think that American and British numbers are so different? Have you heard many EA-aligned Brits express support for the Conservatives, particularly across multiple election cycles? Or is this mostly a guess based on British Conservatives being (generally) less right-leaning than American Republicans?

Comment by aarongertler on Should EAs be more welcoming to thoughtful and aligned Republicans? · 2020-01-23T10:15:51.322Z · score: 5 (3 votes) · EA · GW

I didn't see a pingback on Ozy's post about being more welcoming to conservatives, which leads me to think it wasn't linked here, but many of Ozy's points seem relevant!