Comment by aarongertler on Why doesn't the EA forum have curated posts or sequences? · 2019-03-24T21:58:19.003Z · score: 2 (1 votes) · EA · GW

I'm sorry if my framing was misleading: When this feature goes live on the Forum, other users will be able to use it freely. CEA still wants to have its own "collections" be as close to "definitive" as we can reasonably get, with occasional updates/added material.

Meanwhile, until the feature goes live, I'm considering ways to more reliably expose Forum visitors to collections of introductory material that already exist, like the material compiled on EA.org. Maybe a pinned post, or maybe a page that shows by default to non-logged-in users; that's still in the works.

Comment by aarongertler on The Home Base of EA · 2019-03-23T03:16:48.642Z · score: 4 (2 votes) · EA · GW

I really liked the visual/story description you gave of what joining a group could look like, and I appreciate how memorable an idea can be when presented in that style. In that story, I also recognized the way I've felt in many of my interactions with the EA community thus far, which makes me wonder whether I've gotten a skewed sense of what "most EA circles" spend time on.

I've been a part of four different EA groups, three of which were more focused on social activity than anything cerebral (Madison, San Diego, Yale). The exception (EA Epic, a corporate group) had members who lived far apart, mostly existed during a Wisconsin fall/winter, and always met after workdays, all of which made planning social activities a bit harder. But my general sense is that most EA groups actually are fairly social/inclusive in the way you propose.

(This may be part of why we're seen as quite welcoming, though survey bias is likely a stronger factor in that case.)

How much time groups spend on cerebral/meritocratic vs. social/inclusive activities might be a good thing to figure out through the EA Survey; I'll suggest it as a potential topic for this year.

Comment by aarongertler on EA Survey 2018 Series: How welcoming is EA? · 2019-03-23T03:14:41.453Z · score: 4 (2 votes) · EA · GW

Note: I don't know very much about mental health, and the first two paragraphs of this comment are highly speculative.

That would be my theory, though I might not use the word "sensitive". I'd think that part of the effect (probably most of it) has something to do with lower average happiness and/or higher rates of depression/anxiety among people who prioritize mental health.

I'd guess that people who strongly support that cause are more likely to have direct experience with mental health issues than other people in EA. Having a lower level of happiness/life satisfaction could then translate into generally lower "scores" on surveys asking about many different positive feelings, including "how welcome you feel".

Of course, mental health isn't a very well-supported cause area within EA, so it could also be the case that people who favor it have a hard time finding other people in EA who share their level of support. It's probably much easier to find someone who knows a lot about animal advocacy at an EA event than to find someone who knows a lot about mental health as a cause area, and 1-on-1 conversations are a big driver of "feeling welcome".

(Anecdotally, experiencing intermittent mild-to-moderate depression over the last few years seems to have made me more likely to read about EA work in mental health. Empathy tends to influence the causes to which I am emotionally drawn, inside or outside of EA.)

Comment by aarongertler on Why doesn't the EA forum have curated posts or sequences? · 2019-03-22T01:51:33.098Z · score: 6 (5 votes) · EA · GW

My intuition, having seen proposals from people both inside and outside of CEA, is that this collation will almost certainly take longer than a week or two:

  • A higher standard than "broadly acceptable" seems important, since whatever posts are chosen will be seen as having CEA's endorsement (assuming CEA is the one doing the collation). A few critics can contribute a lot of negative public feedback, and even a single unfortunate line in a curated post may cause problems later.
  • I also think there's a lot of value to publishing a really good collection the first time around:
    • Making frequent revisions to a "curated" collection of posts makes it look a lot less curated, and removes from the public eye comments that authors may have worked on assuming they'd stick around.
    • It's also not great if Post A is chosen for curation despite Post B being a much stronger take on the same subject; assembling a collection of posts that are roughly the best posts on their respective topics takes a lot of experience with EA content and consultation with other experienced people (no one has read everything, and even people who've read almost everything may differ in which pieces they consider "best").

That said, the task is doable, and I'm consulting with other CEA staff who work on the Forum to draft a top-level answer about our plans for this feature.

Comment by aarongertler on I'll Fund You to Give Away 'Doing Good Better' - Surprisingly Effective? · 2019-03-21T23:48:35.421Z · score: 3 (2 votes) · EA · GW

I like that wording, and don't have any changes to suggest.

Comment by aarongertler on Why doesn't the EA forum have curated posts or sequences? · 2019-03-21T23:43:47.833Z · score: 7 (2 votes) · EA · GW

Here's the post I believe Yannick was thinking of. (Find the phrase "core series of posts".)

This is still something we plan to do in the future; I'm consulting with other CEA staff who work on the Forum to draft a top-level answer to Richard's question.

Comment by aarongertler on Request for comments: EA Projects evaluation platform · 2019-03-21T07:10:22.397Z · score: 16 (7 votes) · EA · GW

I share Habryka's concern about the complexity of the project; each step clearly has a useful purpose, but adding more steps to a process still tends to make it harder to finish in a reasonable amount of time. I think this system could work, but I also like the idea of running a quick, informal test of a simpler system to see what happens.

Habryka, if you create the "discussion thread" you've referenced here, I will commit to leaving at least one comment on every project idea; this seems like a really good way to test the capabilities of the Forum as a place where projects can be evaluated.

(It would be nice if participants shared a Google Doc or something similar for each of their ideas, since leaving in-line comments is much better than writing a long comment with many different points, but I'm not sure about the best way to turn "comments on a doc" into something that's also visible on the Forum.)

Comment by aarongertler on EA jobs provide scarce non-monetary goods · 2019-03-21T06:59:44.226Z · score: 9 (3 votes) · EA · GW

Good post! I share Greg's doubts about the particular question of salaries (and think that lowering them would have several bad consequences), but I think you've summed up most of the major things that people get, or hope to get, from jobs at EA organizations.

Other than your reasons and "money", I'd include "training"; if you want to learn to do Open Phil-style research, working at Open Phil is the most reliable way to do this.

When I started at GiveWell, I was surprised at how people in these circles treated me when they found out I was working there, even though I was an entry-level employee.

Are there any examples of this that stand out to you? I can certainly believe that it happened, but I'm having trouble picturing what it might look like.

(Since I began working at CEA five months ago, I haven't noticed any difference in the way my interactions with people in EA have gone, save for cases where the interaction was directly related to my job. But perhaps there are effects for me, too, and I just haven't spotted them yet.)

Somewhat mixed in with the above points, I think there's a lot of value to be had from feeling like a member of a tribe, especially a tribe that you think is awesome. I think working at a professional EA organization is the closest thing there is to a royal road to tribal membership in the EA community.

I think you're right that EA work is a quick way to feel like part of the tribe, and that's something I'd like to change.

So I'll repeat what I've said in the comments of other posts: If you believe in the principles of EA, and are taking action on them in some way (work, research, donations, advocacy, or taking steps to do any of those things in the future), I consider you a member of the EA "tribe".

I can't speak for any other person in EA, but from what I've heard in conversations with people at many different organizations, I think that something like my view is fairly common.

Comment by aarongertler on EA London Community Building Lessons Learnt - 2018 · 2019-03-21T02:04:00.239Z · score: 5 (3 votes) · EA · GW

https://80000hours.org/podcast/episodes/kelsey-piper-important-advocacy-in-journalism/

Comment by aarongertler on I'll Fund You to Give Away 'Doing Good Better' - Surprisingly Effective? · 2019-03-21T02:02:08.241Z · score: 2 (1 votes) · EA · GW

That version does sound better. One more suggested version:

Thank you for taking the time to share what you've done. Since we also asked about your future plans, could we follow up with one more short survey a year from now, to see what happened?

If that's alright with you, please enter your email address below - it will not be shared with anyone, or used for any other purpose.

I'm hoping this feels a bit less high-pressure than "what you may still do", but you could also remove "to see what happened" to help with that.

Comment by aarongertler on Sharing my experience on the EA forum · 2019-03-21T01:55:51.911Z · score: 4 (3 votes) · EA · GW

I agree that this doesn't run into the first two problems, though it could make giving anonymous feedback even more tempting. More practically, it seems like it would be pretty annoying to code, and provide less value than similarly tech-intensive features that are being worked on now. If I hear a lot of other calls for an "anonymous feedback" option, I may consider it more seriously, but in the meantime, I'll keep pushing for open, honest criticism.

I haven't read every comment on every post, but so far, I've seen barely any posts or comments on the new version of the Forum where someone was criticized and reacted very negatively. Mostly, reactions have been like this post (asking for more details) or have shown someone updating their views/adding detail and nuance to their arguments.

Comment by aarongertler on EA London Community Building Lessons Learnt - 2018 · 2019-03-19T23:51:37.985Z · score: 2 (1 votes) · EA · GW

When you are events focused, you are competing with many things - family, friends, hobbies, Netflix, cinema, etc. If your focus is more on helping people doing good, it’s no longer about having people turn up to an event, it’s about keeping people up to date with relevant info that is helpful for them. When there is a relevant opportunity for them to do something in person, they might be more inclined to do so.

I really like this point, and the related Kelsey Piper quote. EA, like any social movement, is likely to grow and succeed largely based on how helpful it is for its members. Having a "what can I do for you?" mindset has been really useful to me in my time running a couple of different EA groups (and working at CEA).

--

When you say that Meetup.com "gave a worse impression of effective altruism", do you mean that it actually seemed to have negative value, or just that it was worse than Facebook because it didn't give you an easy way to contact people soon after they'd joined? If the former, can you talk about any specific negative effects you noticed? (One of the groups I'm affiliated with is still using Meetup, so I'm quite curious about this.)

Comment by aarongertler on I'll Fund You to Give Away 'Doing Good Better' - Surprisingly Effective? · 2019-03-19T23:39:46.918Z · score: 7 (6 votes) · EA · GW

Fantastic post, Jeremy! I'm a bit biased, since I had the chance to see earlier drafts, but I really like the generous spirit of this initiative, and it seems like a low-risk, high-potential way to grow the community. It's very kind of you to offer funding to others who want to try their own giveaways.

In fact, I might just try this myself come Giving Season; I've set a reminder in my calendar to think about it on November 15th. Thanks for the idea.

Regarding the survey: Consider changing the wording on question #9:

The bulk of the impact from introducing people to Effective Altruism probably happens over the long term. If you think you might make future changes, or you generally agree with the principles of the book, we'd love to be able to check in with you in a year, to see how things are going.

I'd remove the section in bold. If people are really interested in EA, they'll hopefully give you contact information either way; if they're on the fence, they might feel a bit objectified being referred to as sources of impact, or guilty about donating once and planning not to do so in the future (I can imagine giving $100 to GiveWell, then seeing the survey and losing my warm glow because I haven't had "the bulk of my impact").

This is a highly speculative suggestion, though, and I don't think it makes a big difference either way.

Comment by aarongertler on Sharing my experience on the EA forum · 2019-03-19T23:37:14.504Z · score: 7 (5 votes) · EA · GW

I don't love the idea (suggested by one comment here) of having separate anonymous feedback, for these reasons:

  • Public feedback allows people to upvote comments if they agree (very efficient for checking on how popular a view is)
  • Public feedback makes it easier for the author to respond
  • Most importantly, public feedback generally strengthens our norm of "it's okay to criticize and to be criticized, because no one is perfect and we're all working together to improve our ideas".

Of course, these factors have to be balanced against the likelihood that anonymous feedback mechanisms will allow for more and more honest feedback, which is a considerable upside. But I'd hope that the EA community, of all groups, can find a way to thrive under a norm of transparent feedback.

Comment by aarongertler on Sharing my experience on the EA forum · 2019-03-19T23:34:33.304Z · score: 19 (8 votes) · EA · GW

It looks like Jan's comment on your other post was heavily upvoted, indicating general agreement with his concerns, but I'd hope that people with other concerns would have written about them.

I've recommended before that people try to avoid downvoting without either explaining their reasoning or upvoting a response that matches their views. I've been happy to see how common this is, though there's still room for improvement.

Please keep posting and sharing your ideas -- one of the Forum's core purposes is "helping new people with ideas get feedback", and no one entered the EA community with only good ideas to share. (As far as "initial experience with forum use" goes, you're still doing a lot better than GiveWell's Holden Karnofsky circa 2007.)

Comment by aarongertler on Concept: EA Donor List. To enable EAs that are starting new projects to find seed donors, especially for people that aren’t well connected · 2019-03-19T23:17:33.281Z · score: 2 (1 votes) · EA · GW

I agree with this point. Even in the startup world, where due diligence is common, most projects fail after spending a lot of money, achieving very little impact in the process.

In the case of EA projects, even a project that doesn't have negative value can still lead to a lot of "waste": There's a project team that spent time working on something that failed (though perhaps they got useful experience) and one or more donors who didn't get results.

Hits-based giving (which focuses on big successes even at the cost of some failure) is a useful approach, but in order for that to work, you do need a project that can at least plausibly be a hit, and no idea is strong enough to create that level of credibility by itself. Someone needs to get to know the team's background and skills, understand their goals, and consider the reasons that they might not reach those goals.

Side note: I hope that anyone who independently funds an EA project considers writing a post about their decision, as Adam Gleave did after winning the 2017 donor lottery.

Comment by aarongertler on Justice, meritocracy, and triage · 2019-03-19T23:04:03.587Z · score: 2 (1 votes) · EA · GW

I like the use of the "non-X" concept (which is new to me) to explore post-scarcity, a topic that has been talked about a lot within EA. Something like a universal basic income has a lot of popular support among members of this community, and there's a lot of writing on "how good the world could be, if we do things right and don't experience a catastrophe from which we can't recover".

Some resources you might like, if you haven't seen them yet:

Comment by aarongertler on A guide to improving your odds at getting a job in EA · 2019-03-19T22:44:58.909Z · score: 13 (10 votes) · EA · GW

I agree with Denise's concerns about the time involved in following these suggestions, but I also think there are good lessons worth pointing out here. Some notes:

  • Consider that "EA organization" refers to a very small group of nonprofits, which collectively hire... 50 people each year? Remove GiveWell and the Open Philanthropy Project (which have their own detailed guidelines on what they look for in applicants), and I'd guess that the number drops by half or more. Many of the positions recommended by 80,000 Hours require deep expertise in a particular topic; research and volunteering can help, but questions of general EA knowledge/experience aren't likely to be as important. If you want to work on AI alignment, focus on reading CHAI's bibliography rather than, say, the EA Forum.
  • As far as volunteering, research, and other projects go, quality > quantity. Years of reading casually about EA and posting on social media don't hurt, but these factors aren't nearly as important as a work reference who raves about your skills as a volunteer, or a Forum post that makes a strong contribution to the area you want to work on.
If you want an operations job and you wrote a blog post about the comparison of top online operational resource courses, then you are a person EA organisations are interested in talking to.

This only holds true if the post was useful, helping EA orgs solve a problem they had or getting strong positive feedback from people who used it to select a course. There's a lot of writing in the EA blogosphere; much of it is great, but some posts just never find an audience. Again, quality > quantity; it's better to spend a lot of time figuring out which post idea is likely to have the most impact, then produce the best version you can, than to publish a lot of posts you didn't have time to think about as carefully.

(This doesn't mean that the Forum itself doesn't encourage unpolished work -- we're happy to see your ideas! -- but that the writing most likely to demonstrate your practical skills is writing that you've polished.)

--

As an aside: I'm not a career coach by any means, but I've worked in EA operations and EA content, and I've talked to a lot of different organizations about what they look for in applicants. If you have particular questions about applying to an org in/adjacent to EA, you're welcome to comment here or email me (though it's possible that my advice will consist of "ask these questions to the organization" or "read this article they wrote about what they want").

--

I work for CEA, but these views are my own.

Comment by aarongertler on [Link] A Modest Proposal: Eliminate Email · 2019-03-18T20:24:32.499Z · score: 4 (3 votes) · EA · GW

Slack's not perfect, but here are some features I like:

  • Emotes let you "respond" to a message in less than a second with zero typing. At CEA, we have an "eyes" emote that means "I've seen this message", which saves me 30 seconds over sending a "thanks for sending this, I've read it" email. We have lots of other emotes that stand in for other kinds of quick messages. I send a lot less email at CEA than I did in my most recent corporate job, at a tech firm with pretty standard messaging practices.
  • Channels act as a proactive sorting system. CEA has an "important" channel for time-sensitive things that everyone should read and a "general" channel for things that everyone should read, but that aren't time-sensitive. If all the messages on those channels were emails, I'd wind up reading them all as they came in, but in Slack I can ignore most of them until I hit the time in my day when I want to catch up on messages, without spending any energy on sorting.

Slack also has a feature that lets you set "statuses" in the same way the HBR article discusses (e.g. "working on important thing, available after 4:00 pm"), which takes less time than writing an auto-reply and also doesn't add dozens of automated emails to other people's inboxes when they try contacting you.

Comment by aarongertler on The Importance of Truth-Oriented Discussions in EA · 2019-03-18T20:16:37.799Z · score: 11 (6 votes) · EA · GW

1. I'd really recommend finding a different phrase than "low levels of emotional control", which is both more insulting than seems ideal for conversations in an EA context and too vague to be a useful descriptor. (There are dozens of ways that "controlling one's emotions" might be important within EA, and almost no one is "high" or "low" for all of them.)

2. "Less welcoming for everyone else" is too broad. Accommodating people who prefer some topics not be brought up certainly makes EA less welcoming for some people: Competing access needs are real, and a lot of people aren't as comfortable with discussions where emotions aren't as controlled, or where topics are somewhat limited.

But having "high emotional control" (again, I'd prefer a different term) doesn't necessarily mean feeling unwelcome in discussions with people who are ideological or "less controlled" in some contexts.

One of the features I like most in a community is "people try to handle social interaction in a way that has the best average result for everyone".

I'd consider "we figure out true things" to be the most important factor we should optimize for, and our discussions should aim for "figuring stuff out". But that's not the only important result; another factor is "we all get along and treat each other well", because there's value in EA being a well-functioning community of people who are happy to be around each other. If having a topic consistently come up in conversation is draining and isolating to some members of the community, I think it's reasonable that we have a higher bar for that topic.

This doesn't mean abandoning global poverty because people think it seems colonialist; it might mean deciding that someone's Mormon manifesto doesn't pass the bar for "deserves careful, point-by-point discussion". That isn't very inclusive to the manifesto's author, but it seems very likely to increase EA's overall inclusiveness.

Comment by aarongertler on The Importance of Truth-Oriented Discussions in EA · 2019-03-18T08:07:08.377Z · score: 3 (5 votes) · EA · GW

I work for CEA, but the following views are my own. I don't have any plans to change Forum policy around which topics are permitted, discouraged, etc. This response is just my attempt to think through some considerations other EAs might want to make around this topic.

--

While we all have topics on which our emotions get the better of us, those who leave are likely to be overcome to a greater degree and on a wider variety of topics. This means that they will be less likely to be able to contribute productively by providing reasoned analysis. But further than this, they are more likely to contribute negatively by being dismissive, producing biased analysis or engaging in personal attacks.

I don't really care how likely someone is to be "overcome" by their emotions during an EA discussion, aside from the way in which this makes them feel (I want people in EA, like people everywhere, to flourish).

Being "overcome" and being able to reason productively seem almost orthogonal in my experience; some of the most productive people I've met in EA (and some of the nicest!) tend to have unusually strong emotional reactions to certain topics. There are quite a few EA blogs that alternate between "this thing made me very angry/sad" and "here's an incredibly sophisticated argument for doing X". There's some validity to trying to increase the net percentage of conversation that isn't too emotionally inflected, but my preference would be to accommodate as many productive/devoted people as we can until it begins to trade off with discussion quality. I've seen no evidence that we're hitting this trade-off to an extent that demands we become less accommodating.

(And of course, biased analysis and personal attacks can be handled when they arise, without our needing to worry about being too inclusive of people who are "more likely" to contribute those things.)

The people who leave are likely to be more ideological. There is generally an association between being more radical and being more ideological, even though there are also people who are radical without being ideological. People who are more ideological are less able to update in the face of new evidence and are also less likely to be able to provide the kind of reasoned analysis that would cause other EAs to update more towards their views.

See the previous point. I don't mind having ideological people in EA if they share the community's core values. If their commitment to an ideology leads them to stop upholding those values, we can respond to that separately. If they can provide reasoned analysis on Subject A while remaining incorrigibly biased on Subject B, I'll gladly update on the former and ignore the latter. (Steven Pinker disagrees with many EAs quite sharply on X-risk, but most of his last book was great!)

Comment by aarongertler on The Importance of Truth-Oriented Discussions in EA · 2019-03-18T08:04:57.545Z · score: 10 (10 votes) · EA · GW

I work for CEA, but the following views are my own. I don't have any plans to change Forum policy around which topics are permitted, discouraged, etc. This response is just my attempt to think through some considerations other EAs might want to make around this topic.

--

Even when there is a cost to participating, someone who considers the topic important enough can choose to bear it.

This isn't always true, unless you use a circular definition of "important". As written, it implies that anyone who can't bear to participate must not consider the topic "important enough", which is empirically false. Our capacity to do any form of work (physical or mental) is never fully within our control. The way we react to certain stimuli (sights, sounds, ideas) is never fully within our control. If we decided to render all the text on the EA Forum at a 40-degree angle, we'd see our traffic drop, and the people who left wouldn't just be people who didn't think EA was sufficiently "important".

In a similar vein:

The more committed [you are] to a cause, the more you are willing to endure for it. We agree with CEA that committed EAs are several times more valuable than those who are vaguely aligned, so that we should [be] optimising the movement for attracting more committed members.

Again, this is too simplistic. If we could have 100 members who committed 40 hours/week or 1000 members who committed 35 hours/week, we might want to pursue the second option (35,000 total weekly hours vs. 4,000), even if we weren't "optimizing for attracting more committed members". (I don't speak for CEA here, but it seems to me like "optimize the amount of total high-fidelity and productive hours directed at EA work" is closer to what the movement wants, and even that is only partly correlated with "create the best world we can".)

You could also argue that "better" EAs tend to take ideas more seriously, that having a strong negative reaction to a dangerous idea is a sign of seriousness, and that we should therefore be trying very hard to accommodate people who have reportedly had very negative reactions to particular ideas within EA. This would also be too simplistic, but there's a kernel of truth there, just as there is in your statement about commitment.

Even if limiting particular discussions would clearly be good, once we’ve decided to limit discussions at all, we’ve opened the door to endless discussion and debate about what is or is not unwelcoming (see Moderator’s Dilemma). And ironically, these kinds of discussions tend to be highly partisan, political and emotional.

The door is already open. There are dozens of preexisting questions about which forms of discussion we should permit within EA, on specifically the EA Forum, within any given EA cause area, and so on. Should we limit fundraising posts? Posts about personal productivity? Posts that use obscene language? Posts written in a non-English language? Posts that give investing advice? Posts with graphic images of dying animals? I see "posts that discuss Idea X" as another set of examples in this very long list. They may be more popular to argue about, but that doesn't mean we should agree never to limit them just to reduce the incidence of arguments.

We note that such a conclusion would depend on an exceptionally high quantity of alienating discussions, and is prima facie incompatible with the generally high rating for welcomingness reported in the EA survey. We note that there are several possible other theories.

I don't think the authors of the Making Discussions Inclusive post would disagree. I don't see any conclusion in that post that alienating discussions are the main factor in the EA gender gap; all I see is the claim, with some evidence from a poll, that alienating discussions are one factor, along with suggestions for reducing the impact of that particular factor.

It is worthwhile considering the example of Atheism Plus, an attempt to insist that atheists also accept the principles of social justice. This was incredibly damaging and destructive to the atheist movement due to the infighting that it led to and was perhaps partly responsible for the movement’s decline.

I don't have any background on Atheism Plus, but as a more general point: Did the atheism movement actually decline? While the r/atheism subreddit is now ranked #57 by subscriber count (as of 13 March 2019) rather than #38 (4 July 2015), the American atheist population seems to have been fairly flat since 1991, and British irreligion is at an all-time high. Are there particular incidents (organizations shutting down, public figures renouncing, etc.) that back up the "decline" narrative? (I would assume so, I'm just unfamiliar with this topic.)

Comment by aarongertler on The Importance of Truth-Oriented Discussions in EA · 2019-03-18T08:03:05.226Z · score: 19 (11 votes) · EA · GW

I work for CEA, but the following views are my own. I don't have any plans to change Forum policy around which topics are permitted, discouraged, etc. This response is just my attempt to think through some considerations other EAs might want to make around this topic.

--

There were some things I liked about this post, but my comments here will mostly involve areas where I disagree with something. Still, criticism notwithstanding:

  • I appreciate the moves the post makes toward being considerate (the content note, the emphasis on not calling out individuals).
  • Two points from the post that I think are generally correct and somewhat underrated in debates around moderation policy: You can't please everyone, and power relations within particular spaces can look very different from power relations outside of those spaces. This also rang true (though I consider it a good thing for certain "groups" to be disempowered in public discussion spaces):
There is a negative selection effect in that the more that a group is disempowered and could benefit from having its views being given more consideration, the less likely it is to have to power to make this happen.
  • The claim that we should not have "limited discussions" is closing the barn door after the horse is already out. The EA Forum, like almost every other discussion space, has limits already. Even spaces that don't limit "worldly" topics may still have meta-limits on style/discourse norms (no personal attacks, serious posts only, etc.). Aside from (maybe?) 4Chan, it's hard to think of well-known discussion spaces that truly have no limits. For example, posts on the EA Forum:
    • Can't advocate the use of violence.
    • Are restricted in the types of criticism they can apply: "We should remove Cause X from EA because its followers tend to smell bad" wouldn't get moderator approval, even if no individually smelly people were named.

--

While I don't fully agree with every claim in Making Discussions Inclusive, I appreciated the way that its authors didn't call for an outright ban on any particular form of speech -- instead, they highlighted the ways that speech permissions may influence other elements of group discussion, and noted that groups are making trade-offs when they figure out how to handle speech.

This post also mostly did this, but occasionally slipped into more absolute statements that don't quite square with reality (though I assume one is meant to read the full post while keeping the word "usually" in mind, to insert in various places). An example:

We believe that someone is excluded to a greater degree when they are not allowed to share their sincerely held beliefs than when they are merely exposed to beliefs that they disagree with.

This seems simplistic. The reality of "exclusion" depends on which beliefs are held, which beliefs are exposed, and the overall context of the conversation. I've seen conversations where someone shoehorned their "sincerely held beliefs" into a discussion to which they weren't relevant, in such an odious way that many people who were strained on various resources (including "time" and "patience") were effectively forced out. Perhaps banning the shoehorning user would have excluded them to a “greater degree”, but their actions excluded a lot of people, even if to a “lesser degree”. Which outcome would have been worse? It’s a complicated question.

I'd argue that keeping things civil and on-topic is frequently less exclusionary than allowing total free expression, especially as conversations grow, because some ideas/styles are repellent to almost everyone. If someone insists on leaving multi-page comments with Caps Lock on in every conversation within a Facebook group, I'd rather ask them to leave than ask the annoyed masses to grit their teeth and bear it.

This is an extreme example, of course, so I'll use a real-world example from another discussion space I frequent: Reddit.

On the main Magic: The Gathering subreddit, conversations about a recent tournament winner (a non-binary person) were frequently interrupted by people with strong opinions about the pronoun "they" being "confusing" or "weird" to use for a single person.

This is an intellectual position that may be worth discussing in other contexts, but in the context of these threads, it appeared hundreds of times and made it much more tedious to pick out actual Magic: The Gathering content. Within days, these users were being kicked out by moderators, and the forum became more readable as a result, to what I'd guess was the collective relief of a large majority of users.

--

The general point I'm trying to make:

"Something nearly everyone dislikes" is often going to be worth excluding even from the most popular, mainstream discussion venues.

In the context of EA, conversations that are genuinely about effective do-gooding should be protected, but I don't think several of your examples really fit that pattern:

  • Corruption in poor countries being caused by "character flaws" seems like a non sequitur.
    • When discussing ways to reduce corruption, we can talk about history, RCT results, and economic theory -- but why personal characteristics?
    • Even if it were the case that people in Country A were somehow more "flawed" than people in Country B, this only matters if it shows up in our data, and at that point, it’s just a set of facts about the world (e.g. “government officials in A are more likely to demand bribes than officials in B, and bribery demands are inversely correlated with transfer impact, which means we should prefer to fund transfers in B”). I don't see the point of discussing the venality of the A-lish compared to the B-nians separately from actual data.
  • I think honest advocates for cash-transfer RCTs could quite truthfully state that they aren't trying to study whether poor people are "lazy". Someone's choice not to work doesn't have to be the target of criticism, even if it influences the estimated benefit of a cash transfer to that person. It's also possible to conclude that poor people discount the future without attaching the "character flaw" label.
    • Frankly, labels like this tend to hinder discussion more than they help, by obscuring actual data and creating fake explanations ("poor people don't care as much about the future, which is bad" < "poor people don't care as much about the future, but this is moderated by factors A and B, and is economically rational if we factor in C, and here's a model for how we can encourage financial planning by people at different income levels").
    • The same problem applies to your discussion of female influence and power; whether or not a person's choices have led them to have less power seems immaterial to understanding which distributions of power tend to produce the best outcomes, and how particular policies might move us toward the best distributions.

To summarize the list of points above: In general, discussions of whether a state of the world is "right", or whether a person is "good" or "deserving", don't make for great EA content. While I wouldn't prohibit them, I think they are far more tempting than they are useful, and that we should almost always try to use "if A, then B" reasoning rather than "hooray, B!" reasoning.

Of course, "this reasoning style tends to be bad" doesn't mean "prohibit it entirely". But it makes the consequence of limiting speech topics seem a bit less damaging, compared to what we could gain by being more inclusive. (Again, I don’t actually think we should add more limits in any particular place, including the EA Forum. I’m just pointing out considerations that other EAs might want to make when they think about these topics.)

Comment by aarongertler on Spencer Greenberg survey on animal welfare · 2019-03-18T07:35:05.902Z · score: 2 (1 votes) · EA · GW

1. I'd recommend turning this into a Question post (you can do this using the "Ask Question" link in the same menu you used to create your post). This isn't mandatory, but quick questions like this are the reason we built the Question feature.

2. I'll send this post to Spencer and see whether he knows of a public link to the survey's results.

Comment by aarongertler on Potential funding opportunity for woman-led EA organization · 2019-03-18T07:31:23.983Z · score: 2 (1 votes) · EA · GW

J-PAL would be very unlikely to qualify (they may have done a bit of work in Mexico, but I'm not aware of them having any coverage in the U.S. or Canada). I'd recommend checking that before you take the time to nominate them. (Still, thanks for broadcasting this award in the first place; it's nice to know that people are keeping an eye out for good funding opportunities like this.)

Comment by aarongertler on [Link] A Modest Proposal: Eliminate Email · 2019-03-18T07:28:14.855Z · score: 3 (2 votes) · EA · GW

As far as I'm aware, most of the biggest EA organizations are heavy users of Slack, which is somewhat better on these fronts than email. They're also generally friendly to researchers who have a personal policy of checking email infrequently (as it's widely recognized how distracting email can be).

I'm in favor of much of what this article recommends; I just think we're on that path already. (I'd be interested to see concrete anti-email suggestions that could push us even further, though!)

Comment by aarongertler on Effective Altruism and Meaning in Life · 2019-03-18T07:18:46.776Z · score: 18 (13 votes) · EA · GW

Notes from an angel of feedback who generally liked the post, but as usual, will comment mostly when he sees a chance to be constructive or invite a response:

  • Thanks for using headers, summarizing the post up front (including the reasoning for the unusual style), and asking for feedback at the end (in a creative, good-humored way).
  • The style started out funny/light, but started to weigh down the prose by the "epiphany" section. Even in jest, I'm wary of using the name "St. X" to describe anyone in EA; that can be easy to misinterpret for outside readers, and I'd guess that it would also make some of the people so described pretty uncomfortable.
  • The fundamental point I took away from this is one I've also argued for in recent weeks: If you want to live by something like EA principles, you should try to do the best you can with the resources you have to offer ("have to offer" =/= "have", you don't need to give everything you can spare or even close to that amount).
    • On the career front, this means you should do some kind of work that is some combination of intrinsically and consequentially fulfilling. It's good to aim for what you believe to be the highest-impact work you can do, but you aren't "obligated" to optimize your career (just like you aren't "obligated" to do anything else).
    • And if you don't get a job you applied for, that doesn't imply that you should feel despair, or like the person who did get the job is somehow "better" than you. There's a very big difference between "people who seem to be doing the most impactful work" and "people who are the most 'legit'/'respectable' in the community". If you believe in the principles of EA and are trying to live in a way that does as much good as possible for other people given the resources you have to offer, you don't have anything left to prove.
    • If three people try applying to the same jobs and end up in positions A, B, and C, and it turns out that, a century from now, we know that B was the highest-impact job, this doesn't mean that the person with that job was "more important" or "better" or anything like that. What matters is that each person tried to find a way to have an impact as best they could. We're all part of the story of effective altruism.
      • Claudette Colvin's activism was very similar to that of Rosa Parks. It turned out that Rosa Parks became much more famous, with a story that was more influential in the Civil Rights movement. This doesn't make Parks a "better Civil Rights participant" than Colvin.

Regarding the artists you mentioned:

  • Yes, people with outstanding talent in an area should be wary of giving that up to focus on something that seems more effective. Being world-class at anything can be really impactful.
  • I'd be remiss not to point out that almost all of history's would-be Spinozas, Beethovens, Kahlos, and Vonneguts never produced anything that stood the test of time. It's possible that for every musician who makes a good decision by not becoming an accountant, there are two musicians who make poor decisions by not becoming accountants. (Of course, thinking about ways to make an impact through your art/talent can be a very promising path, whether that means "convincing fellow poker professionals to give to EA charities" or "putting on concerts at EA Global".)
  • Every historical issue can be approached in many ways, some of which are more likely to work than others. Even if we assume that MLK et al. found the optimal strategies for their situations, there are important differences between segregation, factory farming, and AI risk. You can find a historical analogue for nearly anything you want to do, but it's still better to look at the features of your situation, consider your options, and choose something that seems like the best fit for the specific case at hand.
Altruistic are those who hunger and thirst for positive world-change, for the expected result is such transformation.

The "expected result" seems to be "very little or no change", based on the track record of people who have tried to change the world over the millennia. Many revolutions are bloody failures. Many movements peter out and vanish, or find themselves on the wrong side of history. Hungering and thirsting help, but thinking, planning, and taking action are paramount.

Despite hundreds of thousands of monthly unique website visitors and tens of thousands of listeners for each podcast, 80K's job board typically has contained only a few dozen jobs. Predictably, many hundreds of people began applying for the same positions at the likes of Open Phil. Incredibly talented people started spending many months applying to a dozen or two EA jobs only to wind up empty-handed.

I don't know about historical numbers, but the 80K board currently lists somewhere between 150 and 200 jobs, plus a list of recommended organizations that might be able to create a new job for the right candidate. There are also many other resources for jobs that are EA-adjacent, EA-aligned, or at least "promising ways to get involved in something that could help you make an impact later". EA jobs are less rare than they seem at first.

Part of this feeling seems to come from something like the Friendship Paradox, though perhaps there's an even better mathematical analogue I'm missing:

If 900 people apply to Open Phil and 100 people apply across 19 other jobs, then 95% of the jobs will be not-super-competitive, but 90% of the applicants will have experienced a lot of competition (see the toy calculation below). Does that make the EA job market competitive, or does it mean that most applicants have quite narrow preferences about the work they want to do?*

(I applied to ~10 positions across EA last year, and in a few cases, was one of three or fewer applicants.)

*This is a trick question. The answer is "both things are true to some extent", which is almost always the answer in this situation.
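To make the arithmetic above concrete, here's a minimal sketch in Python; the applicant and job counts are the hypothetical figures from the example, not real data:

```python
# Hypothetical numbers from the example above (not real application data).
open_phil_applicants = 900  # applicants to the single most competitive job
other_applicants = 100      # applicants spread across the remaining jobs
other_jobs = 19

total_jobs = other_jobs + 1
total_applicants = open_phil_applicants + other_applicants

# 19 of 20 jobs saw relatively little competition...
share_uncompetitive_jobs = other_jobs / total_jobs  # 0.95

# ...yet 9 of 10 applicants faced heavy competition.
share_heavily_contested = open_phil_applicants / total_applicants  # 0.90

print(f"{share_uncompetitive_jobs:.0%} of jobs were not very competitive")
print(f"{share_heavily_contested:.0%} of applicants faced heavy competition")
```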

--

I work for CEA, but these views are my own.

Comment by aarongertler on Open Thread #44 · 2019-03-15T20:08:24.033Z · score: 2 (1 votes) · EA · GW

In my experience, the "most of our lives indoors" topic comes up in conversation at least occasionally within EA, sometimes accompanied by discussion of how important it is to go outside and move around. I often tell people that I go "stir-crazy" if I don't get in a long walk at least once a day, and I've gotten zero confused reactions and quite a few "me too" reactions.

I don't think the topic gets written about very often, but these Slate Star Codex posts (1, 2) did catalyze a lot of experiments with indoor plants.

Comment by aarongertler on Potential funding opportunity for woman-led EA organization · 2019-03-14T02:48:26.605Z · score: 6 (4 votes) · EA · GW

I'd recommend changing the title of this post to something like: "Funding opportunity for woman-led EA organization", so that people know what it's about before clicking (and to reduce the chance that people confuse it for a fundraising post, which I did at first).

Comment by aarongertler on Bayesian Investor proposes you can predictably beat the market by ~3% following a simple and easy strategy · 2019-03-14T02:47:08.249Z · score: 6 (4 votes) · EA · GW

I think that the post's points on markets not being fully efficient are reasonable, but I also think that any reliable strategy will see its value get eaten pretty quickly, unless it relies on something that only a few people know about.

It seems like the author agrees:

All of the approaches I’ve mentioned are likely to outperform by less than history suggests, due to the increased amount of investment attempting to exploit them. That increased investment over the past decade or two doesn’t look large compared to the magnitude of the inefficiencies, but if trading volume on these etf’s becomes comparable to that of leading etf’s, I’d expect the benefits of these etf’s to go to zero.

The 3% number may already be accounting for this trend, but even if it does, this approach comes with (I assume) a bit more risk than a standard index-fund strategy, plus the need to occasionally rebalance one's portfolio, the fear that comes with sometimes underperforming the market, etc.

And of course, the outside view lets us notice that nearly every person who ever thought they could reliably beat the market was wrong, including many people who were at least as well-informed as the author. That reduces my expectation for the benefits of this strategy. I'd put higher credence on it working than I would for most investor-recommended strategies, since I have a pretty high opinion of the author's past work, but I wouldn't go so far as to advocate that any particular EA follow the strategy to the letter (especially since each person has their own financial strategy/goals).

My approach, as someone who doesn't want to spend a lot of time thinking about small percentage gains/messing around with rebalancing and such, is to just use Betterment (Wealthfront is equally good), an index-fund startup that handles rebalancing and tax benefits for you. I've had solid, slightly-above-market returns from them for years, and the interface is really good.

Comment by aarongertler on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T02:32:47.931Z · score: 15 (10 votes) · EA · GW

In general, for posts like this that lay out an argument point by point, I'd strongly recommend adding section headers (highlight text in your editor and click the "T" button to create a header). This will give you a cool floating table of contents next to your post and make it easier to navigate.

(To see what this would look like, see this post for one example.)

Comment by aarongertler on Potential funding opportunity for woman-led EA organization · 2019-03-14T02:29:16.795Z · score: 4 (3 votes) · EA · GW

This is a fine place to post questions like this!

A few EA-aligned organizations with female leaders (not a comprehensive list):

J-PAL may also be the most "entrepreneurial" organization on the list, though SCI's direct work seems to make them a closer fit for the types of organizations that have won in past years.

Questions:

1. Do you have any personal connection to the award/program?

2. Do you know how many charities were nominated last year?

3. Are you planning to nominate an organization, and would it help if multiple people nominated the same organization?

Comment by aarongertler on Unsolicited Career Advice · 2019-03-13T07:41:44.642Z · score: 4 (2 votes) · EA · GW

I don't think many EA organizations have implicit experience barriers of this kind. The roles that have been most hotly contested, with the most applicants, really are "generalist" positions (e.g. research associates at GiveWell or Open Phil, where more or less everyone [I think] gets to take the first basic work test, and credentials don't seem like they matter if your work tests are strong).

Meanwhile, many positions at CEA (where I currently work) are filled by people who'd never done that kind of work in a professional setting before, or who only had experience with some (but not all) aspects of the job. My job is to write and edit, and my only prior "paid" writing/editing experience was in random freelance work here and there plus a few bits of journalism I wrote as a college student in ~2013, adding up to maybe one year of "full-time" experience.

That said, there may be non-experience "barriers" that should come into play earlier, relating to a candidate's skill in a particular area... but that seems like "raise standards for your initial round of work tests", not "try to stop people from applying in the first place".

Comment by aarongertler on Charity ranking sheet I made · 2019-03-13T03:03:34.566Z · score: 4 (3 votes) · EA · GW

You'll probably get more informative comments if you include a bit more context around this ratings sheet. What led you to select these particular charities to focus on? What are some of the ways your model differs from those of GiveWell/ACE for charities they also evaluate?

Also, cells with comments in them have only a tiny tag to indicate this, which can be hard to see on a sheet this expansive and colorful. I'd recommend listing citations on a separate sheet for ease of use (or something along those lines).

Comment by aarongertler on SHOW: A framework for shaping your talent for direct work · 2019-03-13T01:53:57.592Z · score: 8 (5 votes) · EA · GW

Great post! It seems to line up with much of 80K's advice on building career capital, while adding additional points I don't think I've seen articulated by anyone else (or at least, not so clearly).

Organizations like MIRI and FHI have hundreds of applications per year for researcher roles, whereas the number of people per year who ask to join as research assistants are something like thirty times lower.

This is a really useful thing to point out. During my 2018 application process for EA jobs, I was one of several hundred research applicants to Open Phil and one of very few applicants to several operations or executive-assistant positions at other research organizations. In at least one case, I may have been the only applicant.

Ryan/Tegan: Did you get your "something like thirty times lower" estimate from any particular research organization(s)? Is it a best guess based on your participation in someone's hiring process, or some other element of your personal experience?

Of the four strategies, getting weird is probably the riskiest, and the one fewest people are suited for. Projects chosen at random from the list are overwhelmingly likely to be of no value whatsoever, so you’d have to rely on your (probably untested) ability to choose well in the face of little evidence.

Also really useful. I love lists of ideas for things to work on, and it's good to consider a wide range of things you could be doing, but the first step you take in any potential long-term project should involve serious analysis of the project's paths to impact and expected value.

You should also be thinking about ways to test your idea as quickly as you can, if that makes sense for your style of work. The Lean Startup is a classic book on this topic. (I liked it much more than I expected to; it's not just about the bits that have been quoted ad infinitum by founders.)

--

(I work for CEA, but these views are my own.)

Comment by aarongertler on I Am The 1% · 2019-03-13T01:40:58.685Z · score: 6 (4 votes) · EA · GW

I like that you cited multiple sources with different perspectives on the world income distribution, rather than simply asserting a single number that someone could easily challenge. It's important to recognize the uncertainty associated with data like this (though it's also important, of course, to accept that any realistic numbers would still lead to the same basic point about how globally wealthy the average developed-world citizen is).

Regarding the video:

  • You've done a very good thing by making a generous donation to Fistula Foundation! My comments on the form/text of the video don't take away from the fact that you've contributed to making a difference in the lives of suffering people.
  • In the past, many EA community groups ran Live Below the Line fundraisers, where members would live for a week using only a tiny amount of money (below the global extreme poverty line), subsisting on small amounts of basic food (but living in their usual homes/dorms/etc.). This seems similar to your campaign here.
    • In my experience, this led to some positive feedback, but also a lot of reasonable negative feedback. The latter came from people who pointed out that a short stint of temporary poverty was quite different from the lived experience of poverty, and that suggesting any sort of comparison between the experience of a hungry college student and someone living on $750/year wasn't a good idea. I'm neutral on that question myself, but empirically, there are risks to any fundraiser that involves or even hints at this kind of comparison.
  • The quote "if you think this lifestyle is a little odd, you are out of touch with how most people in the world live" seems harsh, and fairly inaccurate; only a small fraction of the global population is "homeless", and even the average poor person's home still offers some shelter from the elements (very few people live completely exposed to nature).

Comment by aarongertler on How to Understand and Mitigate Risk (Crosspost from LessWrong) · 2019-03-13T01:11:27.760Z · score: 3 (2 votes) · EA · GW

Strong upvote! I really like it when someone summarizes a lot of core concepts within a topic in one place. That makes it easier to see how the concepts relate to one another, and gives me a single source I can return to when I'm thinking about the topic later on. I expect to return to this post in the future, and I wouldn't be surprised to find myself sharing it with friends in the middle of a conversation.

Comment by aarongertler on Getting People Excited About More EA Careers: A New Community Building Challenge · 2019-03-12T10:06:05.810Z · score: 8 (6 votes) · EA · GW

I'll cross-link to a comment I just made on the original "EA jobs" thread, arguing for a point I expect to spend a lot of time expressing in the near future: Earning-to-give is, in fact, cool and respectable and worthy of admiration, even if it doesn't happen to be the highest-impact career you can find.

I haven't heard many people try to conflate "impact" with "coolness", and I try not to do so myself. Even if your job isn't at the top of the 80,000 Hours board, that doesn't mean you aren't doing something incredible with your life, or that your efforts don't matter in the grand scheme of things.

It is true that some work saves more lives in expectation, or further boosts "the odds of a flourishing future", etc. But it's not like we spend all our time reproaching ourselves for not starting multibillion-dollar companies or becoming World Bank executives, even though those "jobs" are probably higher-impact than Open Phil jobs.

If 100 people apply for a research role, all of whom really want to help the world as much as possible, and only 10 people get that role, does that imply that we've now somehow sorted those 100 people into "coolest" and "less cool"? If someone was having a bad week and submitted a poor work test, are they "less cool" than they would have been in the counterfactual world where their brain was firing on all cylinders and they got the job?

We should be working on important problems and using money to implement promising solutions. In the distant future, when we've seen how it all played out, we'll have a good sense for whose work turned out to be "most impactful", or which donor dollars made the biggest difference. But whether we're eating algae in a world ruled by ALLFED or celebrating Aaron Hamlin's election as world president via approval voting, I hope we'll still keep in mind that every person who worked or donated, every person who did their best to help the world through evidence and reason, was a part of the grand story.

Comment by aarongertler on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-12T09:51:22.991Z · score: 10 (9 votes) · EA · GW

I think that earning-to-give and donating to AMF and GiveDirectly is very cool. (I did this full-time for a while, and now advise a private foundation whose funders also do this full-time.)

In fact, I can't think of any people I've met within EA who don't think doing this is very cool, and I can only think of a few who would clearly "rank" ETG below other types of work in terms of "coolness". The most common reaction I've heard when people discuss their choice to pursue ETG or direct work outside of EA (for example, studying public health with an eye toward biosecurity or neglected tropical diseases) hasn't been "okay, good for you, too bad you don't work at an EA org". It's been "that's really wonderful, congratulations!"

(I do know that some people have heard something closer to the first reaction, which is disappointing -- and part of the reason I'm so forcefully expressing my beliefs here.)

Note that "coolness" is not the same as "impact"; personally, I think working at GiveWell is probably higher-impact than earning-to-give and donating $10,000/year. But that doesn't mean that working at GiveWell is cooler. In both cases, someone is devoting their life to helping others in a way that aligns with my core values.

The fact that the GiveWell person passed an extra work trial (assuming they both applied to GiveWell -- maybe they didn't, and the ETG person just really likes their job!) is trivial compared to the overarching fact of "holy cow, you're both using your lives to improve other lives, it doesn't get much cooler than that".

I'd feel exactly the same way about someone whose life didn't lead them down the "fancy four-year degree" path and who donates $1,000/year because that's really all they can spare. When it comes to my internal view of "coolness", it's actually the thought that counts, as long as the thought involves carefully considering the best ways to use one's resources.

Comment by aarongertler on Getting People Excited About More EA Careers: A New Community Building Challenge · 2019-03-12T09:40:52.528Z · score: 16 (7 votes) · EA · GW

I strongly second this view. Based on my experience working at a foundation (and talking to many global health/development researchers outside the "core" of EA), and on meeting many people at EA Global, MIRI workshops, etc., I find that I'm especially excited to meet someone from outside the community who brings a fresh perspective on an old topic, or who helps me learn about some new corner of the high-impact research world.

(Also, they sometimes have a lot more raw experience: a magazine editor just learning about EA may know more about effective public communication than people who work in communications within EA orgs, because they've been working in that field since before EA existed, in contexts where they were exposed to audiences 100x the size of what most EA orgs deal with.)

If I were to meet a fairly new UN employee at EA Global, I'd have just as many questions for them as for, say, a fairly new GiveWell researcher. The latter may work in an organization that is more tightly aligned with my values, but the former may have a sharper view of what the world looks like "up close".

Comment by aarongertler on EA Forum Prize: Winners for January 2019 · 2019-03-12T09:31:23.902Z · score: 2 (1 votes) · EA · GW

I like the idea of trying to be more granular with evaluation, though I don't like the idea of making judges do a lot more work. Right now, I'd estimate that the value of the time judges spend voting, plus the time CEA spends administering the prize, comes to more than half the cost of the prize itself.

I could see something like "divide up winnings by number of votes", since we have approval voting already, though that won't track impact very precisely (a post with one vote is probably less than 1/6 as "valuable" as a post that gets a unanimous vote from all 6 judges). I'll keep thinking about different systems, though I think the current amounts will be kept stable for at least another few months.
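To make the tradeoff concrete, here's a minimal sketch of the proportional scheme. (This is not how the Prize actually works; the posts, vote counts, and pool size below are all hypothetical.)

    # Split a prize pool in proportion to approval votes.
    # Hypothetical numbers; not CEA's actual payout system.
    def split_by_votes(pool, votes):
        """Allocate `pool` across posts, proportional to vote counts."""
        total = sum(votes.values())
        return {post: pool * n / total for post, n in votes.items()}

    # Example: a $1,000 pool, six judges, approval voting.
    print(split_by_votes(1000, {"Post A": 6, "Post B": 3, "Post C": 1}))
    # {'Post A': 600.0, 'Post B': 300.0, 'Post C': 100.0}

The last line is the problem in miniature: if a one-vote post is really worth less than 1/6 of a unanimous one, a strictly proportional split overpays it.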

Comment by aarongertler on Open Thread #44 · 2019-03-12T09:26:33.207Z · score: 2 (1 votes) · EA · GW

The London talks are all available on YouTube right now, though we are still in the process of releasing complete transcripts.

Comment by aarongertler on -0.16 ROI for Weight Loss Interventions (Optimistic) · 2019-03-12T09:25:18.477Z · score: 3 (2 votes) · EA · GW

Thanks for writing this post, and for leaving a brief summary at the top!

In general, for posts like this that lay out an argument point by point, I'd strongly recommend adding section headers (highlight text in your editor and click the "T" button to create a header). This will give you a cool floating table of contents next to your post and make it easier to navigate (even for someone who reads the whole thing, top to bottom).

Comment by aarongertler on EA Hotel Fundraiser 3: Estimating the relative Expected Value of the EA Hotel (Part 1) · 2019-03-12T09:16:56.824Z · score: 6 (5 votes) · EA · GW

As I read the post, something which stood out to me was the idea of "counting productive hours". Alongside that number, it seems like we'd also have to estimate something like "productivity per hour". (I may be getting a bit lost in the math here; let me know if one of the model's current factors is meant to represent this.)

Not all productive hours are equally productive or impactful. Some of the hours I've spent at CEA have been life-changing, while others were spent editing Facebook posts; all were "productive" in the sense that I was finishing necessary work, but part of my job is figuring out how to generate more of the most productive hours, not just "more hours of productive time". The same will be true for any project: some hours of "research" are much more valuable than others.

I suspect that some dynamics of working at an organization make "hours of productive time" easier to come by (e.g. a sudden inspiration from a colleague that would have taken you much longer to reach on your own), while others could make it harder ("overhead" or operational tasks that a lone researcher might not have to worry about).
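To illustrate the weighting point with a toy model (all numbers invented for illustration), two people with the same count of "productive hours" can produce very different value once each hour is weighted:

    # Toy model: weight "productive hours" by relative value per hour.
    # All figures are made up for illustration.
    def weighted_output(work):
        """Sum hours * relative value per hour across categories of work."""
        return sum(hours * value for hours, value in work.values())

    # (hours, relative value per hour)
    person_a = {"deep research": (10, 5.0), "admin": (30, 0.5)}
    person_b = {"deep research": (30, 5.0), "admin": (10, 0.5)}

    print(weighted_output(person_a))  # 65.0
    print(weighted_output(person_b))  # 155.0 -- same 40 "productive" hours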

Comment by aarongertler on CSER and FHI advice to UN High-level Panel on Digital Cooperation · 2019-03-11T23:37:37.894Z · score: 4 (3 votes) · EA · GW

Thanks for both of these answers! I'm pleasantly surprised by the strength and clarity of the positive feedback (even if some of it may result from the Cambridge name, as you speculated). I'm also surprised at the sheer number of submissions to these groups, and glad to see that CSER's material stands out.

Comment by aarongertler on What are some lists of open questions in effective altruism? · 2019-03-11T23:32:52.623Z · score: 2 (1 votes) · EA · GW

I'll add Sentience Institute's list of common disagreements in animal welfare, which Peter Hurford linked to in another thread. Thanks, Peter!

Other links added later:

Comment by aarongertler on CSER and FHI advice to UN High-level Panel on Digital Cooperation · 2019-03-09T01:32:36.500Z · score: 4 (3 votes) · EA · GW

This post and CSER's other advice post made me wonder how well one can gauge the effect of providing guidance to large governmental bodies.

For these or any past submissions, have you been able to gather evidence for how much CSER's advice mattered to an entire panel (or even just one member of a panel who took it especially seriously)?

Another question: Are any organizations providing advice to these panels that directly contradicts CSER's advice, or that seems to push in a bad or unimportant direction? It's hard to tell how much of this is "commonsense things everyone agrees on that just need more attention" vs. "controversial measures to address problems some people either don't believe in or think should be handled differently".

Comment by aarongertler on SHIC Will Suspend Outreach Operations · 2019-03-08T00:53:17.050Z · score: 26 (13 votes) · EA · GW

Strong upvote for publishing this summary. Reading it, I feel like I have a good sense of the program's timeline, logistics, and results. I also really appreciated the splitting up of metrics by "success level" and "importance" -- a lot of progress updates don't include the second of those, making them a lot less useful.

Sounds like any future project meant to teach EA values to high-school students will have to deal with the measurement problem (e.g. "high school students are busy and will often flake on non-high-school things"). Maybe some kind of small reward attached to surveys? At $10/person, that's $3,800 for all 380 students, which seems affordable given the scale of the program, though it might make social desirability bias even stronger.

Comment by aarongertler on What to do with people? · 2019-03-06T23:18:41.476Z · score: 8 (4 votes) · EA · GW

Upvoted. This is a good summary of several different community structures, and you represent the strengths of hierarchy well (though I wish there'd been examples of what an individual's "journey through the hierarchy process" might look like).

I think of EA as being fairly hierarchical already. There are dozens of different organizations geared toward different task/cause combinations; if you tell me you're a person with X experience who wants to do work of type Y in country Z, there's a good chance an organization exists for that, at least for the most common causes/most EA-populated countries.

There's also a reasonably large population of people within EA who can offer suggestions if you ask "what should I do next?" I sometimes see questions like that on Facebook or in CEA surveys (though not often on the Forum, yet), and I try to advise where I can. 80,000 Hours may not have the resources to coach hundreds of additional people, but I'd hope that people in the informal EA community (at least those who are pretty familiar with the landscape) would spend time giving advice.

Perhaps some of the available resources for finding one's next move aren't well-known enough? If you've ever wanted to do EA work, didn't know what to do next, and couldn't find a good way to learn about your options, I'd appreciate hearing your story!

--

Regarding this quote:

I would consider it a good result if the "trail" behind the core of the effective altruism movement was dotted with structures and organizations working on highly impactful problems, even if those problems are no longer in first place in current prioritization.

This seems... kind of true already? It's hard to say what "current prioritization" entails, since "global health and poverty" gets more money from the EA community and has more open jobs than any other major area, but some of the largest EA organizations are more focused on long-term work. But since most people think of LTF as the "current" priority, I'll use global poverty as an example.

There are plenty of active, even thriving organizations working on global health/poverty that have strong connections to EA. 80,000 Hours' job board lists dozens of positions in the area, and Tom Wein's collection of job boards features hundreds more (not all of those jobs are necessarily "EA-aligned", but many are, and even organizations that aren't maximally effective may offer great learning opportunities and much higher impact than an "average" job).

A job at the United Nations (there are a ton of those at the first non-80K link I clicked on from Tom Wein's list) may not have the same kind of prestige as an Open Phil job, but I still can't imagine meeting someone who works for the UN at EA Global and not having (a) a bunch of questions I'm eager to ask them, and (b) respect for their perspective on global development causes.

Jan: How might the "trail" you envision look different than what we have now? Is there some cause you're thinking of that doesn't have good organizations/structures because it is "no longer first place"? (If the argument was "there should be more orgs working on promising-but-seldom-prioritized topics like mental health", I think I'd be more in agreement.)

--

Also, this looks like a typo:

For example, part of the answers to the question “how to influence the long-term future” depend on the extent to which the world is world, or random, or predictable.

Open Thread #44

2019-03-06T09:27:58.701Z · score: 10 (4 votes)
Comment by aarongertler on Making discussions in EA groups inclusive · 2019-03-05T04:49:45.763Z · score: 4 (10 votes) · EA · GW

That's a reasonable objection. I wouldn't mind seeing even a non-nuanced response (e.g. "I think this post undervalues the utility of X compared to Y") rather than no response, but many other readers don't share my preferences and might take issue with that kind of comment (especially for this topic). And of course, mobile users are especially disadvantaged when it comes to comment-writing.

Still, if someone downvotes and wants to do something helpful for others in the same situation, creating one critical response that can then be upvoted (showing the relative popularity of objection X vs. objections Y, Z, etc.) seems unusually valuable.

EA Forum Prize: Winners for January 2019

2019-02-22T22:27:50.161Z · score: 30 (16 votes)

The Narrowing Circle (Gwern)

2019-02-11T23:50:45.093Z · score: 25 (12 votes)

What are some lists of open questions in effective altruism?

2019-02-05T02:23:03.345Z · score: 22 (12 votes)

Are there more papers on dung beetles than human extinction?

2019-02-05T02:09:58.568Z · score: 14 (9 votes)

You Should Write a Forum Bio

2019-02-01T03:32:29.453Z · score: 21 (15 votes)

EA Forum Prize: Winners for December 2018

2019-01-30T21:05:05.254Z · score: 46 (27 votes)

The Meetup Cookbook (Fantastic Group Resource)

2019-01-24T01:28:00.600Z · score: 15 (10 votes)

The Global Priorities of the Copenhagen Consensus

2019-01-07T19:53:01.080Z · score: 43 (26 votes)

Forum Update: New Features, Seeking New Moderators

2018-12-20T22:02:46.459Z · score: 23 (13 votes)

What's going on with the new Question feature?

2018-12-20T21:01:21.607Z · score: 10 (4 votes)

EA Forum Prize: Winners for November 2018

2018-12-14T21:33:10.236Z · score: 49 (24 votes)

Literature Review: Why Do People Give Money To Charity?

2018-11-21T04:09:30.271Z · score: 23 (10 votes)

W-Risk and the Technological Wavefront (Nell Watson)

2018-11-11T23:22:24.712Z · score: 8 (8 votes)

Welcome to the New Forum!

2018-11-08T00:06:06.209Z · score: 13 (8 votes)

What's Changing With the New Forum?

2018-11-07T23:09:57.464Z · score: 17 (11 votes)

Book Review: Enlightenment Now, by Steven Pinker

2018-10-21T23:12:43.485Z · score: 18 (11 votes)

On Becoming World-Class

2018-10-19T01:35:18.898Z · score: 17 (11 votes)

EA Concepts: Share Impressions Before Credences

2018-09-18T22:47:13.721Z · score: 9 (6 votes)

EA Concepts: Inside View, Outside View

2018-09-18T22:33:08.618Z · score: 2 (1 votes)

Talking About Effective Altruism At Parties

2017-11-16T20:22:46.114Z · score: 8 (8 votes)

Meetup : Yale Effective Altruists

2014-10-07T02:59:35.605Z · score: 0 (0 votes)