Comment by david_moss on Announcing EA Hub 2.0 · 2019-04-09T18:12:34.695Z · score: 3 (2 votes) · EA · GW

It might also be useful to ping local leads if a new person registers and indicates that they haven't attended a local group event.

Thanks! We previously had an option for people taking the EA Survey to opt in to be informed if there was a nearby local group, or to be put in contact with the organiser. We'll definitely consider including this in future surveys. Allowing people to indicate specifically which local group they attend also sounds potentially useful.

Comment by david_moss on Announcing EA Hub 2.0 · 2019-04-09T01:42:39.360Z · score: 7 (5 votes) · EA · GW

Thanks for asking. Yes, searching by cause area, availability for hiring, volunteering or speaking, and by specific skills are all expected to be added to the Hub within a matter of weeks.

Comment by david_moss on SHOW: A framework for shaping your talent for direct work · 2019-03-13T21:17:48.203Z · score: 19 (10 votes) · EA · GW

I'm broadly sympathetic to this view, though I think another possibility is that people want to maximise personal impact, in a particular sense, and that this leads to optimising for felt personal impact more than actually optimising for amount of overall good produced.

For example, in the context of charitable donations, people seem to strongly prefer that their donation specifically goes to impact-producing things rather than overhead that 'merely' supports impact-producing things, and that someone else's donation goes to cover the overhead (Gneezy et al., 2014). But, of course, in principle, these scenarios are exactly functionally equivalent.

In the direct work case, I imagine that this kind of intrinsic preference for specifically personal impact, a bias towards over-estimating the importance of impact which individuals themselves bring about, and signalling/status considerations and other extraneous motivations may all play a role.

Comment by david_moss on Identifying Talent without Credentialing In EA · 2019-03-13T19:06:37.586Z · score: 10 (5 votes) · EA · GW

I can't reply on behalf of Peter, but I would imagine the following:

  • Individuals at companies choose to hire for reasons other than expected performance (e.g. it's a publicly defensible decision to hire someone with recognized credentials and track record, whereas it's not publicly defensible to hire someone who lacks those but who otherwise seems like they'd perform really well). See general discussion of the signalling value of education.
  • Individuals at companies are bad at hiring for expected performance: e.g. relying on things which the evidence suggests don't predict job performance well (such as subjective impressions in an unstructured interview) and (possibly) credentials.
  • Many companies in the world can in theory hire people with 10-year track records doing similar roles in similar companies. People hiring for EA researcher roles typically can't find anyone with a 10-year track record in similar work, and even if you relax the assumptions somewhat, they can still find far fewer people with any kind of track record in similar work.
  • The competencies Peter is hiring for may be more test-taskable than many that companies are hiring for. e.g. creating a cost-effectiveness model may be a better predictor of performance at creating cost-effectiveness models than the best available test tasks for "be an executive" or "manage HR."
Comment by david_moss on EA Survey 2018 Series: Donation Data · 2019-03-09T00:01:37.571Z · score: 4 (3 votes) · EA · GW

Thanks for your comment Elizabeth.

The axis was just mislabelled (one missing 0). We updated the graph to fix that.

As to the trendline, we just used a line of best fit, which assumes a linear relationship. The low R^2 (~30%) of this linear Donations~Income regression explains why it "looks a bit weird". It was used as an easy-to-interpret visual that depicted a simplified relationship between income and donations, but one which demonstrated the correct direction of effect. This does have the disadvantage of being prone to overfitting, and, as we noted, "there are some large outliers driving this very strong relationship". We might expect a better fit for a nonlinear relationship; however, the later analysis, with differing linear responses for different donor groups, was a reasonable fit.
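For readers who want to see what this looks like concretely, here is a minimal sketch of fitting a line of best fit and computing its R^2. The data here are synthetic placeholders, not the survey's actual dataset or variable names:

```python
# Illustrative only: synthetic data standing in for a Donations ~ Income
# relationship; all numbers here are made up for the example.
import numpy as np

rng = np.random.default_rng(0)
income = rng.lognormal(mean=11, sigma=0.6, size=500)       # hypothetical incomes
donations = 0.05 * income + rng.normal(0, 2000, size=500)  # noisy linear response

# Ordinary least-squares line of best fit.
slope, intercept = np.polyfit(income, donations, deg=1)
predicted = slope * income + intercept

# R^2: the share of variance in donations explained by the linear fit.
ss_res = np.sum((donations - predicted) ** 2)
ss_tot = np.sum((donations - np.mean(donations)) ** 2)
r_squared = 1 - ss_res / ss_tot
print(slope, r_squared)
```

A low R^2, as in the survey regression, is compatible with the fitted line still showing the correct direction of the relationship.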

Comment by david_moss on EA Survey 2018 Series: Where People First Hear About EA and Influences on Involvement · 2019-02-23T18:59:04.347Z · score: 6 (4 votes) · EA · GW

Thanks Ben!

Yes, I think next year one thing we will likely do is include some questions tracking EA knowledge (of specific concepts), which we included in the last Local Groups Survey, but have not included in an EA Survey thus far, as well as some other measures like "Have you read [such and such book]."

Something else we looked at, but which didn't really fit into the above post anywhere, was where supporters of AI/Long-Term Future causes in general first heard about EA. We basically just found that a disproportionately large number first came from LessWrong (26% of AI supporters, compared to 12% of the sample as a whole), and not much difference anywhere else (except commensurately lower percentages from other sources). For Poverty (as you might expect) there was pretty much just the converse picture, but with smaller differences.


Comment by david_moss on Open Thread: What’s The Second-Best Cause? · 2019-02-20T17:49:33.208Z · score: 23 (11 votes) · EA · GW

Interesting question!

EA Survey Cause Selection data somewhat speaks to this. One difference is that we didn't do forced ranking on the cause prioritisation scale, e.g. people could rate more than one cause as "near top priority," but we can still compare the % of people who selected each cause as "near top priority" (the second highest ranking that could be given).

Below I show what % of people selected each cause as "near top" priority for those who selected AI, Poverty or Animal Welfare as "top priority" (I could do this for the other causes on request).

As you might expect, people who rate AI as top are more inclined to rate other LTF/x-risk causes as near top priority and more people who rate Poverty as top, rate Climate Change as near top (these tended to follow similar patterns in the analyses in our main report on this). Among people who selected Animal Welfare as top, the largest number selected Poverty as near top priority.

Notably, Biosecurity appears as the cause most selected as "near top" by AI advocates and the second most selected cause for those who rate Poverty top. This is in line with the results discussed in the main post, where Biosecurity received the highest % of "near top" ratings of any cause (slightly higher than Global Poverty) though very low numbers of "top priority" ratings, meaning that it is only middle of the pack (5/11) in terms of "top or near top priority" ratings.
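The conditional breakdown described above can be sketched as follows. The toy data and column names are invented for the example, not the survey's actual data:

```python
# Toy illustration: within each "top priority" group, what share of
# respondents rate each other cause "near top priority"?
# Data and column names below are invented.
import pandas as pd

df = pd.DataFrame({
    "top_cause": ["AI", "AI", "Poverty", "Poverty", "Animals"],
    "near_top": [["Biosecurity"], ["Biosecurity", "Poverty"],
                 ["Biosecurity"], ["Climate Change"], ["Poverty"]],
})

# One row per (respondent, near-top selection), since "near top" is multi-select.
exploded = df.explode("near_top")
counts = pd.crosstab(exploded["top_cause"], exploded["near_top"])

# Normalise by group size: fraction of each top-priority group's respondents
# who selected each cause as near-top.
group_sizes = df["top_cause"].value_counts()
share = counts.div(group_sizes, axis=0)
print(share)
```

Because "near top" is multi-select, rows need not sum to 1: a respondent can count towards several causes at once.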

Comment by david_moss on EA Survey 2018 Series: Where People First Hear About EA and Influences on Involvement · 2019-02-19T19:22:27.938Z · score: 1 (1 votes) · EA · GW

Thanks! Fixed (I hope).

Comment by david_moss on Does giving to charity make it more likely you‘ll be altruistic in future? · 2019-02-14T22:48:24.762Z · score: 1 (1 votes) · EA · GW

That seems like one example that would fall within this, yes.

Comment by david_moss on Does giving to charity make it more likely you‘ll be altruistic in future? · 2019-02-14T21:08:46.933Z · score: 5 (5 votes) · EA · GW

This (2015) review reports that:

individuals are more likely to exhibit consistency when they focus abstractly on the connection between their initial behavior and their values, whereas they are more likely to exhibit licensing when they think concretely about what they have accomplished with their initial behavior—as long as the second behavior does not blatantly threaten a cherished identity

So broadly speaking, I would expect that acts which make public (or just make privately salient to you) a particular moral identity (as a person who acts well) would increase moral consistency effects, whereas acts which emphasise the amount of good you have done would increase licensing effects.

Comment by david_moss on The Narrowing Circle (Gwern) · 2019-02-12T21:37:20.126Z · score: 15 (8 votes) · EA · GW

Much discussion of Moral Circle Expansion seems hampered by lack of conceptual clarity about what the Moral Circle means.

There are a lot of distinctions that need to be drawn, but here are two positions on one dimension:

  1. The moral circle merely refers to which (groups or types of) entities are viewed as possible targets of moral regard
  2. The moral circle refers to the amount of actual moral concern granted to such entities

A lot more distinctions should be drawn on this dimension alone (e.g. for "actual moral concern" are we interested in abstract attitudes of concern, actual amount of effort expended, or actual treatment extended), but even these suffice for now.

On the first view, which seems somewhat closer to original uses of the term, it does seem like retrenchment of the Moral Circle should be expected to be quite rare, at least once you reach contexts like our own (in WEIRD societies) where there are extremely prevalent memes about at least potentially considering entities as possible moral targets if they might be persons in any sense (or more generally in contexts where the conditions for considering the possibility of including some group in the moral circle are as extensive and plural as they are now). It seems relatively hard for groups to fall entirely out of the moral circle in the first sense, in such cases, except in cases like those you mention where we decide that certain entities don't exist or aren't sentient.

With the more expansive second sense of Moral Circle (which seems to be what people are using), where all that is required for Moral Circle expansion/retraction is an increase or reduction in moral concern extended (as seems to be implied by examples such as more/less care being granted to the elderly and so on), it seems like the Moral Circle should be expected to be expanding and retracting near constantly on an individual or group basis. This is especially so if we understand degree of moral concern as meaning the actual extent to which needs are weighted and help extended (in which case this will, almost necessarily, be pervaded by tradeoffs in a near zero-sum fashion), which is why drawing further distinctions within this category is so important.

EA Survey 2018 Series: Where People First Hear About EA and Influences on Involvement

2019-02-11T06:05:05.829Z · score: 28 (16 votes)

EA Survey 2018 Series: Group Membership

2019-02-11T06:04:29.333Z · score: 34 (13 votes)
Comment by david_moss on Disentangling arguments for the importance of AI safety · 2019-01-23T17:34:59.672Z · score: 3 (3 votes) · EA · GW

And this proliferation of arguments is (weak) evidence against their quality: if the conclusions of a field remain the same but the reasons given for holding those conclusions change, that’s a warning sign for motivated cognition (especially when those beliefs are considered socially important).

I'm not sure these considerations should be too concerning in this case for a couple of reasons.

I agree that it's concerning where "conclusions... remain the same but the reasons given for holding those conclusions change" in cases where people originally (putatively) believe p because of x, then x is shown to be a weak consideration and so they switch to citing y as a reason to believe p. But from your post it doesn't seem like that's necessarily what has happened, rather than a conclusion being overdetermined by multiple lines of evidence. Of course, particular people in the field may have switched between some of these reasons, having decided that some of them are not so compelling, but in the case of many of the reasons cited above, the differences between the positions seem sufficiently subtle that we should expect cases of people clarifying their own understanding by shifting to closely related positions (e.g. it seems plausible someone might reasonably switch from thinking that the main problem is knowing how to precisely describe what we value to thinking that the main problem is not knowing how to make an agent try to do that).

It also seems like a proliferation of arguments in favour of a position is not too concerning where there are plausible reasons why we should expect several of the considerations to apply simultaneously. For example, you might think that any kind of powerful agent typically presents a threat in multiple different ways, in which case it wouldn't be suspicious if people cited multiple distinct considerations as to why they were important.

Comment by david_moss on EA Survey 2018 Series: Cause Selections · 2019-01-19T18:07:58.311Z · score: 7 (3 votes) · EA · GW

I think you can get a very rough sense of possible changes by comparing the results from different years (as in the first two graphs in the post), but given the difficulties in interpreting these differences I would be wary of presenting these as % changes. Aside from possible differences in the sample across different years, changing categories for causes would also obviously distort things (we start with a fairly strong presumption against changing categories for this reason, but in some cases, the development of Mental Health as a field being one, it's unavoidable).

Comment by david_moss on EA Survey 2018 Series: Cause Selections · 2019-01-19T16:52:01.075Z · score: 4 (3 votes) · EA · GW

Yeh, I certainly think this would be valuable, although it would need to be weighed against the fact that we already have more than 10 causes listed, which may be pushing it. We may be able to accommodate this by splitting out the questions into questions about broader cause areas and then about more specific causes.

EA Survey 2018 Series: Cause Selections

2019-01-18T16:55:31.074Z · score: 65 (25 votes)
Comment by david_moss on EA Survey 2018 Series: Donation Data · 2018-12-28T18:48:23.475Z · score: 1 (1 votes) · EA · GW

Thanks for the suggestion! That seems likely to be at least one of the things that is being picked up by the 'financial constraint' responses.

Comment by david_moss on EA Survey 2018 Series: Donation Data · 2018-12-10T23:34:58.338Z · score: 3 (3 votes) · EA · GW


  1. Were income numbers pre or post-tax?

All pre-tax.

  2. Do you have a number for average earnings of non-students who are earning to give? $52,000 is a pretty low number for that category.

The numbers are likely lowered (as they were elsewhere) by a lot of fairly new, lower-earning/donating people who are just starting out on that career path. Median donations for (non-student) E2G respondents were $3,000, on a median income of $70,000. Only people above the 63rd percentile in this category were earning more than $100,000.

  3. How did the survey define the difference between "earning to give" and "other", if at all?

These were just fixed response options without additional definition.

Comment by david_moss on EA Survey 2018 Series: Donation Data · 2018-12-10T19:37:54.650Z · score: 1 (1 votes) · EA · GW

Thanks Greg. These were selected a priori (though informed by our prior analyses of the data).

Due to missing data there was some difficulty doing stepwise elimination with the complete dataset. We've added a model including all interactions to the regression document. This had a slightly better AIC (3093 vs 3114).
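The kind of AIC comparison mentioned here can be sketched as below. The data and predictors are synthetic stand-ins, not our actual model or dataset; for Gaussian OLS, AIC = n·ln(RSS/n) + 2k up to an additive constant shared by both models, so only the difference between the two values is meaningful:

```python
# Illustrative sketch: comparing a main-effects OLS model against one with
# an interaction term by AIC. All data and variables here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 300
income = rng.normal(50, 15, n)
student = rng.integers(0, 2, n).astype(float)
# True model deliberately includes an income x student interaction.
donation = 0.1 * income - 2.0 * student + 0.1 * income * student + rng.normal(0, 2, n)

def ols_aic(X, y):
    """Fit OLS by least squares; return Gaussian AIC (up to a constant)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    k = X.shape[1] + 1  # regression coefficients plus the error variance
    return len(y) * np.log(rss / len(y)) + 2 * k

ones = np.ones(n)
main_effects = np.column_stack([ones, income, student])
with_interaction = np.column_stack([ones, income, student, income * student])

aic_main = ols_aic(main_effects, donation)
aic_interaction = ols_aic(with_interaction, donation)
print(aic_main, aic_interaction)  # lower AIC indicates the better trade-off
```

AIC penalises each extra parameter, so the interaction model only "wins" if the interaction improves fit by more than its added complexity costs.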

Comment by david_moss on EA Survey 2018 Series: Donation Data · 2018-12-10T17:57:59.130Z · score: 1 (1 votes) · EA · GW

The people who selected 'research' were disproportionately students compared to the other categories. Excluding all students across categories, 251 people selected research, and median income and donations were still significantly lower.

Comment by david_moss on EA Survey 2018 Series: Donation Data · 2018-12-10T17:48:34.622Z · score: 2 (2 votes) · EA · GW

Thanks. Updated.

EA Survey 2018 Series: Donation Data

2018-12-09T03:58:43.529Z · score: 81 (36 votes)
Comment by david_moss on EA Survey Series 2018 : How do people get involved in EA? · 2018-12-02T00:23:56.419Z · score: 3 (2 votes) · EA · GW

Thanks Ben. Yeh, this is 3 people in 2009 and 3 people in 2010 (out of 2473 responses to these questions overall). There are a handful of similar errors for Doing Good Better. Every year, there are a few people who seem to get the years wrong in this way (alongside a lot of responses saying explicitly that they don't remember).

Anecdotally, (both in the survey and elsewhere) I find a surprising number of people confuse CEA, 80K and GWWC (not to mention, Rethink Charity, its various projects and Charity Science).

Comment by david_moss on From humans in Canada to battery caged chickens in the United States, which animals have the hardest lives: results · 2018-11-30T01:32:56.863Z · score: 3 (3 votes) · EA · GW

Thanks Ben! Corrected: we certainly agree that there are many more bugs than factory farmed fish.

Comment by david_moss on EA Survey Series 2018: Subscribers and Identifiers · 2018-11-26T22:36:32.387Z · score: 7 (4 votes) · EA · GW

Agreed. As per my reply to you here, we're still going to talk about the influence of different levels of involvement with regard to cause selection, and in a post addressing your question about levels of involvement and the different routes by which people get involved in EA.

Comment by david_moss on EA Survey Series 2018: Subscribers and Identifiers · 2018-11-26T22:25:07.706Z · score: 3 (2 votes) · EA · GW

Thanks Ben. I totally agree and we're going to go into this a lot more in the Cause Preference post.

Comment by david_moss on EA Survey Series 2018 : How do people get involved in EA? · 2018-11-23T16:36:53.625Z · score: 2 (2 votes) · EA · GW

Thanks Ben, this seems like a great suggestion. We're going to be talking about different levels of engagement a lot in the subsequent posts in the series (with regard to cause preference in particular), but will make sure to put together an analysis on this specifically.

Comment by david_moss on Is EA Growing? Some EA Growth Metrics for 2017 · 2018-11-21T19:47:56.872Z · score: 3 (2 votes) · EA · GW

the sum of local EA group facebook numbers might serve as a good proxy for the size of the EA movement per se

This is an interesting proposition, but one thing that will limit its usefulness, I think, is that lots of EAs are members of multiple local group Facebook groups, presumably either to show support or out of interest in their content. Aside from that, many members of the online groups appear to be not engaged in the local EA community (and perhaps not really engaged in EA at all): for example, EA London has around 2000 members of the Facebook group but many fewer people who are actively engaged and attend events and so on.

It could also be combined with other metrics, e.g. with impact (e.g. GiveWell donations, 80k Hours career changes) to assess communications effectiveness, or with time (of the local EA team) and cost invested to assess operational effectiveness. If compared across local groups, the metrics would highlight local success stories and potentially where groups might need help.

The Local Groups Survey did this to some extent: measuring (self-reported) number of group members, Pledges, Career changes, funds raised or donations influenced, among other things. We don't publicly release a breakdown of particular groups, of course, but we did look at the correlations between different variables and performance on different metrics. As you'd expect there was a fairly good correlation between success on different metrics, though with plenty of exceptions. I agree it would be valuable to have more systematic investigation of group performance of this kind to identify trends and where things seem to be working particularly well or not well.

Comment by david_moss on EA Survey Series 2018 : How do people get involved in EA? · 2018-11-20T21:50:08.517Z · score: 6 (3 votes) · EA · GW

Hi Ben,

Thanks for the comment!

We will definitely keep refining the categories year-by-year depending on which seem more or less significant.

“The Sam Harris and Joe Rogan podcasts were done by Will as part of the promotion campaign for DGB while he was working at CEA/80k so could arguably be coded as DGB/CEA/80k”

I entirely agree that Will’s podcasts can be counted as CEA/80K/DGB/Will in terms of assigning credit.

I’m not sure what that tells us about how we ought to design the survey though. In terms of coding open comment responses: as I noted, how to classify people’s open comment answers is somewhat fuzzy; many comments ambiguously fit multiple different options, and those numbers can’t be directly compared to the fixed category responses. Your podcast case is a perfect example since, as you note, if someone writes in “Joe Rogan Podcast” it could be classified as ‘DGB’, ‘CEA’, ‘80K’, ‘Podcast’ or ‘Will’ (or depending on our interests, we could code it as ‘mass media outreach’ or ‘online’ and so on). I could go back through the open comments and try to code things as CEA or 80K related (per your specification), but it will be a pretty vague and heterogeneous category with a lot of edge cases like these, which will make it hard to interpret. Explicit references to existing EA orgs or Will etc. were coded as such.

Also, crucially, a lot of the comments weren’t specific enough to code in that way. In the specific case of Sam Harris podcasts, for example, at least one person mentioned the Peter Singer episode (no-one explicitly mentioned Eliezer’s AI episode that I saw, but in principle some people might have first heard through that one), which means technically we can’t code every reference to Sam Harris’ podcast as Will’s, though I think it’s entirely reasonable of you to assume that many of the non-specific comments were referring to Will’s. In terms of refining the categories: note that DGB and 80K were already available as options and respondents chose to select “Other” and write in a response anyway. It would be difficult to change the fixed categories in such a way that people who would otherwise write “Joe Rogan podcast” would instead select “CEA”/“80K” etc. as a category. For one thing, people need to know (and remember) that, when they hear Will on a podcast, this should be counted as 80K or CEA or as part of the marketing for DGB or whichever category. They may also just feel that ‘Other: Podcast’ better captures their case than DGB or 80K. I think it would be difficult to specify all the things that could reasonably be counted as CEA’s work in a given fixed option (e.g. ‘80,000 Hours, inc. Doing Good Better, 80K Podcasts, Will MacAskill podcasts etc.’), but we’ll continue reviewing the results and trying to think of the best options.

I agree that in light of this year’s results, including Podcast as its own category next year may well make sense. We could also split out Book/Article etc. into multiple options. Though doing this might actually give us less information, in terms of attributing responses to CEA/80K/Will etc., as if people just select ‘Podcast’ as a fixed response, rather than writing in an ‘Other’ response, we won’t know which Podcast it was. We could include/require an open comment to specify further in addition to fixed responses, but requiring open comment would be significantly more onerous for respondents and might reduce response rate, so it’s a tricky balance. Alternatively we could include more fixed options (Podcast: Will; Podcast: 80K/Rob Wiblin; Podcast: Other, and so on), and ditto for other specific books, websites and so on (so far, Doing Good Better is the only one we’ve extended this treatment to), but of course that might make the question too unwieldy.

“Presumably some of the books / articles / talks are also other materials produced by the organisations or press coverage they sought out - does that seem right”

I definitely agree this is right in terms of assigning credit: no doubt EA orgs such as CEA should claim a lot of credit for vicariously bringing about lots of other outcomes (e.g. lots of Personal Contacts are presumably influenced by the prior work of EA orgs), but again, I don’t think there’s a way to capture that in people’s First Heard responses. And again, when open responses explicitly referred to an EA org, then it was coded accordingly, as well as being coded as ‘Book’ or ‘Article’.

“Likewise, maybe 'search' and 'facebook' should be removed as categories, because they're channels you use to find the other content listed. Presumably everyone who found out about EA through 'facebook' likely saw a post by a friend, so should be a personal referral, or saw a post by one of the orgs, so should be coded as an org.”

I think it’s pretty plausible that they should be removed next year (given the relatively small numbers attached to them). Although it doesn’t necessarily follow that the answers could neatly be split off into “personal contact” or “an EA org” because people may remember that they saw something on Facebook, but not which org was responsible (a lot of comments were pretty vague like this): so getting rid of “search” and “facebook” would probably mean a lot more “Other” responses.

“I'm also surprised to see isn't listed”

We’d certainly be open to including this website if you are particularly interested in it (although then there’s a question of which other specific EA websites should or shouldn’t be included). For what it’s worth we didn’t receive a single “Other” response referring to it that I saw, but maybe people classified this as “Search” or something else.

EA Survey Series 2018 : How do people get involved in EA?

2018-11-18T00:06:12.136Z · score: 48 (27 votes)
Comment by david_moss on Cross-post: Think twice before talking about ‘talent gaps’ – clarifying nine misconceptions, by 80,000 Hours. · 2018-11-14T19:17:58.866Z · score: 6 (4 votes) · EA · GW
What’s more, the costs of raising salaries to attract new staff are often large because in order to be fair, you may also need to raise salaries for existing staff. For instance, if you have 10 equally-paid staff, and raise salaries 10% to attract one extra person, that final staff member effectively costs double the average previous salary.

This also seems to have an impact on other orgs. I have lost count of the number of times I have heard people refer to "starting salaries at [large EA org with a >$1,000,000 annual budget]" as a baseline for salary expectations. This clearly has a disproportionately negative effect on smaller EA orgs or those trying to run more cheaply.

Comment by david_moss on On 'causes' · 2014-06-25T19:08:00.000Z · score: 0 (0 votes) · EA · GW

The question of whether "sub-goal" x is the "simplest" or best "proxy" for our more ultimate goals doesn't seem particularly useful and can be highly misleading, as in the example you chose. You conclude that promoting animal welfare is very probably not the best cause (because promoting empathy probably dominates it as a proxy), whereas we can't say the same for promoting human welfare. But it could still be the case that promoting animal welfare is a better proxy than human welfare for far future flourishing, even though there's a yet better intermediary in the case of animal welfare. The problem is that causes can be described in multiple ways, and we can generate multiple conflicting but practically uninformative statements about proxies and causes.