EA Survey 2019 Series: How EAs Get Involved in EA 2020-05-21T16:28:12.079Z · score: 103 (32 votes)
Empathy and compassion toward other species decrease with evolutionary divergence time 2020-02-21T15:31:26.309Z · score: 37 (17 votes)
EA Survey 2018 Series: Where People First Hear About EA and Influences on Involvement 2019-02-11T06:05:05.829Z · score: 34 (17 votes)
EA Survey 2018 Series: Group Membership 2019-02-11T06:04:29.333Z · score: 34 (13 votes)
EA Survey 2018 Series: Cause Selection 2019-01-18T16:55:31.074Z · score: 69 (29 votes)
EA Survey 2018 Series: Donation Data 2018-12-09T03:58:43.529Z · score: 82 (37 votes)
EA Survey Series 2018 : How do people get involved in EA? 2018-11-18T00:06:12.136Z · score: 50 (28 votes)
What Activities Do Local Groups Run 2018-09-05T02:27:25.247Z · score: 23 (18 votes)


Comment by david_moss on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-28T18:16:23.171Z · score: 6 (3 votes) · EA · GW

I just added a mention of this to the bullet point about these open comments.

Comment by david_moss on Some thoughts on deference and inside-view models · 2020-05-28T10:38:56.198Z · score: 11 (7 votes) · EA · GW

Most of us had a default attitude of skepticism and uncertainty towards what EA orgs thought about things. When I talk to EA student group members now, I don’t think I get the sense that people are as skeptical or independent-thinking.

I've heard this impression from several people, but it's unclear to me whether EAs have become more deferential, although it is my impression that many EAs are currently highly deferential. It seems quite plausible to me that it is merely more apparent that EAs are highly deferential right now, because the 'official EA consensus' (i.e. longtermism) is more readily apparent. I think this largely explains the dynamics highlighted in this post and in the comments. (Another possibility is simply that newer EAs are more likely to defer than veteran EAs and as EA is still growing rapidly, we constantly get higher %s of non-veteran EAs, who are more likely to defer. I actually think the real picture is a bit more complicated than this, partly because I think moderately engaged and invested EAs are more likely to defer than the newest EAs, but we don't need to get into that here).

My impression is that EA culture and other features of the EA community implicitly encourage deference very heavily (despite the fact that many senior EAs would, in the abstract, like more independent thinking from EAs). In terms of social approval and respect, as well as access to EA resources (like jobs or grants), deference to expert EA opinion (both in the sense of sharing the same views and in the sense of directly showing that you defer to senior EA experts) seem pretty essential.

I have the sense that people would now view it as bad behavior to tell people that you think they’re making a terrible choice to donate to AMF

Relatedly, my purely anecdotal impression is basically the opposite here. As EA has professionalised I think there are more explicit norms about "niceness", but I think it's never been clearer or more acceptable to communicate, implicitly or explicitly, that you think that people who support AMF (or other near-termist causes) probably just 'don't get' longtermism and aren't worth engaging with.

Comment by david_moss on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-27T18:36:59.190Z · score: 6 (3 votes) · EA · GW

Thanks Jon.

I agree Peter Singer is definitely still one of the most important factors, as our data shows (and as we highlighted last year). He's just not included in the bullet point in the summary you point to because that only refers to the fixed categories in the 'where did you first hear about EA?' question.

In 2018 I wrote "Peter Singer is sufficiently influential that he should probably be his own category", but although I think he deserves to be his own category in some sense, it wouldn't actually make sense to have a dedicated Peter Singer category alongside the others. Peter Singer usually coincides with other categories i.e. people have read one of his books, or seen one of his TED Talks, or heard about him through some other Book/Article or Blog or through their Education or a podcast or The Life You Can Save (org) etc., so if we split Peter Singer out into his dedicated category we'd have to have a lot of categories like 'Book (except Peter Singer)' (and potentially so for any other individuals who might be significant) which would be a bit clumsy and definitely lead to confusion. It seems neater to just have the fixed categories we have and then have people write in the specifics in the open comment section and, in general, not to have any named individuals as fixed categories.

The other general issue to note is that we can't compare the %s of responses to the fixed categories to the %s for the open comment mentions. People are almost certainly less likely to write in something as a factor in the open comment than they would be to select it were it offered as a fixed choice, but on the other hand, things can appear in the open comments across multiple categories, so there's really no way to compare numbers fairly. That said, we can certainly say that since he's mentioned >200 times, the lower bound on the number of people who first heard of EA from Peter Singer is very high.

Comment by david_moss on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-23T09:31:48.250Z · score: 2 (1 votes) · EA · GW

Thanks. That makes sense. I try not to change the historic categories too much though, since it messes up comparisons across years.

Comment by david_moss on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-22T21:30:02.842Z · score: 9 (5 votes) · EA · GW

I think it's fair to say (as I did) that LessWrong is often thought of as "primarily" online, and, given that, I think it's understandable to find it surprising that meetups are the second most commonly mentioned way people hear about EA within the LessWrong category (I would expect more comments mentioning SlateStarCodex and other rationalist blogs, for example). I didn't say that it was surprising that people mention LessWrong meetups tout court. I would expect many people, even among those who are familiar with LessWrong meetups, to be surprised at how often they were mentioned, though I could be mistaken about that.

(That said, a banal explanation might be that those who heard about EA just straightforwardly through the LessWrong forum, without any further detail, were less likely to write anything codable in the open comment box, compared to those who were specifically influenced by an event or HPMOR)

Comment by david_moss on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-22T18:36:42.518Z · score: 23 (7 votes) · EA · GW

Thanks Jonas!

You can see the total EAs (estimated from year first heard) and the annual growth rate here:

As you suggest, this will likely over-estimate growth due to greater numbers of EAs from earlier cohorts having dropped out.

Comment by david_moss on Applying speciesism to wild-animal suffering · 2020-05-18T12:45:40.501Z · score: 3 (2 votes) · EA · GW

I occasionally see people make this kind of argument in the case of children, based on similar arguments for autonomy (see youth rights), though I agree that more people seem to find the argument that we should intervene convincing in the case of young children (that said, from the perspective of the activist who holds this view, this just seems like inappropriate discrimination).

Comment by david_moss on Applying speciesism to wild-animal suffering · 2020-05-18T08:52:22.923Z · score: 4 (3 votes) · EA · GW

It seems worth noting that some people also make the argument that it is x-ist to "think we have the right to intervene in the lives of" x oppressed group. As such, they probably won't be convinced by the analogy (though I agree that some people do think that we should intervene in human cases relevantly similar to wild animal suffering cases and so will be convinced).

Comment by david_moss on 2019 Ethnic Diversity Community Survey · 2020-05-13T13:34:23.918Z · score: 11 (4 votes) · EA · GW

Thanks Jonas! We'll be discussing this in more detail in our forthcoming post on EA Engagement levels.

Comment by david_moss on 2019 Ethnic Diversity Community Survey · 2020-05-12T16:04:25.920Z · score: 1 (1 votes) · EA · GW

Thanks Vaidehi. I agree that this is still useful information, I was simply responding to your direct comparison to the EA Survey ("The survey seems to have achieved this goal [solicit the experiences of people from ethnic minorities in EA] compared to the annual EA survey, a much higher proportion of respondents to this survey were non-white.").

By the way if you have specific questions that you would like us to include in the EA Survey please let us know (though no hurry).

Comment by david_moss on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T11:47:36.631Z · score: 43 (14 votes) · EA · GW

Fortunately we have data on this (including data on different engagement levels using EA Forum as a proxy) going back to 2017 (before that the cause question had a multi-select format that doesn't allow for easy comparison to these results).

If we look at the full sample over time using the same categories, we can see that there's been a tendency towards increased support for long-termist causes overall and a decline in support for Global Poverty (though support for Poverty remains >50% higher than for AI). The "Other near term" trend goes in the opposite direction, but this is largely because this category combines Climate Change and Mental Health, and we only added Mental Health to the EAS in 2018.

Looking at EA Forum members only (a highly engaged ~20% of the EAS sample), we can see that there's been a slight trend towards more long-termism over time, though this trend is not so immediately obvious to see since between 2018 and 2019 EAs in this sample seem to have switched between AI and other long-termist causes. But on the whole the EA Forum subset has been more stable in its views (and closer to the LF allocation) over time.

Of course, it is not immediately obvious what we should conclude from this about dropout (or decreasing engagement) among non-longtermist people. We do know that many people have been switching into long-termist causes (and especially AI) over time (see below). But it's quite possible that non-longtermists have been dropping out of EA over a longer time frame (pre-2017). That said, I do think that the EA Forum proxy for engagement is probably more robust to these kinds of effects than the self-reported (1-5) engagement level, since although people might drop out of Forum membership due to disproportionately longtermist discussion, the Forum still has at least a measure of cause diversity, whereas facets of the engagement scale (such as EA org employment and EA Global attendance) are more directly filtering on long-termism. We will address data about people decreasing engagement or dropping out of EA due to perceiving EA as prioritizing certain causes too heavily in a forthcoming EA Survey post.

Both images from the EA Survey 2019: Cause Prioritization

Comment by david_moss on 2019 Ethnic Diversity Community Survey · 2020-05-12T08:53:40.517Z · score: 34 (9 votes) · EA · GW

Thanks for the post!

Just a comment on the reference to the EA Survey numbers. As I discussed here, because the EA Survey's question about race/ethnicity was multi-select, the percentages of respondents selecting each category can't be straightforwardly converted into percentages "identif[ying] with non-white race or ethnicity." We used multi-select to allow people to indicate complex plural identities, without forcing people to select from more fixed categories, but it doesn't allow for particularly simple bench-marking if you want a binary white/non-white distinction. In the next survey we'll consider adding a further question with more fixed options. It's more accurate to describe our data as showing that 13.1% of respondents did not indicate white identity at all, 80.5% exclusively selected white and a further 6.4% selected both white and other identities. Unfortunately, interpreting this last category in terms of an interest in a white/non-white binary is fraught, since it's unclear whether these individuals would identify as "mixed race", white, non-white or a "person of colour." Of note, despite Asian being the most common identity other than white selected for this question, the most common selection within this 'mixed' category was White and Hispanic (and the relationship between Hispanic identity and ethnicity/race is not straightforward).

As such, in a more expansive sense, the total "non-white" percentage may be higher, up to around 20%.

Regarding the broader claim that: "The goal of the survey was to solicit the experiences of people from ethnic minorities in EA. The survey seems to have achieved this goal compared to the annual EA survey, a much higher proportion of respondents to this survey were non-white."

I agree the percentage of non-white respondents is a bit higher in the dedicated "Ethnic Diversity" survey, but you had around 10x fewer ethnically diverse respondents expressing views overall, so this is not a clear win. The percentage difference could be explained entirely by white EAs thinking "This survey isn't really for me." A survey specifically about ethnic diversity seems particularly likely to skew towards respondents (both white and non-white) with a particular interest in the topic too, which is probably of particular significance when we're dealing with only around 30-36 respondents. That said, I agree this is an important source of more qualitative data than we could gather with the EA Survey!

Comment by david_moss on Racial Demographics at Longtermist Organizations · 2020-05-01T18:23:27.545Z · score: 23 (10 votes) · EA · GW

I calculated the percentage of POC by adding all the responses other than white, rather than taking 1 - % of white respondents... Thinking more about this in response to your question, it’d probably be more accurate to adjust my number by dividing by the sum of total responses (107%).

Yeh, as you note, this won't work given multiple responses across more than 2 categories.

I can confirm that if you look at the raw data, our sample was 13.1% non-mixed non-white, 6.4% mixed, 80.5% non-mixed white. That said, it seems somewhat risky to compare this to numbers "based on the pictures displayed on the relevant team page", since it seems like this will inevitably under-count mixed race people who appear white.
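To make the counting issue concrete, here is a minimal sketch (with made-up toy responses, not the actual survey data) of why summing per-category percentages and renormalising by the total (the "divide by 107%" approach) double-counts multi-select respondents, whereas classifying each respondent once recovers the exclusively-white / mixed / non-white breakdown described above:

```python
# Toy multi-select race/ethnicity responses (hypothetical, for illustration only).
respondents = [
    {"White"},
    {"White"},
    {"White"},
    {"White", "Hispanic"},  # a mixed respondent who selected two categories
    {"Asian"},
]
n = len(respondents)

# Per-category selection rates sum to more than 100% because of multi-select.
categories = {c for r in respondents for c in r}
category_pct = {c: sum(c in r for r in respondents) / n for c in categories}
total_pct = sum(category_pct.values())  # 1.2 here, i.e. 120%

# Naive approach: add up non-white category rates, then renormalise by the total.
# The mixed respondent is counted under "Hispanic" even though they also chose White.
naive_non_white = sum(p for c, p in category_pct.items() if c != "White") / total_pct

# Respondent-level classification: each person counted exactly once.
exclusively_white = sum(r == {"White"} for r in respondents) / n  # 0.6
non_white_only = sum("White" not in r for r in respondents) / n   # 0.2
mixed = 1 - exclusively_white - non_white_only                    # 0.2

print(naive_non_white)  # ~0.33, overstating the 0.2 who selected no white identity
print(exclusively_white, non_white_only, mixed)
```

With more than two categories and overlapping selections, no renormalisation of category percentages can reproduce the respondent-level split; you have to classify individual response sets, as in the raw-data figures quoted above.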

Comment by david_moss on Racial Demographics at Longtermist Organizations · 2020-05-01T17:36:25.843Z · score: 21 (11 votes) · EA · GW

Thanks for your post!

Unless I am missing something about your numbers, I think the figures you have from the EA Survey might be incorrect. The 2019 EA Survey was 13% non-white (which is within 0.1% of the figure you find for longtermist orgs).

It seems possible that, although you've linked to the 2019 survey, you were looking at the figures for the 2018 Survey. On the face of it, this looks like EA is 78% white (and so, you might think, 22% non-white), but those figures don't account for people who answered the question but who declined to specify a race. Once that is accounted for the non-white figures are roughly 13% for 2018 as well.

Comment by david_moss on Why I'm Not Vegan · 2020-04-10T18:08:50.049Z · score: 23 (11 votes) · EA · GW

We actually have some survey data on how the broader non-EA population thinks about moral tradeoffs between humans and non-human animals.

SlateStarCodex reported the results of a survey we (at Rethink Priorities) ran attempting to replicate a survey he and another commenter ran, asking people what number of animals of different species are of the same moral value as one adult human i.e. higher numbers means animals have lower value. Our writeup, which goes into a lot more detail about the interpretation and limitations of this data is forthcoming.

If you look at the column for 'Rethink Priorities (inclusive)' (which I think is the most relevant), you'll see the median values given were:

  • Pigs: 75
  • Chickens: 1000
  • Cows: 75

Your numbers mostly ascribe lower value to non-human animals than the median in our sample (an online US-only sample from Prolific). Of course, the question we asked was for a pure comparison of moral value, not adjusted for how bad the conditions are that each species face in factory farms. But I would have thought that this should mean that the answers given to your question would be lower rather than higher. It would be interesting to know roughly what your pure moral value tradeoffs would be, if you have them.

Comment by david_moss on Why not give 90%? · 2020-03-26T13:25:00.882Z · score: 4 (4 votes) · EA · GW

There is detailed discussion of some closely related issues in this book chapter in the Effective Altruism: Philosophical Issues book edited by Hilary Greaves and Theron Pummer. The author discusses these in less detail in this post on PEA Soup.

I also ran a small survey to test effective altruists' views on the thought experiments discussed. I haven't gotten around to writing it up, due to more pressing tasks. I could also share the survey again here, if people are particularly interested.

Comment by david_moss on Poll - what research questions do you want me to investigate while I'm in Africa? · 2020-03-03T10:53:36.780Z · score: 3 (4 votes) · EA · GW

I don't think this response makes much sense. Many of the questions listed are of very niche EA interest. For example, the number of researchers in the whole world looking at Wild Animal Suffering (through an EA lens) is surely in the 10s. The number of these who are specifically on the ground in Rwanda making notes on the experiences of wildlife is, it should go without saying, close to zero. Of course, there are many zoologists in the world, but as EA WAW researchers often find, it is hard to apply much of this to research that is interested in welfare specifically.

The same goes for site visits to factory farms. First hand information about actual conditions on factory farms is notoriously hard to come by and many EA discussions have noted that we lack information about conditions as they may vary across other parts of the world. It would be very surprising if there were a plethora of animal welfare first hand case studies of conditions in farms across different parts of Africa that we haven't noticed before.

Most of the rest of the questions just seem to involve speaking to locals about their perspectives while in different parts of Africa. While I agree that there is, of course, already qualitative research somewhat related to many of these questions, it's hard to see the rationale for not speaking to Africans about their perspectives and only reading qualitative reports second hand from the developed world.

Comment by david_moss on How much will local/university groups benefit from targeted EA content creation? · 2020-02-22T12:22:19.886Z · score: 1 (1 votes) · EA · GW

There was no way to ask whether people knew about all the resources that currently existed (although in the next survey we could ask whether they know about the EA Hub's resources specifically). We do know from other questions in this survey and in 2017's that many group leaders are not aware of existing services in general though.

Comment by david_moss on How much will local/university groups benefit from targeted EA content creation? · 2020-02-20T09:04:34.532Z · score: 7 (5 votes) · EA · GW

The 2019 Local Group Organizers Survey found large percentages of organizers reporting that more "written resources on how to run a group" and "written resources on EA thinking and concepts" would be highly useful.

Comment by david_moss on Thoughts on electoral reform · 2020-02-18T19:53:52.862Z · score: 19 (11 votes) · EA · GW

It's great to see more reflection about approval voting and possible alternatives. I think the EA community should probably favour a lot more research into these alternatives before it invests resources in promoting any of these options.

Excessive political polarisation, especially party polarisation in the US, makes it harder to reach consensus or a fair compromise, and undermines trust in public institutions. Efforts to avoid harmful long-term dynamics, and to strengthen democratic governance, are therefore of interest to effective altruists.

I will note that many political theorists (e.g. agonistic theorists) think that reducing polarisation and increasing consensus should not be our goals in democracy and need not be positive things. This is especially so when increasing consensus and compromise is identified with "moderate" or centrist positions (which, as you note, could be construed as a bias).

Comment by david_moss on How do you feel about the main EA facebook group? · 2020-02-13T10:11:16.073Z · score: 25 (12 votes) · EA · GW

I agree that the main EA Facebook group has many low quality comments which "do not meet the bar for intellectual quality or epistemic standards that we should have EA associated with." That said, it seems that one of the main reasons for this is that the Facebook group contains many more people with very low or tangential involvement with EA. I think we should be pretty cautious about more heavily moderating or trying to exclude the contributions of newer or less involved members.

As an illustration: the 2018 EA Survey found >50% of respondents were members of the Facebook group, but only 20% (i.e. 1 in 5) were members of the Forum. Clearly the Facebook group has many more users who are even less engaged with EA, who don't take the EA Survey. The forthcoming 2019 results were fairly similar.

At the moment I think the EA Facebook group plays a fairly important role alongside the EA Forum (which only a small minority of EAs are involved with) in giving people newer to the community somewhere where they can express their views. Higher moderation of comments would probably add to the pervasive sense (which we will discuss in a future EA Survey post) that EA is exclusive and elitist.

I do think it's worth considering whether low quality discussion on the EA Facebook group will cause promising prospective EAs to 'bounce' i.e. see the low quality discussion, infer that EA is low quality and leave. The extent to which this happens is a tricky broader question, but I'm inclined to hope that it wouldn't be too frequent since readers can easily see the higher quality articles and numerous Forum posts linked on Facebook and I would also hope that most readers will know that online discussion on Facebook is often low quality and not update too heavily against EA on the basis of it.

It also seems worth bearing in mind that since most members of the Facebook group clearly don't make the decision to move over to participating in the EA Forum, efforts to make the EA Facebook discussion more like the Forum may just put off a large number of users.

Comment by david_moss on Is vegetarianism/veganism growing more partisan over time? · 2020-01-24T17:12:04.639Z · score: 5 (4 votes) · EA · GW

I think this is a good explanation of at least part of the phenomenon. As you note, where we sample the general population and only 5% of people report being vegetarian or vegan, then even a small number of lizardpersons answering randomly, oddly or deliberately trolling could make up a large part of the 5%.

That said, I note that even in surveys which are deliberately solely targeting identified vegetarians or vegans (so 100% of people in the sample identified as vegetarian or vegan), large percentages then say that they eat some meat. Rethink Priorities has an unpublished survey (report forthcoming soon) which sampled exclusively people who have previously identified as vegetarian or vegan (and then asked them again in the survey whether they identified as vegetarian or vegan) and we found just over 25% of those who answered affirmatively to the latter question still seemed to indicate that they consumed some meat product in a food frequency questionnaire. So that suggests to me that there's likely something more systematic going on, where some reasonably large percentage of people identify as vegetarian or vegan despite eating meat (e.g. because they eat meat very infrequently and think that's close enough). Of course, it's also possible that the first sampling to find self-identified vegetarian or vegans sampled a lot of lizardpersons, meaning that there was a disproportionate number of lizardpersons in the second sampling, meaning that there was a disproportionate number of lizardpersons who then identified as vegetarian or vegan in our survey. And perhaps lizardpersons don't just answer randomly but are disproportionately likely to identify as vegetarian or vegan when asked, which might also contribute.

Comment by david_moss on EA Survey 2019 Series: Geographic Distribution of EAs · 2020-01-23T17:12:59.462Z · score: 13 (4 votes) · EA · GW

I don't think that really explains the observed pattern that well.

I agree that in general, people not appearing in the EA Survey could be explained either by them dropping out of EA or them just not taking the EA Survey. But in this case, what we want to explain is the appearance of a disproportionate number of people who took the EA Survey in 2018, not taking the EA Survey in 2019, among the most recent cohorts of EAs who took the EA Survey in 2018 (2015-2017) compared to earlier cohorts (who have been in EA longer).

The explanation that this is due to EAs disproportionately dropping out during their first 3 years seems to make straightforward intuitive sense.

The explanation that people who took the EA Survey in 2018 and joined within 2015-2017 specifically were disproportionately less likely to take the EA Survey in 2019 seems less straightforward. Presumably the thought is that these people might have taken the EA Survey once, realised it was too long or something, and decided not to take it in 2019, whereas people who joined in earlier years have already taken the EA Survey and so are less likely to drop out of taking it, if they haven't already done so? I don't think that fits the data particularly well. Respondents from the 2015 cohort would have had opportunities to take the survey at least 3 times, including 2018, before stopping in 2019, so it's hard to see why they specifically would be less likely to stop taking the EA Survey in 2019 compared to earlier EAs. Conversely, EAs from before 2015 all the way back to 2009 or earlier had at most 1 extra opportunity to be exposed to the EA Survey (we started in 2014), so it's hard to see why these EAs would be less likely to stop taking the EA Survey in 2019 having taken it in 2018.

In general, I expect the observation may have more than one explanation, including just random noise, but I think higher rates of dropout among particular more recent cohorts makes sense as an explanation, whereas these people specifically being more likely to take the EA Survey in 2018 and not in 2019 doesn't really.

Comment by david_moss on Growth and the case against randomista development · 2020-01-20T10:00:45.573Z · score: 3 (3 votes) · EA · GW

That's certainly true. I don't know exactly what they had in mind when they claimed that "most seem to be long-termists in some broad sense," but the 2019 survey at least has data directly on that question, whereas 2018 just has the best approximation we could give, by combining respondents who selected any of the specific causes that seemed broadly long-termist and Long Term Future lost out to Global Poverty using that method in both 2018 and 2019.*

*As noted in the posts, that method depends on the controversial question of what fine-grained causes should be counted as part of the 'Long Term Future' group. If Climate Change (the 2nd most popular cause in 2019, 3rd in 2018) were counted as part of LTF, then LTF would win by a mile. However, I am sceptical that most Climate Change respondents in our samples count as LTF in the relevant (EA) sense. i.e. normal (non-EA) climate change supporters who have no familiarity with LTF reasoning and think we need to be sustainable and think about the world 100 years or more in advance seem quite different from long-termist EAs (it seems they don't and generally would not endorse LTF reasoning about other areas). An argument against this is that we see from the 2019 analysis that people who selected Climate Change as a specific cause predominantly broke in favour of LTF when asked to select a broader cause area. I'm not sure how dispositive that is though. It seems likely to me that people who most support a specific cause other than Global Poverty (or Animals or Meta) would probably be more likely to select a broader, vaguer cause category, which their preferred cause could plausibly fit into (as Climate Change does into 'long term future/existential risk'), than one of the other specific causes, and as noted above, people might like the vague category of concern for the 'long term future' without actually supporting LTF the EA cause area. Some evidence for this comes from the other analyses in 2018 and 2019 which found that respondents who supported Climate Change were quite dissimilar from those who supported LTF causes in almost all respects (e.g. they tended to be newer to EA- very heavily skewed towards the most recent years- and less engaged with EA, generally following the same trends as Global Poverty and the opposite to AI, see here).

Comment by david_moss on Growth and the case against randomista development · 2020-01-16T11:16:35.915Z · score: 11 (8 votes) · EA · GW

Which cause is most popular depends on cause categorisation and most surveyed EAs seem to be long-termists in some broad sense. EA Survey 2018 Series: Cause Selection"

This is clearly fairly tangential to the main point of your post, but since you mention it, the more recent EA Survey 2019: Cause Prioritization post offers clearer evidence for your claim that most surveyed EAs seem to be long-termists, as 40.08% selected the 'Long Term Future / Catastrophic and Existential Risk Reduction' (versus 32.3% selecting Global Poverty) when presented with just 4 broad EA cause areas. That said, the claim in the main body of your text that "Global poverty remains a popular cause area among people interested in EA" is also clearly true, since Global Poverty was the highest rated and most often selected 'top cause' among the more fine-grained cause areas (22%).

Comment by david_moss on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-15T10:21:41.829Z · score: 10 (7 votes) · EA · GW

I have to wonder whether EAs voting on the Labour leadership is positive in expectation. A priori, I would have expected it would be, but to my surprise, the EAs I know personally whose views on Labour politics I also know have not (in my view) been generally more thoughtful or better informed than the average Labour party member (I have been a Labour party member for some years). Nor have their substantive views seemed better to me, though of course this is more controversial (and this fact leads me to reduce my confidence in my own views considerably). Notably, the above is drawing from a reference class of people who were already quite engaged with Labour politics; things may be different (and perhaps worse) for the class of EAs who were not Labour party members, but who were persuaded their vote would be valuable by a forum post.

It also seems possible that votes by EAs generally being positive in expectation holds true for general elections, where choices are more stark and there is generally more consensus among EAs, and their votes are being compared against a wider reference class, but does not hold for more select votes about more nuanced issues, comparing against groups of relatively engaged and informed voters.

Comment by david_moss on [Link] Aiming for Moral Mediocrity | Eric Schwitzgebel · 2020-01-03T18:44:03.592Z · score: 5 (4 votes) · EA · GW

The first two sentences of his article "Aiming For Moral Mediocrity" are:

I have an empirical thesis and a normative thesis. The empirical thesis is: Most people aim to be morally mediocre. [I'm including this as a general reference for other readers, since you seem to have read the article yourself]

I take the fact that people systematically evaluate themselves as being significantly (morally) better than average, as strong evidence against the claim that people are aiming to be morally mediocre. If people systematically believed themselves to be better than average and were aiming for mediocrity, then they could (and would) save themselves effort and reduce their moral behaviour until they no longer thought themselves to be above average.

Note that the evidence Schwitzgebel cites for his empirical thesis doesn't show that "People behave morally mediocre" any more than it shows that people aim to be morally mediocre: it shows people's behaviour goes up or down when you tell them that a reference class is behaving much better or worse, but not that most people's behaviour is anywhere near the mediocre reference point. For example, in Cialdini et al (2006), 5% of people took wood from a forest when told that "the vast majority of people did not" and 7.92% did when told that "many past visitors" had (which was not a significant difference, as it happened). Unfortunately, the reference points "vast majority" and "many" are vague, but it doesn't suggest that most people are behaving anywhere near the mediocre reference point.

I recognise that Schwitzgebel acknowledges this "gap" between his evidence and his thesis in section 4, but I think he fails to appreciate the extent of the gap (near total), or that the evidence he cites can actually be seen as evidence against his thesis, if we infer on the basis of these results that most people don't seem to be acting in line with the mediocre reference point.

In the "aiming for a B+" section you cite he actually seems to shift quite a bit to be more in line with my claim.

Here he suggests that "B+ probably isn’t low enough to be mediocre, exactly. B+ is good. It’s just not excellent. Maybe, really, instead of aiming for mediocrity, most people aim for something like B+ – a bit above mediocre, but shy of excellent." This is in line with my claim, that people take themselves to be above average morally and aim to keep sailing along at that level, but quite different from his claim previously that people "calibrate toward approximately the moral middle" and aim to be "so-so."

He reconciles this with the claim that people think of themselves as, and aim for, above average (and "good") by suggesting that "most people who think they are aiming for B+ are in fact aiming lower." The passage doesn't make entirely clear what he means by this.

In the first instance he seems to suggest that people's beliefs are just mistaken about where they are really aiming (he gives the example of a student who professes to aim for a B+, but won't work harder if they get a C). But I don't see any reason to think that people are systematically mistaken about what moral standard they are really aiming at.

However, in a later passage he says "when I say that people aim for mediocrity, I mean not that they aim for mediocrity-by-their-own rationalized-self-flattering-standards. I mean that they are calibrating toward what is actually mediocre." Elsewhere he also says "It is also important here to use objective moral standards rather than people’s own moral standards." It's slightly unclear to me whether he means to refer to what is mediocre according to objective descriptive standards of how people actually behave, or according to objective normative standards i.e. what (Schwitzgebel thinks) is actually morally mediocre. If it's the former, we are back to the claim that although people think they are morally good and think they are aiming for morally good behaviour (according to their standards), they actually aim their behaviour towards median behaviour in their reference class (which I don't think we have any evidence for). If it's the latter then it's just the claim that the level of behaviour that most people actually end up approximating is mediocre (according to Schwitzgebel), which isn't a very interesting thesis to me.

Comment by david_moss on The Center for Election Science Year End EA Appeal · 2020-01-03T08:36:44.135Z · score: 1 (1 votes) · EA · GW

Thanks for the clarification, I strongly agree with the position described in this comment.

Comment by david_moss on [Link] Aiming for Moral Mediocrity | Eric Schwitzgebel · 2020-01-03T08:29:53.879Z · score: 6 (5 votes) · EA · GW

The evidence cited (people's behaviour is influenced by the behaviour of their peers) doesn't offer any evidence in favour of the "moral mediocrity thesis" (people aim to be morally mediocre).

I find the "slightly better than average" thesis more likely: people regard themselves as better than average morally (as they do in other domains, but even more strongly). And this view has actual empirical support e.g. Tappin and McKay (2017).

Comment by david_moss on EA Survey 2018 Series: Donation Data · 2020-01-01T18:19:44.999Z · score: 1 (1 votes) · EA · GW

Thanks for asking. Unfortunately, there weren't any climate change specific charities among the 15 specific charities which we included as default options for people to write in their donation amounts. That said, among the "Other" write-in option, there were 42/474 (8.86%) mentions of Cool Earth, so that was clearly a popular choice. There were no other frequently mentioned Climate Change charities.

As it happens, people who selected Climate Change as their top cause area also donated substantially less (median $358).

Comment by david_moss on The Center for Election Science Year End EA Appeal · 2019-12-31T07:42:45.607Z · score: 4 (3 votes) · EA · GW

On future generations, I favor thinking about possible institutional reforms which directly incentivize greater regard for future generations.

I'm curious why you say this given that you earlier noted the problem that a large part of the electorate "are in all likelihood systematically mistaken about the sort of policies that would advance their interests." Making people give more regard to future generations seems to be of extremely unclear value if they are likely to be systematically mistaken about what would serve the interests of future generations. This seems like a consideration in favour of interventions which aim to improve the quality of decision-making (e.g. via deliberative democracy initiatives) vs those which try to directly make people's decisions more about the far future (although of course, these needn't be done in isolation). But perhaps I am simply misunderstanding what you mean by "directly incentivize greater regard for future generations"?

Comment by david_moss on We're Rethink Priorities. AMA. · 2019-12-18T18:53:58.386Z · score: 4 (3 votes) · EA · GW

It sounds like you're thinking mostly about the animal sentience research, where I know there has been a lot of engagement with outside academic experts, but fwiw the empirical studies I work on also received a lot of external review from academics. They are very overlapping in method and content with my academic work (indeed, one of these projects is an academic collaboration and the other was an academic project I had been working on previously, that I decided made more sense to do under Rethink Priorities) and also a lot of the researchers in the EAA have backgrounds as academic researchers, so it's quite easy to find relevant expertise.

Comment by david_moss on Interaction Effect · 2019-12-16T16:49:28.724Z · score: 21 (8 votes) · EA · GW

Let's say someone goes into strategic AI research at the Future of Humanity Institute because this is proposed to be one of the most impactful career paths there is. In aiming for that career this person relied on the labour of several teachers. When the researcher is sick, they rely on the labour of doctors...

This doesn't seem to pose so much of a problem if you are trying to rank what is most valuable on the margin. Suppose every human activity is dependent on having at least one doctor and at least one farmer producing food, such that these are completely necessary for any other job to take place. It doesn't follow that we couldn't determine which job it would be most valuable to have one additional person working in. For example, if we already have enough doctors or farmers, even if these jobs are entirely necessary, we could still say that it is more valuable for a further person to work in a different field.

I think you've basically captured this with your artist example, although it's worth noting explicitly that how important art is on average is different from its value on the margin: we could think that art, or being a doctor, or whatever, is the single most valuable human activity (on average) and still think that it would be more important for a particular person to go and work in another activity.

Comment by david_moss on Important EA-related questions EA would like to know from general public · 2019-12-15T19:31:05.160Z · score: 7 (5 votes) · EA · GW

I think questions about support for EA ideas in the general population would doubtless be interesting.

Unfortunately I think it is pretty difficult to ask questions about EA to the general public in an adequate manner. Since almost everyone is unfamiliar with EA ideas, statements of EA ideas are apt to be interpreted in line with more common folk ideas, rather than as expressing the EA ideas intended. For example, many statements of EA ideas ("We should only donate to the best effective charities", "We should do the most good we can do") can be interpreted completely platitudinously, so you find almost everyone agreeing with these statements even though almost no-one actually agrees with the ideas they are supposed to express. I think similar difficulties apply to asking whether people think those in the far future should be valued equally (see here and here).

Another specific problem is that almost no-one interprets "cost-effectiveness" correctly. I've run a number of studies examining how people think about cost-effectiveness in charitable decision-making, and I've found not only that most people naturally interpret "cost-effectiveness" in terms of overhead ratios, but that even if you stipulate what cost-effectiveness means, and look only at those people who pass multiple comprehension checks putatively indicating correct understanding of the definition, large percentages still cannot select the most "cost-effective charity" out of a pair: charity A, which saves more lives with a given sum of money, versus charity B, which saves fewer lives with the same sum but spends less on overhead costs.

I discuss this, and some of the things I broadly think a good operationalization of EA should include, here.

That said, I'd be interested if you would ask people whether they agree or disagree with some statements along the lines of: "Some charitable causes are objectively better than others." "You can't compare whether different charitable causes are better or worse than each other."

Comment by david_moss on EA Meta Fund November 2019 Payout Report · 2019-12-15T09:49:33.082Z · score: 12 (5 votes) · EA · GW

Being discussed a lot, or even receiving a lot of positive online comments, is not a good reason to receive funding. I think it's really important to keep a high bar for charity evaluation and not play favourites just because the charity was started by 'one of our own' or has attracted a lot of attention on the EA Forum.

I don't think the previous comment can charitably be read as saying that 'it's been much discussed, so it should be funded'. I read them as saying that they "feel frustrated by lack of feedback", because the project is "one of the most discussed" and they've "read most of the related discussions on the forum and haven‘t seen a case made why the project isn‘t as promising as it might sound" and yet it still "prominently struggles for funding."

Comment by david_moss on We're Rethink Priorities. AMA. · 2019-12-13T21:01:07.916Z · score: 11 (9 votes) · EA · GW

I think some of the main ways will be:

  • Generating new cause areas/interventions/charities
  • Moving investment away from ineffective interventions
  • Causing there to be a lot more active EA researchers

I don't think those will differ too much across the short/long term, except that shifting resources away from bad interventions may happen more in the short term.

Comment by david_moss on We're Rethink Priorities. AMA. · 2019-12-13T17:22:27.486Z · score: 10 (5 votes) · EA · GW

My ethical and philosophical views haven't changed a huge amount.

I've become even less confident in most EA interventions than I was (and I started out very unconfident). I think there are various plausible reasons why most EA activities could easily turn out to be net negative. I don't know whether I have become more or less confident about research specifically in recent years in absolute terms, but it's definitely become relatively more appealing (as a relatively robust strategy) as a result.

Comment by david_moss on We're Rethink Priorities. AMA. · 2019-12-13T17:13:37.180Z · score: 17 (7 votes) · EA · GW

Like a lot of EAs, I first became convinced of these ideas through Peter Singer.

I first read him, and Famine, Affluence and Morality in particular, when I was doing A Level religious studies in 2004 and from then on was convinced that we are obliged to give all excess wealth to the most effective charities (or do something else if that was more effective of course).

I then went to study Philosophy, directly inspired by this, and spent a lot of time telling anyone who would listen about these ideas with no effect whatsoever. I was particularly shocked and appalled that not only did none of the philosophers I encountered take these or any other actually-oriented-at-helping-the-world ideas seriously, but none seemed to take utilitarianism seriously. I was therefore pretty delighted to see Toby Ord announce his intention to give everything he earned over a certain amount to charity, since I thought all philosophers should be doing this, and I've been following EA ever since.

As to what now keeps me working on EA: it would be the awareness of the manifold terrible horrors (horrors so great that any individual experiencing them would pay almost any price to avoid them) constantly occurring in the world.

Comment by david_moss on We're Rethink Priorities. AMA. · 2019-12-13T15:51:20.469Z · score: 13 (7 votes) · EA · GW

I take moral uncertainty extremely seriously, but my most preferred theory is classical hedonic utilitarianism. My most prominent uncertainties are about non-utilitarian consequentialisms which would include non-experiential goods (not preference utilitarian) and downside focused views.

I'm almost entirely uncertain about prescriptive metaethics. As someone who's pretty Wittgensteinian, I'm inclined to see much debate between metaethical theories as confused. The metaethical views which (fwiw) strike me as most appealing don't have too much to differentiate them in practice, i.e. versions of softer realism or anti-realism which give a central role to what would actually be rationally endorseable by humans in certain conditions.

My descriptive metaethical views, however, are that folk metaethical discourse and judgement is almost entirely indeterminate with regards to philosophers' metaethical theories, i.e. there is no determinate fact of the matter as to whether folk views best fit realism/anti-realism, objectivism/relativism etc., and the folk evince a contextually and inter-personally variable mix of conflicting metaethical commitments or proto-commitments (e.g. Gill (2009), Loeb (2008), and their exchanges).

Although I think the view that there are no ethical obligations should be taken seriously, I certainly (inside view) view altruism as an obligation, and if I were convinced of anti-realism and/or moral nihilism, I think I would likely continue to view it in a very obligation-like way.

Comment by david_moss on We're Rethink Priorities. AMA. · 2019-12-13T15:30:00.660Z · score: 6 (4 votes) · EA · GW

I would be doing more of my academic research; at present Rethink Priorities definitely accounts for most of my time. I would probably also be doing more work for Rethink Charity and Charity Entrepreneurship, who I had been working for, but reduced/stopped my hours with in order to work more for Rethink Priorities.

Comment by david_moss on Local EA Group Organizers Survey 2019 · 2019-11-17T20:20:29.884Z · score: 13 (5 votes) · EA · GW

Since CEA can't constantly give personal support or feedback to all groups, I think CEA can instead help newer groups get connected with older groups to get feedback and advice from them. Or, newer groups can proactively seek out advice from more established groups.

I agree that this kind of scheme could be useful. Indeed, in this survey a number of organizers (8) noted that they’d like to see more communication between fellow organizers in the “other services or kinds of support that you would like to see" open comment question. Formal peer mentoring and group calls (of organisers from the same region or whose groups share other attributes) have been tried several times with varying results. I expect that direct support from CEA still has an important role to play for a variety of reasons: these calls may be especially reassuring to (some) organisers compared to calls from other organisers, a central coordinator is often going to be better placed to connect people with different resources, and it’s probably easier to ensure that direct calls actually continue happening than with a dispersed mentorship scheme, etc.

Here are questions I'd love to know the answers to - maybe some of these could be included in future surveys:

Incidentally we included both of these questions in the 2017 LGS, but they were cut due to space. We can certainly bear them in mind for the next LGS.

It would be great to see if there's a correlation between the EA outcome metrics with the number of hours per week spent community building.

In the 2017 data there were moderate, significant positive correlations between hours per week spent “organising EA activities” and how many members became “actively committed” to EA as a result of the group’s activities, counterfactual pledges, and EA-influenced career choices (all log transformed). It is difficult to infer much from this though, since it seems quite plausible that there could be reverse causation (people spending more time on larger, more active groups) or some more complex causal story, rather than more hours spent simply causing stronger outcomes.
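For readers less familiar with what correlating log-transformed variables involves, here is a minimal, hypothetical sketch in Python. The data below is entirely synthetic and merely stands in for the kind of right-skewed survey variables described above (hours per week organising, counts of actively committed members); it does not reproduce the 2017 survey data or its exact analysis.

```python
# Illustrative sketch only: synthetic stand-in data, not the 2017 LGS data.
import math
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical groups: weekly organising hours (right-skewed, so drawn
# from a lognormal) and a noisy outcome count that depends on them.
hours = [random.lognormvariate(1.0, 0.8) for _ in range(50)]
committed = [max(0.0, h * 0.5 + random.gauss(0, 2)) for h in hours]

# log1p handles zero counts gracefully; plain log would fail on zeros.
log_hours = [math.log1p(h) for h in hours]
log_committed = [math.log1p(c) for c in committed]

r = pearson(log_hours, log_committed)
print(f"correlation on log-transformed data: r = {r:.2f}")
```

The log transform compresses the long right tail of such variables, so the correlation is less dominated by a few very large groups; but as noted above, even a sizeable positive correlation here would not establish the direction of causation.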

What problems did you experience while organizing for your local group over the past year?

We explored this somewhat in the qualitative report from the Local Group Survey in 2017, which discussed organisers’ insecurities, difficulties with productivity and accountability and lack of impactful activities for members to do among various other problems.

We also asked people “What are the main challenges your group faces?” and coded these responses.

As the graph below shows, the most commonly cited challenge was difficulty with recruitment, closely followed by lack of time. Lack of funding, members leaving, lack of dedicated members and difficulty getting members actively engaged in high impact activities were also commonly mentioned.

This year’s survey found a similar pattern in responses to the question about reasons why organizers expected their group might end when the current organizers left: difficulties recruiting, members leaving and lack of time.

Comment by david_moss on Which Community Building Projects Get Funded? · 2019-11-16T10:39:04.400Z · score: 16 (9 votes) · EA · GW

The Local Group Organizers Survey is now out here.

I will add that looking at the % of groups in each region somewhat understates Europe's size as a proportion of the community. 50% (88/176) of groups are in Europe, but European groups account for somewhat higher percentages of individuals who engaged with an EA group (62.25%), new attendees (71.94%), and group members considered "highly engaged in the EA community" (56.56%). As such the amount of funding that we might expect to see going to Europe were things proportionate, might well be higher than 50%.

The 2018 EA Survey's post on geographic differences also has some more detailed information than the early demographics post. I think this map from that post highlights the concentration of EAs in Europe and a small number of locations in the US quite well. In the 2019 EA Survey post on geography we'll be repeating these analyses while also looking at different levels of engagement across regions.

Comment by david_moss on Healthy Competition · 2019-11-03T21:01:29.813Z · score: 15 (8 votes) · EA · GW

Fwiw, I think there are often a lot of benefits to having more than one org (rather than just one org with more resources) working in the same area which are unrelated to competition of any kind between orgs, and these have often been neglected in the EA community. ("Neglected" is understating it, since I think the tendency has been for people to be very strongly of the opposite view, i.e. committed to the position that there are very large harms to there being more than one org working in an area.) Of course, sometimes there are benefits to there only being one org.

Diversity of views, as you mention, is one, but diversity of approaches increasing experimentation and learning value is another, as is community members who need the services the orgs provide having options if, for whatever reason, the one org doesn't work well for them. However, I think one of the biggest benefits is having a backup, in case the one org can't or doesn't, for whatever reason, provide what is needed by the community in a given case.

Comment by david_moss on The (un)reliability of moral judgments: A survey and systematic(ish) review · 2019-11-02T15:32:48.253Z · score: 7 (5 votes) · EA · GW

Many thanks for completing this thorough review.

A few fairly general comments:

  • I find the higher level evidence that suggests our moral judgments would tend to be unreliable more persuasive than the many individual examples of judgments apparently being influenced by morally irrelevant factors. By higher level evidence I mean the broadly evolutionary arguments about the adaptive function of moral thinking. Of course, such evolutionary debunking arguments are a topic of ongoing debate (Millhouse, Bush & Moss, 2016).
  • One reason I find the evidence offered by lots of specific instances of apparent influence by morally irrelevant factors less persuasive is that there's reason to expect the literature to be systematically biased towards producing and reporting results showing such influences. Researchers in this area are on the whole collectively trying to generate results showing weird factors influencing moral judgement, since these are publishable, whereas results showing moral judgement responding as we'd expect to relevant factors would generally not be (arguably the raison d'être of social psychology is finding strong counter-intuitive influences of social/contextual factors on human action). Even setting aside concerns about the validity of these published results, I would expect this collected direct evidence to give an impression of pervasive rationally irrelevant influences on moral judgement even if our judgments were generally highly reliable.
  • I think there are some good reasons to think that the ecological validity challenge to these experimental results, which you mention, is pretty strong. Related to the Gigerenzer ecological rationality strategy which you mention, one might think that some of the apparent irrational biases found in the experimental literature are a result of people's judgement being highly sensitive to pragmatic factors which would be of relevance in practical contexts, but which are treated as irrational in the context of the experiment. For example, the famous Knobe Effect (showing that moral judgments irrationally influence whether we judge that someone intended to do something or not) seems entirely explicable in terms of the pragmatics (in real world contexts) of saying x intended a good/bad thing (Adams and Steadman, 2004).
  • That said, despite my scepticism that these experimental results establish that there is pervasive bias in moral judgements (which I think is independently extremely plausible), I do think that more empirical psychological research into EA-relevant judgments would be likely to be of high value since, done well, it can highlight potential errors and biases which we would otherwise be unaware of (as with more general heuristics and biases research).

Comment by david_moss on EA Hotel Fundraiser 5: Out of runway! · 2019-10-25T20:03:53.655Z · score: 27 (17 votes) · EA · GW

This is pretty sad to hear. Had there been any space in the EA Hotel, I would already have used it at least once this year: when I moved back to the UK, before I found somewhere to rent, and more recently when looking for somewhere to rent in Blackpool (as it happens, I expect I may be renting in Blackpool in the near future, partly because the EA Hotel is there). So I would like to see the EA Hotel expanding, rather than at risk of shutting down.

Comment by david_moss on EA Hotel Fundraiser 5: Out of runway! · 2019-10-25T19:53:19.005Z · score: 11 (9 votes) · EA · GW

I think that would end up missing most of the counterfactual value... It kind of goes against most of the point of the project (like trying to save a scholarship by asking the recipients to pay).

It could be of significant value to some people to have subsidised, much-cheaper-than-usual rent (in a hotel with a ready-made, dedicated EA community), even if it's not free. Of course, it's a further question whether there are enough such people to sustain the hotel in the short term, if the hotel transitions away from fully covering expenses.

I think it would be interesting to see how many current/potential guests could/would pay some small sum. Going forward, one could also have some kinds of honour-based system, where people indicate whether they would be able to pay some rent while staying at the hotel or whether they would require full coverage plus a stipend.

Comment by david_moss on Older people may place less moral value on the far future · 2019-10-25T10:17:57.135Z · score: 2 (2 votes) · EA · GW

but if a charity approaches me and offers to save 500 lives in 500 years for a small donation, that's definitely a scam! So I think there are really good reasons why people's intuitions on this don't always match what mathematicians or philosophers might think.

I would note that the tradeoff question we asked didn't ask about donating to a charity in order to save lives 500 years in the future; it asked whether it's "morally better" to save 1 person now or x people in the future. I agree that degree of confidence in outcomes might influence people's judgements about the charity cases though.

Comment by david_moss on Please Take the 2019 EA Survey! · 2019-10-16T15:14:07.092Z · score: 3 (2 votes) · EA · GW

SurveyMonkey tracks and reports completion time and this was the average time at the point this was posted on the EA Forum (which was after several hundred respondents had already taken the Survey via other means). The median time spent, as it stands now, is still 20 minutes.

Comment by david_moss on Please Take the 2019 EA Survey! · 2019-10-01T20:14:00.578Z · score: 4 (3 votes) · EA · GW

I think it would be best for you to report donations made in calendar year 2019 in the forthcoming 2020 EA Survey (that will ask about 2019 donations) and for this year enter 0

Just to clarify: you can mention the donations you have already made in 2019 in the "In 2019, how much do you currently plan to donate?" box (along with any other donations you plan to make). Then if you didn't make any donations in 2018 as a result, you can write '0' in the "In 2018, roughly how much money did you donate?" box, and then mention bunching in the open comment.

Comment by david_moss on The Long-Term Future: An Attitude Survey · 2019-09-17T07:51:16.310Z · score: 39 (16 votes) · EA · GW

Agreed. As I mentioned in this comment, people will tend to be inclined to agree with any generally positive-sounding platitude, due to acquiescence bias and plausibly social desirability bias. On the whole, I would expect people to be extremely reluctant to explicitly deny that some people "matter just as much as" others if the affirmative is put to them. This may especially be a problem when the issues in question are ones people haven't really thought about before and so don't have clear attitudes on; this will be particularly likely to elicit merely superficial agreement.

I think one of the best approaches to ameliorate this is to use reversed statements, i.e. ask people whether they agree with an item expressing the opposite attitude (i.e. that people who are alive here and now matter more). Sanjay should be posting a report of the results when we did this fairly soon. Quite often you will find that people will agree with statements expressing both an attitude and a statement designed to capture the exact opposite view, and you then need to work to find a set of items that together actually seem to meaningfully capture the attitude of interest.