Posts

EA Survey Series 2019: EA cities and the cost of living 2020-07-06T08:39:45.572Z · score: 45 (21 votes)
EA Survey Series 2019: How many EAs live in the main EA hubs? 2020-07-06T08:39:22.777Z · score: 38 (13 votes)
EA Survey 2019 Series: How many people are there in the EA community? 2020-06-26T09:34:11.051Z · score: 70 (29 votes)
Fewer but poorer: Benevolent partiality in prosocial preferences 2020-06-16T07:54:04.065Z · score: 36 (21 votes)
EA Survey 2019 Series: Community Information 2020-06-10T16:26:59.250Z · score: 79 (25 votes)
EA Survey 2019 Series: How EAs Get Involved in EA 2020-05-21T16:28:12.079Z · score: 109 (38 votes)
Empathy and compassion toward other species decrease with evolutionary divergence time 2020-02-21T15:31:26.309Z · score: 37 (17 votes)
EA Survey 2018 Series: Where People First Hear About EA and Influences on Involvement 2019-02-11T06:05:05.829Z · score: 34 (17 votes)
EA Survey 2018 Series: Group Membership 2019-02-11T06:04:29.333Z · score: 34 (13 votes)
EA Survey 2018 Series: Cause Selection 2019-01-18T16:55:31.074Z · score: 69 (29 votes)
EA Survey 2018 Series: Donation Data 2018-12-09T03:58:43.529Z · score: 82 (37 votes)
EA Survey Series 2018 : How do people get involved in EA? 2018-11-18T00:06:12.136Z · score: 50 (28 votes)
What Activities Do Local Groups Run 2018-09-05T02:27:25.247Z · score: 23 (18 votes)

Comments

Comment by david_moss on My Meta-Ethics and Possible Implications for EA · 2020-08-05T19:12:10.116Z · score: 2 (1 votes) · EA · GW

The learned meaning of moral language refers to our recollection/reaction to experiences. These reactions include approval, preferences and beliefs... Preferences enter the picture when we try to extend our use of moral language beyond the simple cases learned as a child. When we try to compare two things that are apparently both bad we might arrive at a preference for one over the other, and in that case the preference precedes the statement of approval/disapproval.

Thanks for the reply. I guess I'm still confused about what specific attitudes you see as involved in moral judgments, whether approval, preferences, beliefs or some more complex combination of these etc. It sounds like you see the genealogy of moral terms as involving a melange of all of these, which seems to leave the door quite open as to what moral terms actually mean.

It does sound though, from your reply, that you do think that moral language exclusively concerns experiences (and our evaluations of experiences). If so, that doesn't seem right to me. For one, it seems that the vast majority of people (outside of welfarist EA circles) don't exclusively or even primarily make moral judgements or utterances which are about the goodness or badness of experiences (even indirectly). It also doesn't seem to me like the kind of simple moral utterances which ex hypothesi train people in the use of moral language at an early age primarily concern experiences and their badness (or preferences for that matter). It seems equally if not more plausible to speculate that such utterances typically involve injunctions (with the threat of punishment and so on).

Thanks for bringing up the X,Y,Z point; I initially had some discussion of this point, but I wasn't happy with my exposition, so I removed it. Let me try again: In cases when there are multiple moral actors and patients there are two sets of considerations. First, the inside view, how would you react as X and Y. Second, the outside view, how would you react as person W who observes X and Y. It seems to me that we learn moral language as a fuzzy mixture of these two with the first usually being primary.

Thanks for addressing this. This still isn't quite clear to me, i.e. what exactly is meant by 'how would you react as person W who observes X and Y'? What conditions of W observing X and Y are required? For example, does it refer only to how I would react if I were directly observing an act of torture in the room, or does it permit broader 'observations', e.g. observing that there is such-and-such level of inequality in the distribution of income in a society? The more restrictive definitions don't seem adequate to me to capture how we actually use moral language, but the more permissive ones, which are more adequate, don't seem to suffice to rule out my making judgements about the repugnant conclusion and so on.

Much as with population ethics, I suspect this endeavor should be seen as... beyond the boundary of where our use of language remains well-defined.

I agree that answers to population ethics aren't directly entailed by the definition of moral terms. But I'm not sure why we should expect any substantive normative answers to be implied by the meaning of moral language. Moral terms might mean "I endorse x", but any number of different considerations (including population ethics, facts about neurobiology) might be relevant to whether I endorse x (especially so if you allow that I might have all kinds of meta-reactions about whether my reactions are based on appropriate considerations etc.).

Comment by david_moss on Where the QALY's at in political science? · 2020-08-05T10:07:52.363Z · score: 4 (3 votes) · EA · GW

Effective Thesis has some suggested topics within political science.

Comment by david_moss on Replaceability Concerns and Possible Responses · 2020-08-04T16:17:44.703Z · score: 9 (3 votes) · EA · GW

It is somewhat surprising the EA job market is so competitive. The community is not terribly large. Here is an estimate...This suggests to me a very large fraction of highly engaged EAs are interested in direct work.

We have data from our careers post which addresses this. 688 respondents (36.6% of those who answered that question) indicated that they wanted to pursue a career in an EA non-profit. That said, this was a multi-select question, so people could select this alongside other options. In addition, 353 people reported having applied for a job at an EA org. 207 people indicated that they currently work at an EA org which, if we speculatively take that as a rough proxy for the number of current positions, suggests a large mismatch between people seeking positions and total positions.

Of those who included EA org work within their career paths and were not already employed in an EA org, 29% identified as "highly engaged" (defined with examples such as having worked in an EA org or leading a local group). A further 32% identified with the next highest level of engagement, which includes things like "attending an EA Global conference, applying for career coaching, or organizing an EA meetup." Those who reported applying for an EA org job were yet more highly engaged: 37.5% "highly engaged" and 36.4% the next highest level of engagement.

Comment by david_moss on My Meta-Ethics and Possible Implications for EA · 2020-08-04T10:37:28.339Z · score: 3 (2 votes) · EA · GW

Thanks for the post.  

I found myself having some difficulty understanding the core of your position. Specifically, I'm not sure whether you're claiming that the meaning of moral language has to do with how we would react (what we would approve or disapprove of) in certain scenarios; or whether you're claiming, more specifically, that moral language is about experiences and our reactions if we were to experience certain things; or, even more specifically, that it is about what we would prefer, or what we would believe, if we experienced certain things.

Note that there are lots of variations within the above categories, of course. For example, if morality is about what we would believe if we lived the relevant experiences, it's not clear to me whether this means what I would believe about whether X should torture Y, if I were Y being tortured, if I were X torturing Y, or if I were Z who had experienced both and then combined that with my own moral dispositions etc.

Either way, I'm not sure that the inclusion of meta-reactions and the call to universality (which I agree are necessary to make this form of expressivism plausible) permit the conclusions you draw.

For example, you write: "it seems that personal experience with animals (and their suffering) becomes paramount overriding evidence from neuron counts, self-awareness experiments and the like." But if you allow that I can be concerned with whether my own reactions are consistent, impartial and proportionate to others' bad experiences, then it seems like I can be concerned with whether helping chickens or helping salmon causes there to be fewer bad experiences, or with whether specific animals are having negative experiences at all. And if so, it seems like I should be concerned about what the evidence from neuron counts, self-awareness experiments etc. would tell us about the extent to which these creatures are suffering. Moral claims being about what my reactions would be in such-and-such circumstance doesn't give me reason to privilege my actual reactions upon personal experiences (in current circumstances). Doing so seems to imply that when I'm thinking about whether, say, swatting a fly is wrong, I should simply ask myself what my reactions would be if I swatted a fly; but that doesn't seem plausible as an account of how we actually think morally, where what I'm actually concerned about (inter alia) is whether the fly would be harmed if I swatted it.

Comment by david_moss on 3 suggestions about jargon in EA · 2020-07-07T09:50:07.003Z · score: 4 (2 votes) · EA · GW

Academia, especially in the social sciences and humanities, also strikes me as extremely pro-concealment (either actively or, more commonly, passively, by believing we should not gather the information in the first place) on topics which academics actually view as objectionable for explicitly altruistic reasons.

Comment by david_moss on Resources to learn how to do research · 2020-07-04T11:05:49.272Z · score: 13 (6 votes) · EA · GW

If you are interested in EA research/an EA research job, I would recommend just reading EA research on this forum and on the websites of EA research organisations. Much of this research doesn't involve any research method beyond general desk/secondary research, i.e. reading relevant literature and synthesising it.

In cases where you see that EA research relies on some specific technical methodology, such as stats, cost-effectiveness modelling, surveys etc., I would just recommend googling the specific method and finding resources that way. In general, I think there are too many different methods and approaches, even within these categories, for it to be very helpful to link to a general introduction to stats (although here's one, for example), since depending on what you want to do, a lot of it won't be relevant.

Comment by david_moss on EA Survey 2019 Series: How many people are there in the EA community? · 2020-07-04T09:34:48.356Z · score: 8 (3 votes) · EA · GW

I think "been influenced by EA to do EA-like things" covers a very wide array of people.

In the most expansive sense, this seems like it would include people who read a website associated with EA (this could be Giving What We Can, GiveWell, The Life You Can Save or ACE or others...), decide "These sound like good charities" and donate to them. I think people in this category may or may not have heard of EA (all of these mention effective altruism somewhere on the website) and they may even have read some specific formulation that expresses EA ideas (e.g. "We should donate to the most effective charity") and decided to donate to these specific charities as a result. But they may not really know or understand what EA means (lots of people would platitudinously endorse 'donating to the best charities') or endorse it, let alone identify with or be involved with EA in any other way.

I agree that there are many, many more people who are in this category. As we note in footnote 7, there are literally millions of people who've read the GiveWell website alone, many of whom (at least 24,000) will have been moved to donate. Donating to a charity influenced by EA principles was the most commonly reported activity in the EA survey by a long way, with >80% of respondents reporting having done so, and >60% even among the second lowest level of engagement.

I think we agree that, while getting people to donate to effective charities is important (perhaps even more impactful than getting people to 'engage with the effective altruism community' in a lot of cases), these people don't count as part of the EA community in the sense discussed here. But I think they also wouldn't count as part of the "wider network of people interested in effective altruism" that David Nash refers to (i.e. because many of them aren't interested in effective altruism).

I think a good practical test would be: if you went to some of these people who were moved to donate to a GiveWell/ACE etc. charity and asked "Have you heard that many adherents of effective altruism believe that we should x?", and their response is some variation on "What's that?" or "Why should I care?", then they're not part of the community or network of people interested in EA. I think this is a practically relevant grouping because it tells you who could 'be influenced by EA to do EA things', where we understand "influenced by EA" to refer to EA reasoning and arguments and "EA things" to refer to EA things in general, as opposed to people who might be persuaded by an EA website to do some specific thing which EAs currently endorse but who would not consider anything else or consider maximising effectiveness more generally.

Comment by david_moss on EA Survey 2019 Series: How many people are there in the EA community? · 2020-07-02T08:08:59.730Z · score: 5 (3 votes) · EA · GW

Thanks for the reply!

So then it is a question of whether action or identification is more important-I would favor action.

This is the kind of question I had in mind when I said: "Of course, being part of the “EA community” in this sense is not a criterion for being effective or acting in an EA manner- for example, one could donate to effective charity, without being involved in the EA community at all..."

It seems fairly uncontroversial to me that someone who does a highly impactful, morally motivated thing, but hasn't even heard of the EA community, doesn't count as part of the EA community (in the sense discussed here).

I think this holds true even if an activity represents the highest standard that all EAs should aspire to. Many people might still undertake that activity for reasons unrelated to EA, and I think those people would fall outside the "EA community" in the relevant sense, even if they are doing more than many EAs.

Comment by david_moss on EA Survey 2019 Series: How many people are there in the EA community? · 2020-07-01T19:42:44.842Z · score: 2 (1 votes) · EA · GW

I agree this would both not be very inspiring and risk sounding elitist. I don't have any novel ideas; I would probably just say something vague about wanting to spread the ideas carefully and ensure they aren't lost or distorted in the mass media, and try to redirect the topic.

Comment by david_moss on EA Survey 2019 Series: How many people are there in the EA community? · 2020-07-01T19:39:55.639Z · score: 3 (2 votes) · EA · GW

We'll be addressing this indirectly in the next couple of posts as it happens.

Comment by david_moss on Dignity as alternative EA priority - request for feedback · 2020-06-26T17:45:21.317Z · score: 18 (7 votes) · EA · GW

I'm not entirely clear as to whether you are applying the INT (importance/scale, neglectedness, tractability/solvability) framework to dignity as a fundamental value or to dignity-promotion as a cause area for EA (according to EA values, however we determine them).

The INT framework is usually applied as a heuristic for broad cause area selection and I don't think it works well as a heuristic for determining fundamental values. Things which are valuable are fundamentally valuable even if they are not neglected and estimating their Importance/Scale seems crucially to depend on whether and how far they are fundamentally valuable, even if they affect lots of people. Maybe it would be helpful to think more about which potential values are neglected or likely to be more or less tractable to satisfy, in order to determine whether we should dedicate more resources to trying to satisfy them, but I don't think just quickly running through the INT heuristic will be that informative.[^1]

If it's applied to the idea of dignity-promotion as a cause area (according to EA values), then it seems like we should judge it based on all our values (which for many EAs will largely be determined by how well it promotes welfare, with small amounts of weight given to other values, such as dignity itself). It's not so clear that dignity-promotion performs well in those terms.

[^1] For example, I think that many minority/peripheral values that we could think up would be highly neglected, affect a lot of people, and be tractable, but this doesn't tell us much about their moral importance.

Comment by david_moss on Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering? · 2020-06-22T18:38:30.708Z · score: 5 (3 votes) · EA · GW

My intuition is that suffering is bad, but sometimes (all things considered) I prefer to suffer in a particular instance (e.g. in service of some other value). In such cases it would be better for my welfare if I did not suffer, but I still prefer to.

I also think that in cases where one voluntarily suffers, then this can reduce the suffering involved. Relatedly, I also imagine that voluntarily experienced pain may lead to less suffering than coerced pain.

It also seems to me that there are cases where we directly want to experience a suffering-involving experience (e.g. watching tragedies and wanting to experience the feeling of tragedy). I think in many of these cases the experience is sad, but also involves (subtle) pleasures, and what we want to experience is this combined set of emotions. In some such cases I'm sure people would prefer to experience the distinctive melancholy-pleasure emotion without the suffering valence if they could (but cannot imagine, let alone actually achieve this), and in other cases people would not wish to detach the suffering from the emotion set (because they have preferences to have fitting responses to tragedy and so forth). I am sure there are a whole bunch of other factors which explain people's propensity to voluntarily watch tragedies though, e.g. affective forecasting misfires, instrumental goals like signalling, and feelings of compulsion (tragedies tend to be very salient and so adaptive to pay attention to, even if they entail suffering).

Comment by david_moss on How much do Europeans care about fish welfare? (An analysis of relevant surveys) · 2020-06-22T16:03:21.885Z · score: 15 (8 votes) · EA · GW

It seems like it would be valuable for advocates to better understand what level of support is necessary to undergird changes (whether through legislative efforts or through corporate campaigns/consumer pressure). Much progress seems to have been made on chickens, as you note, with only ~77% of people believing their welfare should at least "probably" be better protected. But it seems like we don't know what level of support is required, or even really how such support causally influences progress. The influence of such support seems like it may well be mediated by (decision-makers') perceptions of support, which is probably much vaguer.

Comment by david_moss on EA Forum feature suggestion thread · 2020-06-20T08:36:35.209Z · score: 29 (14 votes) · EA · GW

It would very dramatically improve my experience of the Forum if there were the option to hide posts. This would mean that the first page of the Forum would always be posts that were relevant to me. As it stands, whenever I visit the Forum most of the posts which I can see are not relevant to me (perhaps because I've already read them and don't want to read them again or check in on the ongoing discussion), whereas posts which are relevant to me and which I would want to visit again are invisible if they are more than a few days old.

Comment by david_moss on EA Survey 2019 Series: Community Information · 2020-06-13T10:58:59.510Z · score: 4 (2 votes) · EA · GW

Thanks for your comment Max!

it's unclear to me if respondents interpreted "EA job" as (a) "job at an EA organization" or (b) "high-impact job according to EA principles"

I agree. This was one of the externally requested questions I mentioned at the top of the post, which I included verbatim, so I don't know which meaning was intended. The precise wordings were "Too hard to get an EA job" and "Not enough job opportunities that seemed like a good fit for me", which I agree could be interpreted more narrowly or more broadly.

To perhaps gain a little insight, we can cross-reference this with our data on respondents' career plans. Among those who included 'Work at an EA non-profit' in their plans (note that this was a multi-select question), 35.7% said that "Too hard to get an EA job" was a barrier to being more involved in EA. Conversely, among those who did not include working for an EA non-profit in their career plan, 20.1% selected this as a barrier. This is a significant difference (p<0.001), but notably it means that many participants who selected this as a barrier were not aiming to work specifically in an EA org. To put it another way, 49.3% of those who selected this as a barrier did not say they planned to work in an EA non-profit, whereas 50.7% did plan to work in an EA org (but note that many of these also included other routes, like academia, in their career plans, so it's not clear that it being too hard to work in an EA org specifically was what they viewed as a barrier). Of course, it's also possible that some of these respondents did not include an EA org in their career plans precisely because they viewed it as too hard to get a job at one.
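For readers who want to see how this kind of cross-referencing works in practice, here is a minimal sketch (not our actual analysis code, and with placeholder counts rather than the real survey numbers) of cross-tabulating the two responses and testing whether the difference between the groups is significant:

```python
# Illustrative sketch only: cross-tabulate "career plan includes an EA
# non-profit" against "selected 'Too hard to get an EA job' as a barrier"
# and run a chi-square test of independence. Counts are placeholders.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: plan includes EA org (yes, no); columns: selected barrier (yes, no)
table = np.array([
    [250, 450],   # placeholder: plan includes EA org
    [230, 915],   # placeholder: plan does not include EA org
])

chi2, p, dof, expected = chi2_contingency(table)
rate_plan = table[0, 0] / table[0].sum()       # % selecting barrier, plan group
rate_no_plan = table[1, 0] / table[1].sum()    # % selecting barrier, no-plan group
print(f"{rate_plan:.1%} vs {rate_no_plan:.1%}, p = {p:.4g}")
```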

Comment by david_moss on Some thoughts on deference and inside-view models · 2020-06-03T10:59:51.642Z · score: 7 (4 votes) · EA · GW

The common attitude was something like "we're utilitarians, and we want to do as much good as we can. EA has some interesting people and interesting ideas in it. However, it's not clear who we can trust; there's lots of fiery debate about cause prioritization, and we just don't at all know whether we should donate to AMF or the Humane League or MIRI. There are EA orgs like CEA, 80K, MIRI, GiveWell, but it's not clear which of those people we should trust, given that the things they say don't always make sense to us, and they have different enough bottom line beliefs that some of them must be wrong." It's much rarer nowadays for me to hear people have an attitude where they're wholeheartedly excited about utilitarianism but openly skeptical to the EA "establishment".

I actually agree that there seems to have been some shift roughly along these lines.

My view is roughly that EAs were equally disposed to be deferential then as they are now (if there were a clear EA consensus then, most of these EAs would have deferred to it, as they do now), but that "because the 'official EA consensus' (i.e. longtermism) is more readily apparent" now, people's disposition to defer is more apparent.

So I would agree that some EAs were actually more directly engaged in thinking about fundamental EA prioritisation because they did not see an EA position that they could defer to at all. But other EAs, I think, were deferring to those they perceived as EA experts back then, just as they are now; it's just that they were deferring to different EA experts than other EAs. For example, I think in earlier years many EAs thought that Giving What We Can (previously an exclusively poverty org, of course) and GiveWell were the EA experts, and meanwhile there were some 'crazy' people (MIRI and LessWrongers) who were outside the EA mainstream. I imagine this perspective was more common outside the Bay Area.

I feel like there are many fewer EA forum posts and facebook posts where people argue back and forth about whether to donate to AMF or more speculative things than there used to be.

Agreed, but I can't remember the last time I saw someone try to argue that you should donate to AMF rather than longtermism. I've seen more posts/comments/discussions along the lines of 'Are you aware of any EA arguments against longtermism?' Clearly there are still lots of EAs who donate to AMF and support near-termism (cause prioritisation, donation data), but I think they are mostly keeping quiet. Whenever I do see near-termism come up, people don't seem afraid to communicate that they think that it is obviously indefensible, or that they think even a third-rate longtermist intervention is probably incomparably better than AMF because at least it's longtermist.

Comment by david_moss on What are some good charities to donate to regarding systemic racial injustice? · 2020-06-02T08:48:58.373Z · score: 9 (8 votes) · EA · GW

I didn't downvote it, but some commenters might have done because an almost identical question was asked a few days ago.

Comment by david_moss on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-28T18:16:23.171Z · score: 6 (3 votes) · EA · GW

I just added a mention of this to the bullet point about these open comments.

Comment by david_moss on Some thoughts on deference and inside-view models · 2020-05-28T10:38:56.198Z · score: 19 (10 votes) · EA · GW

Most of us had a default attitude of skepticism and uncertainty towards what EA orgs thought about things. When I talk to EA student group members now, I don’t think I get the sense that people are as skeptical or independent-thinking.

I've heard this impression from several people, but it's unclear to me whether EAs have become more deferential, although it is my impression that many EAs are currently highly deferential. It seems quite plausible to me that it is merely more apparent that EAs are highly deferential right now, because the 'official EA consensus' (i.e. longtermism) is more readily apparent. I think this largely explains the dynamics highlighted in this post and in the comments. (Another possibility is simply that newer EAs are more likely to defer than veteran EAs and as EA is still growing rapidly, we constantly get higher %s of non-veteran EAs, who are more likely to defer. I actually think the real picture is a bit more complicated than this, partly because I think moderately engaged and invested EAs are more likely to defer than the newest EAs, but we don't need to get into that here).

My impression is that EA culture and other features of the EA community implicitly encourage deference very heavily (despite the fact that many senior EAs would, in the abstract, like more independent thinking from EAs). In terms of social approval and respect, as well as access to EA resources (like jobs or grants), deference to expert EA opinion (both in the sense of sharing the same views and in the sense of directly showing that you defer to senior EA experts) seem pretty essential.

I have the sense that people would now view it as bad behavior to tell people that you think they’re making a terrible choice to donate to AMF

Relatedly, my purely anecdotal impression is basically the opposite here. As EA has professionalised I think there are more explicit norms about "niceness", but I think it's never been clearer or more acceptable to communicate, implicitly or explicitly, that you think that people who support AMF (or other near-termist causes) probably just 'don't get' longtermism and aren't worth engaging with.

Comment by david_moss on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-27T18:36:59.190Z · score: 6 (3 votes) · EA · GW

Thanks Jon.

I agree Peter Singer is definitely still one of the most important factors, as our data shows (and as we highlighted last year). He's just not included in the bullet point in the summary you point to because that only refers to the fixed categories in the 'where did you first hear about EA?' question.

In 2018 I wrote "Peter Singer is sufficiently influential that he should probably be his own category", but although I think he deserves to be his own category in some sense, it wouldn't actually make sense to have a dedicated Peter Singer category alongside the others. Peter Singer usually coincides with other categories i.e. people have read one of his books, or seen one of his TED Talks, or heard about him through some other Book/Article or Blog or through their Education or a podcast or The Life You Can Save (org) etc., so if we split Peter Singer out into his dedicated category we'd have to have a lot of categories like 'Book (except Peter Singer)' (and potentially so for any other individuals who might be significant) which would be a bit clumsy and definitely lead to confusion. It seems neater to just have the fixed categories we have and then have people write in the specifics in the open comment section and, in general, not to have any named individuals as fixed categories.

The other general issue to note is that we can't compare the %s of responses to the fixed categories to the %s for the open comment mentions. People are almost certainly less likely to write in something as a factor in the open comment than they would be to select it were it offered as a fixed choice, but on the other hand, things can appear in the open comments across multiple categories, so there's really no way to compare numbers fairly. That said, we can certainly say that since he's mentioned >200 times, the lower bound on the number of people who first heard of EA from Peter Singer is very high.

Comment by david_moss on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-23T09:31:48.250Z · score: 2 (1 votes) · EA · GW

Thanks. That makes sense. I try not to change the historic categories too much though, since it messes up comparisons across years.

Comment by david_moss on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-22T21:30:02.842Z · score: 9 (5 votes) · EA · GW

I think it's fair to say (as I did) that LessWrong is often thought of as "primarily" online, and, given that, I think it's understandable to find it surprising that meetups are the second most commonly mentioned way people hear about EA within the LessWrong category (I would expect more comments mentioning SlateStarCodex and other rationalist blogs, for example). I didn't say that it is "surprising that people mention LessWrong meetups" tout court. I would expect many people, even among those who are familiar with LessWrong meetups, to be surprised at how often they were mentioned, though I could be mistaken about that.

(That said, a banal explanation might be that those who heard about EA just straightforwardly through the LessWrong forum, without any further detail, were less likely to write anything codable in the open comment box, compared to those who were specifically influenced by an event or HPMOR)

Comment by david_moss on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-22T18:36:42.518Z · score: 23 (7 votes) · EA · GW

Thanks Jonas!

You can see the total EAs (estimated from year first heard) and the annual growth rate here:

As you suggest, this will likely over-estimate growth due to greater numbers of EAs from earlier cohorts having dropped out.

Comment by david_moss on Applying speciesism to wild-animal suffering · 2020-05-18T12:45:40.501Z · score: 3 (2 votes) · EA · GW

I occasionally see people make this kind of argument in the case of children, based on similar arguments for autonomy (see youth rights), though I agree that more people seem to find the argument that we should intervene convincing in the case of young children (that said, from the perspective of the activist who holds this view, this just seems like inappropriate discrimination).

Comment by david_moss on Applying speciesism to wild-animal suffering · 2020-05-18T08:52:22.923Z · score: 4 (3 votes) · EA · GW

It seems worth noting that some people also make the argument that it is x-ist to "think we have the right to intervene in the lives of" x oppressed group. As such, they probably won't be convinced by the analogy (though I agree that some people do think that we should intervene in human cases relevantly similar to wild animal suffering cases and so will be convinced).

Comment by david_moss on 2019 Ethnic Diversity Community Survey · 2020-05-13T13:34:23.918Z · score: 11 (4 votes) · EA · GW

Thanks Jonas! We'll be discussing this in more detail in our forthcoming post on EA Engagement levels.

Comment by david_moss on 2019 Ethnic Diversity Community Survey · 2020-05-12T16:04:25.920Z · score: 1 (1 votes) · EA · GW

Thanks Vaidehi. I agree that this is still useful information; I was simply responding to your direct comparison to the EA Survey ("The survey seems to have achieved this goal [solicit the experiences of people from ethnic minorities in EA] compared to the annual EA survey, a much higher proportion of respondents to this survey were non-white.").

By the way if you have specific questions that you would like us to include in the EA Survey please let us know (though no hurry).

Comment by david_moss on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T11:47:36.631Z · score: 43 (14 votes) · EA · GW

Fortunately we have data on this (including data on different engagement levels using EA Forum as a proxy) going back to 2017 (before that the cause question had a multi-select format that doesn't allow for easy comparison to these results).

If we look at the full sample over time using the same categories, we can see that there's been a tendency towards increased support for long-termist causes overall and a decline in support for Global Poverty (though support for Poverty remains >50% higher than for AI). The "Other near term" trend goes in the opposite direction, but this is largely because this category combines Climate Change and Mental Health, and we only added Mental Health to the EAS in 2018.

Looking at EA Forum members only (a highly engaged ~20% of the EAS sample), we can see that there's been a slight trend towards more long-termism over time, though this trend is not so immediately obvious to see since between 2018 and 2019 EAs in this sample seem to have switched between AI and other long-termist causes. But on the whole the EA Forum subset has been more stable in its views (and closer to the LF allocation) over time.

Of course, it is not immediately obvious what we should conclude from this about dropout (or decreasing engagement) in non-longtermist people. We do know that many people have been switching into long-termist causes (and especially AI) over time (see below). But it's quite possible that non-longtermists have been dropping out of EA over a longer time frame (pre-2017). That said, I do think that the EA Forum proxy for engagement is probably more robust to these kinds of effects than the self-reported (1-5) engagement level, since although people might drop out of Forum membership due to disproportionately longtermist discussion, the Forum still has at least a measure of cause diversity, whereas facets of the engagement scale (such as EA org employment and EA Global attendance) are more directly filtering on long-termism. We will address data about people decreasing engagement or dropping out of EA due to perceiving EA as prioritizing certain causes too heavily in a forthcoming EA Survey post.

Both images from the EA Survey 2019: Cause Prioritization

Comment by david_moss on 2019 Ethnic Diversity Community Survey · 2020-05-12T08:53:40.517Z · score: 34 (9 votes) · EA · GW

Thanks for the post!

Just a comment on the reference to the EA Survey numbers. As I discussed here, because the EA Survey's question about race/ethnicity was multi-select, the percentages of respondents selecting each category can't be straightforwardly converted into percentages "identif[ying] with non-white race or ethnicity." We used multi-select to allow people to indicate complex plural identities, without forcing people to select more fixed categories, but it doesn't allow for particularly simple bench-marking if you want a binary white/non-white distinction. In the next survey we'll consider adding a further question with more fixed options. It's more accurate to describe our data as showing that 13.1% of respondents did not indicate white identity at all, 80.5% exclusively selected white, and a further 6.4% selected both white and other identities. Unfortunately, interpreting this last category in terms of an interest in a white/non-white binary is fraught, since it's unclear whether these individuals would identify as "mixed race", white, non-white or a "person of colour." Of note, despite Asian being the most common identity other than white selected for this question, the most common selection within this 'mixed' category was White and Hispanic (and the relationship between Hispanic identity and ethnicity/race is not straightforward).

As such, in a more expansive sense, the total "non-white" percentage may be higher, up to around 20%.

Regarding the broader claim that: "The goal of the survey was to solicit the experiences of people from ethnic minorities in EA. The survey seems to have achieved this goal compared to the annual EA survey, a much higher proportion of respondents to this survey were non-white."

I agree the percentage of non-white respondents is a bit higher in the dedicated "Ethnic Diversity" survey, but you had around 10x fewer ethnically diverse respondents expressing views overall, so this is not a clear win. The percentage difference could be explained entirely by white EAs thinking "This survey isn't really for me." A survey specifically about ethnic diversity also seems particularly likely to skew towards respondents (both white and non-white) with a particular interest in the topic, which is probably of particular significance when we're dealing with only around 30-36 respondents. That said, I agree this is an important source of more qualitative data than we could gather with the EA Survey!

Comment by david_moss on Racial Demographics at Longtermist Organizations · 2020-05-01T18:23:27.545Z · score: 23 (10 votes) · EA · GW

I calculated the percentage of POC by adding all the responses other than white, rather than taking 1 - % of white respondents... Thinking more about this in response to your question, it’d probably be more accurate to adjust my number by dividing by the sum of total responses (107%).

Yeh, as you note, this won't work given multiple responses across more than 2 categories.

I can confirm that if you look at the raw data, our sample was 13.1% non-mixed non-white, 6.4% mixed, and 80.5% non-mixed white. That said, it seems somewhat risky to compare this to numbers "based on the pictures displayed on the relevant team page", since it seems like this will inevitably under-count mixed race people who appear white.
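To illustrate the methodological point, here is a minimal sketch (with made-up responses, not our survey code) of why multi-select data is classified into mutually exclusive groups rather than computed as 1 minus the white percentage:

```python
# Illustrative sketch only: classifying multi-select race/ethnicity
# responses into mutually exclusive groups. Responses are made up.
from collections import Counter

responses = [
    {"White"},
    {"White", "Hispanic"},
    {"Asian"},
    {"Black"},
]

def classify(selection):
    if selection == {"White"}:
        return "non-mixed white"
    if "White" in selection:
        return "mixed (white and other)"
    return "non-mixed non-white"

counts = Counter(classify(r) for r in responses)
total = sum(counts.values())
for group, n in counts.items():
    print(f"{group}: {n / total:.1%}")
```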

Comment by david_moss on Racial Demographics at Longtermist Organizations · 2020-05-01T17:36:25.843Z · score: 21 (11 votes) · EA · GW

Thanks for your post!

Unless I am missing something about your numbers, I think the figures you have from the EA Survey might be incorrect. The 2019 EA Survey was 13% non-white (which is within 0.1% of the figure you find for longtermist orgs).

It seems possible that, although you've linked to the 2019 survey, you were looking at the figures for the 2018 Survey. On the face of it, this looks like EA is 78% white (and so, you might think, 22% non-white), but those figures don't account for people who answered the question but who declined to specify a race. Once that is accounted for the non-white figures are roughly 13% for 2018 as well.

Comment by david_moss on Why I'm Not Vegan · 2020-04-10T18:08:50.049Z · score: 23 (11 votes) · EA · GW

We actually have some survey data on how the broader non-EA population thinks about moral tradeoffs between humans and non-human animals.

SlateStarCodex reported the results of a survey we (at Rethink Priorities) ran attempting to replicate a survey he and another commenter ran, asking people what number of animals of different species are of the same moral value as one adult human, i.e. higher numbers mean animals have lower value. Our writeup, which goes into a lot more detail about the interpretation and limitations of this data, is forthcoming.

If you look at the column for 'Rethink Priorities (inclusive)' (which I think is the most relevant), you'll see the median values given were:

  • Pigs: 75
  • Chickens: 1000
  • Cows: 75

Your numbers mostly ascribe lower value to non-human animals than the median in our sample (an online US-only sample from Prolific). Of course, the question we asked was for a pure comparison of moral value, not adjusted for how bad the conditions are that each species faces in factory farms. But I would have thought that this should mean that the answers given to your question would be lower rather than higher. It would be interesting to know roughly what your pure moral value tradeoffs would be, if you have them.

Comment by david_moss on Why not give 90%? · 2020-03-26T13:25:00.882Z · score: 4 (4 votes) · EA · GW

There is detailed discussion of some closely related issues in this book chapter in the Effective Altruism: Philosophical Issues book edited by Hilary Greaves and Theron Pummer. The author discusses these in less detail in this post on PEA Soup.

I also ran a small survey to test effective altruists' views on the thought experiments discussed. I haven't gotten around to writing it up, due to more pressing tasks. I could also share the survey again here, if people are particularly interested.

Comment by david_moss on Poll - what research questions do you want me to investigate while I'm in Africa? · 2020-03-03T10:53:36.780Z · score: 3 (4 votes) · EA · GW

I don't think this response makes much sense. Many of the questions listed are of very niche EA interest. For example, the number of researchers in the whole world looking at Wild Animal Suffering (through an EA lens) is surely in the tens. The number of these who are specifically on the ground in Rwanda making notes on the experiences of wildlife is, it should go without saying, close to zero. Of course, there are many zoologists in the world, but as EA WAW researchers often find, it is hard to apply much of this to research that is interested in welfare specifically.

The same goes for site visits to factory farms. First hand information about actual conditions on factory farms is notoriously hard to come by and many EA discussions have noted that we lack information about conditions as they may vary across other parts of the world. It would be very surprising if there were a plethora of animal welfare first hand case studies of conditions in farms across different parts of Africa that we haven't noticed before.

Most of the rest of the questions just seem to involve speaking to locals about their perspectives while in different parts of Africa. While I agree that there is, of course, already qualitative research somewhat related to many of these questions, it's hard to see the rationale for not speaking to Africans about their perspectives and only reading qualitative reports second hand from the developed world.

Comment by david_moss on How much will local/university groups benefit from targeted EA content creation? · 2020-02-22T12:22:19.886Z · score: 1 (1 votes) · EA · GW

There was no way to ask whether people knew about all the resources that currently existed (although in the next survey we could ask whether they know about the EA Hub's resources specifically). We do know from other questions in this survey and in 2017's that many group leaders are not aware of existing services in general though.

Comment by david_moss on How much will local/university groups benefit from targeted EA content creation? · 2020-02-20T09:04:34.532Z · score: 7 (5 votes) · EA · GW

The 2019 Local Group Organizers Survey found large percentages of organizers reporting that more "written resources on how to run a group" and "written resources on EA thinking and concepts" would be highly useful.

Comment by david_moss on Thoughts on electoral reform · 2020-02-18T19:53:52.862Z · score: 19 (11 votes) · EA · GW

It's great to see more reflection about approval voting and possible alternatives. I think the EA community should probably favour a lot more research into these alternatives before it invests resources in promoting any of these options.

Excessive political polarisation, especially party polarisation in the US, makes it harder to reach consensus or a fair compromise, and undermines trust in public institutions. Efforts to avoid harmful long-term dynamics, and to strengthen democratic governance, are therefore of interest to effective altruists.

I will note that many political theorists (e.g. agonistic theorists) think that reducing polarisation and increasing consensus should not be our goals in a democracy and need not be positive things. This is especially so when increasing consensus and compromise solutions are identified with "moderate" or centrist positions (which, as you note, could be construed as a bias).

Comment by david_moss on How do you feel about the main EA facebook group? · 2020-02-13T10:11:16.073Z · score: 25 (12 votes) · EA · GW

I agree that the main EA Facebook group has many low quality comments which "do not meet the bar for intellectual quality or epistemic standards that we should have EA associated with." That said, it seems that one of the main reasons for this is that the Facebook group contains many more people with very low or tangential involvement with EA. I think we should be pretty cautious about more heavily moderating or trying to exclude the contributions of newer or less involved members.

As an illustration: the 2018 EA Survey found >50% of respondents were members of the Facebook group, but only 20% (i.e. 1 in 5) were members of the Forum. Clearly the Facebook group has many more users who are even less engaged with EA, who don't take the EA Survey. The forthcoming 2019 results were fairly similar.

At the moment I think the EA Facebook group plays a fairly important role alongside the EA Forum (which only a small minority of EAs are involved with) in giving people newer to the community somewhere where they can express their views. Higher moderation of comments would probably add to the pervasive sense (which we will discuss in a future EA Survey post) that EA is exclusive and elitist.

I do think it's worth considering whether low quality discussion on the EA Facebook group will cause promising prospective EAs to 'bounce' i.e. see the low quality discussion, infer that EA is low quality and leave. The extent to which this happens is a tricky broader question, but I'm inclined to hope that it wouldn't be too frequent since readers can easily see the higher quality articles and numerous Forum posts linked on Facebook and I would also hope that most readers will know that online discussion on Facebook is often low quality and not update too heavily against EA on the basis of it.

It also seems worth bearing in mind that, since most members of the Facebook group clearly don't make the decision to move over to participating in the EA Forum, efforts to make the EA Facebook discussion more like the Forum may just put off a large number of users.

Comment by david_moss on Is vegetarianism/veganism growing more partisan over time? · 2020-01-24T17:12:04.639Z · score: 5 (4 votes) · EA · GW

I think this is a good explanation of at least part of the phenomenon. As you note, where we do samples of the general population and only 5% of people report being vegetarian or vegan, then even a small number of lizardpersons answering randomly, oddly or deliberately trolling could make up a large part of the 5%.

That said, I note that even in surveys which are deliberately solely targeting identified vegetarians or vegans (so 100% of people in the sample identified as vegetarian or vegan), large percentages then say that they eat some meat. Rethink Priorities has an unpublished survey (report forthcoming soon) which sampled exclusively people who had previously identified as vegetarian or vegan (and then asked them again in the survey whether they identified as vegetarian or vegan), and we found just over 25% of those who answered affirmatively to the latter question still seemed to indicate that they consumed some meat product in a food frequency questionnaire. So that suggests to me that there's likely something more systematic going on, where some reasonably large percentage of people identify as vegetarian or vegan despite eating meat (e.g. because they eat meat very infrequently and think that's close enough). Of course, it's also possible that the first sampling to find self-identified vegetarians or vegans sampled a lot of lizardpersons, meaning that there was a disproportionate number of lizardpersons in the second sampling, and hence a disproportionate number of lizardpersons who then identified as vegetarian or vegan in our survey. And perhaps lizardpersons don't just answer randomly but are disproportionately likely to identify as vegetarian or vegan when asked, which might also contribute.

Comment by david_moss on EA Survey 2019 Series: Geographic Distribution of EAs · 2020-01-23T17:12:59.462Z · score: 13 (4 votes) · EA · GW

I don't think that really explains the observed pattern that well.

I agree that, in general, people not appearing in the EA Survey could be explained either by them dropping out of EA or by them just not taking the EA Survey. But in this case, what we want to explain is why people who took the EA Survey in 2018 were disproportionately likely not to take it in 2019 among the most recent cohorts (those who joined in 2015-2017) compared to earlier cohorts (who have been in EA longer).

The explanation that this is due to EAs disproportionately drop out during their first 3 years seems to make straightforward intuitive sense.

The explanation that people who took the EA Survey in 2018 and joined within 2015-2017 specifically were disproportionately less likely to take the EA Survey in 2019 seems less straightforward. Presumably the thought is that these people might have taken the EA Survey once, realised it was too long or something, and decided not to take it in 2019, whereas people who joined in earlier years have already taken the EA Survey and so are less likely to drop out of taking it, if they haven't already done so? I don't think that fits the data particularly well. Respondents from the 2015 cohort would have had opportunities to take the survey at least 3 times, including 2018, before stopping in 2019, so it's hard to see why they specifically would be more likely to stop taking the EA Survey in 2019 compared to earlier EAs. Conversely, EAs from before 2015 all the way back to 2009 or earlier had at most 1 extra opportunity to be exposed to the EA Survey (we started in 2014), so it's hard to see why these EAs would be less likely to stop taking the EA Survey in 2019 having taken it in 2018.


In general, I expect the observation may have more than one explanation, including just random noise, but I think higher rates of dropout among particular more recent cohorts makes sense as an explanation, whereas these people specifically being more likely to take the EA Survey in 2018 and not in 2019 doesn't really.

Comment by david_moss on Growth and the case against randomista development · 2020-01-20T10:00:45.573Z · score: 3 (3 votes) · EA · GW

That's certainly true. I don't know exactly what they had in mind when they claimed that "most seem to be long-termists in some broad sense," but the 2019 survey at least has data directly on that question, whereas 2018 just has the best approximation we could give, by combining respondents who selected any of the specific causes that seemed broadly long-termist and Long Term Future lost out to Global Poverty using that method in both 2018 and 2019.*

*As noted in the posts, that method depends on the controversial question of which fine-grained causes should be counted as part of the 'Long Term Future' group. If Climate Change (the 2nd most popular cause in 2019, 3rd in 2018) were counted as part of LTF, then LTF would win by a mile. However, I am sceptical that most Climate Change respondents in our samples count as LTF in the relevant (EA) sense, i.e. normal (non-EA) climate change supporters, who have no familiarity with LTF reasoning and think we need to be sustainable and think about the world 100 years or more in advance, seem quite different from long-termist EAs (it seems they don't and generally would not endorse LTF reasoning about other areas). An argument against this is that, as we see from the 2019 analysis, people who selected Climate Change as a specific cause predominantly broke in favour of LTF when asked to select a broader cause area. I'm not sure how dispositive that is though. It seems likely to me that people who most support a specific cause other than Global Poverty (or Animals or Meta) would probably be more likely to select a broader, vaguer cause category which their preferred cause could plausibly fit into (as Climate Change does into 'long term future/existential risk'), than one of the other specific causes, and, as noted above, people might like the vague category of concern for the 'long term future' without actually supporting LTF the EA cause area. Some evidence for this comes from the other analyses in 2018 and 2019, which found that respondents who supported Climate Change were quite dissimilar from those who supported LTF causes in almost all respects (e.g. they tended to be newer to EA - very heavily skewed towards the most recent years - and less engaged with EA, generally following the same trends as Global Poverty and the opposite to AI; see here).

Comment by david_moss on Growth and the case against randomista development · 2020-01-16T11:16:35.915Z · score: 11 (8 votes) · EA · GW

Which cause is most popular depends on cause categorisation and most surveyed EAs seem to be long-termists in some broad sense. EA Survey 2018 Series: Cause Selection"

This is clearly fairly tangential to the main point of your post, but since you mention it, the more recent EA Survey 2019: Cause Prioritization post offers clearer evidence for your claim that most surveyed EAs seem to be long-termists, as 40.08% selected the 'Long Term Future / Catastrophic and Existential Risk Reduction' category (versus 32.3% selecting Global Poverty) when presented with just 4 broad EA cause areas. That said, the claim in the main body of your text that "Global poverty remains a popular cause area among people interested in EA" is also clearly true, since Global Poverty was the highest rated and most often selected 'top cause' among the more fine-grained cause areas (22%).

Comment by david_moss on The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*) · 2020-01-15T10:21:41.829Z · score: 10 (7 votes) · EA · GW

I have to wonder whether EAs voting on the Labour leadership is positive in expectation. A priori, I would have expected it would be, but to my surprise, the EAs I know personally whose views on Labour politics I also know have not (in my view) generally had better views, or been more thoughtful or more informed, than the average Labour party member (I have been a Labour party member for some years). Nor have their substantive views seemed better to me, though of course this is more controversial (and this fact leads me to reduce my confidence in my own views considerably). Notably, the above draws from a reference class of people who were already quite engaged with Labour politics; things may be different (and perhaps worse) for the class of EAs who were not Labour party members, but who were persuaded their vote would be valuable by a forum post.

It also seems possible that EAs' votes being positive in expectation holds true for general elections, where the choices are starker, there is generally more consensus among EAs, and their votes are being compared against a wider reference class, but does not hold for more select votes on more nuanced issues, where the comparison is against relatively engaged and informed voters.

Comment by david_moss on [Link] Aiming for Moral Mediocrity | Eric Schwitzgebel · 2020-01-03T18:44:03.592Z · score: 5 (4 votes) · EA · GW

The first two sentences of his article "Aiming For Moral Mediocrity" are:

I have an empirical thesis and a normative thesis. The empirical thesis is: Most people aim to be morally mediocre. [I'm including this as a general reference for other readers, since you seem to have read the article yourself]

I take the fact that people systematically evaluate themselves as being significantly (morally) better than average as strong evidence against the claim that people are aiming to be morally mediocre. If people systematically believed themselves to be better than average and were aiming for mediocrity, then they could (and would) save themselves effort and reduce their moral behaviour until they no longer thought themselves to be above average.

Note that the evidence Schwitzgebel cites for his empirical thesis doesn't show that people behave in a morally mediocre way any more than it shows that people aim to be morally mediocre: it shows that people's behaviour goes up or down when you tell them that a reference class is behaving much better or worse, but not that most people's behaviour is anywhere near the mediocre reference point. For example, in Cialdini et al. (2006), 5% of people took wood from a forest when told that "the vast majority of people did not" and 7.92% did when told that "many past visitors" had (which, as it happened, was not a significant difference). Unfortunately, the reference points "vast majority" and "many" are vague, but this doesn't suggest that most people are behaving anywhere near the mediocre reference point.
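(As an aside, a minimal sketch of the kind of two-proportion test involved in that comparison is below. The group sizes are hypothetical placeholders chosen purely for illustration, not the actual Cialdini et al. sample sizes, so the numbers only illustrate the shape of the calculation.)

```python
# Illustrative two-proportion z-test for a difference like 5% vs 7.92%.
# NOTE: the group sizes below are assumed for illustration only; they are
# not the actual Cialdini et al. (2006) sample sizes.
from statsmodels.stats.proportion import proportions_ztest

n_a, n_b = 200, 200                      # hypothetical group sizes
counts = [round(0.05 * n_a),             # ~5% took wood in condition A
          round(0.0792 * n_b)]           # ~7.92% took wood in condition B
nobs = [n_a, n_b]

z_stat, p_value = proportions_ztest(counts, nobs)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# With groups of this (assumed) size, a 5% vs ~8% gap typically does not
# reach significance, consistent with the null result described above.
```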

I recognise that Schwitzgebel acknowledges this "gap" between his evidence and his thesis in section 4, but I think he fails to appreciate the extent of the gap (near total), or that the evidence he cites can actually be seen as evidence against his thesis, if we infer on the basis of these results that most people don't seem to be acting in line with the mediocre reference point.


In the "aiming for a B+" section you cite he actually seems to shift quite a bit to be more in line with my claim.

Here he suggests that "B+ probably isn’t low enough to be mediocre, exactly. B+ is good. It’s just not excellent. Maybe, really, instead of aiming for mediocrity, most people aim for something like B+ – a bit above mediocre, but shy of excellent." This is in line with my claim that people take themselves to be above average morally and aim to keep sailing along at that level, but quite different from his earlier claim that people "calibrate toward approximately the moral middle" and aim to be "so-so."

He reconciles this with the claim that people think of themselves as, and aim to be, above average (and "good") by saying that "most people who think they are aiming for B+ are in fact aiming lower." His passage doesn't make entirely clear what he means by that.

In the first instance, he seems to suggest that people are simply mistaken about where they are really aiming (he gives the example of a student who professes to aim for a B+, but won't work harder if they get a C). But I don't see any reason to think that people are systematically mistaken about what moral standard they are really aiming at.

However, in a later passage he says "when I say that people aim for mediocrity, I mean not that they aim for mediocrity-by-their-own rationalized-self-flattering-standards. I mean that they are calibrating toward what is actually mediocre." Elsewhere he also says "It is also important here to use objective moral standards rather than people’s own moral standards." It's slightly unclear to me whether he means to refer to what is mediocre according to objective descriptive standards of how people actually behave, or according to objective normative standards, i.e. what (Schwitzgebel thinks) is actually morally mediocre. If it's the former, we are back to the claim that although people think they are morally good and think they are aiming for morally good behaviour (according to their own standards), they actually aim their behaviour towards the median behaviour in their reference class (which I don't think we have any evidence for). If it's the latter, then it's just the claim that the level of behaviour which most people actually end up approximating is mediocre (according to Schwitzgebel), which isn't a very interesting thesis to me.

Comment by david_moss on The Center for Election Science Year End EA Appeal · 2020-01-03T08:36:44.135Z · score: 1 (1 votes) · EA · GW

Thanks for the clarification, I strongly agree with the position described in this comment.

Comment by david_moss on [Link] Aiming for Moral Mediocrity | Eric Schwitzgebel · 2020-01-03T08:29:53.879Z · score: 6 (5 votes) · EA · GW

The evidence cited (people's behaviour is influenced by the behaviour of their peers) doesn't offer any evidence in favour of the "moral mediocrity thesis" (people aim to be morally mediocre).

I find the "slightly better than average" thesis more likely: people regard themselves as better than average morally (as they do in other domains, but even more strongly). And this view has actual empirical support e.g. Tappin and McKay (2017).

Comment by david_moss on EA Survey 2018 Series: Donation Data · 2020-01-01T18:19:44.999Z · score: 1 (1 votes) · EA · GW

Thanks for asking. Unfortunately, there weren't any climate-change-specific charities among the 15 specific charities which we included as default options for people to write in their donation amounts. That said, among the "Other" write-in responses, there were 42/474 (8.86%) mentions of Cool Earth, so that was clearly a popular choice. There were no other frequently mentioned climate change charities.

As it happens, people who selected Climate Change as their top cause area also donated substantially less (median $358).

Comment by david_moss on The Center for Election Science Year End EA Appeal · 2019-12-31T07:42:45.607Z · score: 4 (3 votes) · EA · GW

On future generations, I favor thinking about possible institutional reforms which directly incentivize greater regard for future generations.

I'm curious why you say this given that you earlier noted the problem that a large part of the electorate "are in all likelihood systematically mistaken about the sort of policies that would advance their interests." Making people give more regard to future generations seems to be of extremely unclear value if they are likely to be systematically mistaken about what would serve the interests of future generations. This seems like a consideration in favour of interventions which aim to improve the quality of decision-making (e.g. via deliberative democracy initiatives) vs those which try to directly make people's decisions more about the far future (although of course, these needn't be done in isolation). But perhaps I am simply misunderstanding what you mean by "directly incentivize greater regard for future generations"?

Comment by david_moss on We're Rethink Priorities. AMA. · 2019-12-18T18:53:58.386Z · score: 4 (3 votes) · EA · GW

It sounds like you're thinking mostly about the animal sentience research, where I know there has been a lot of engagement with outside academic experts, but fwiw the empirical studies I work on also received a lot of external review from academics. They overlap considerably in method and content with my academic work (indeed, one of these projects is an academic collaboration, and the other was an academic project I had been working on previously that I decided made more sense to do under Rethink Priorities), and a lot of the researchers in the EAA space have backgrounds as academic researchers, so it's quite easy to find relevant expertise.

Comment by david_moss on Interaction Effect · 2019-12-16T16:49:28.724Z · score: 21 (8 votes) · EA · GW

Let's say someone goes into strategic AI research at the Future of Humanity Institute because this is proposed to be one of the most impactful career paths there is. In aiming for that career this person relied on the labour of several teachers. When the researcher is sick, they rely on the labour of doctors...

This doesn't seem to pose so much of a problem if you are trying to rank what is most valuable on the margin. Suppose every human activity is dependent on having at least one doctor and at least one farmer producing food, such that these are completely necessary for any other job to take place. It doesn't follow that we couldn't determine which job it would be most valuable to have one additional person working in. For example, if we already have enough doctors or farmers, even if these jobs are entirely necessary, we could still say that it is more valuable for a further person to work in a different field.

I think you've basically captured this with your artist example, although it's worth noting explicitly that how important art is on average is different from its value on the margin, i.e. we could think that art, or being a doctor, or whatever, is the single most valuable human activity (on average) and still think that it would be more important for a particular person to go and work in another activity.
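A toy numerical sketch of that average-versus-margin distinction (all numbers invented purely for illustration):

```python
# Toy illustration of average vs. marginal value (all numbers invented).
# A field can have enormous total and average value while the value of
# adding one more person to it is small once it is already well staffed.
import math

def total_value(scale: float, n: int) -> float:
    """Total value produced by a field with n workers (diminishing returns)."""
    return scale * math.log(1 + n)

def marginal_value(scale: float, n: int) -> float:
    """Extra value added by the (n+1)th worker."""
    return total_value(scale, n + 1) - total_value(scale, n)

# A 'necessary' and highly valuable but crowded field vs. a smaller,
# less crowded one.
crowded_scale, crowded_n = 1000.0, 100_000
neglected_scale, neglected_n = 10.0, 10

print(marginal_value(crowded_scale, crowded_n))      # ~0.01: tiny marginal gain
print(marginal_value(neglected_scale, neglected_n))  # ~0.87: larger marginal gain
```

Even though the first field dwarfs the second in total (and average) value, one additional person adds more value at the margin in the second.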