Posts

EA Survey Series 2019: EA cities and the cost of living 2020-07-06T08:39:45.572Z · score: 46 (22 votes)
EA Survey Series 2019: How many EAs live in the main EA hubs? 2020-07-06T08:39:22.777Z · score: 45 (14 votes)
EA Survey 2019 Series: How many people are there in the EA community? 2020-06-26T09:34:11.051Z · score: 71 (30 votes)
Fewer but poorer: Benevolent partiality in prosocial preferences 2020-06-16T07:54:04.065Z · score: 36 (21 votes)
EA Survey 2019 Series: Community Information 2020-06-10T16:26:59.250Z · score: 84 (27 votes)
EA Survey 2019 Series: How EAs Get Involved in EA 2020-05-21T16:28:12.079Z · score: 109 (38 votes)
Empathy and compassion toward other species decrease with evolutionary divergence time 2020-02-21T15:31:26.309Z · score: 37 (17 votes)
EA Survey 2018 Series: Where People First Hear About EA and Influences on Involvement 2019-02-11T06:05:05.829Z · score: 34 (17 votes)
EA Survey 2018 Series: Group Membership 2019-02-11T06:04:29.333Z · score: 34 (13 votes)
EA Survey 2018 Series: Cause Selection 2019-01-18T16:55:31.074Z · score: 69 (29 votes)
EA Survey 2018 Series: Donation Data 2018-12-09T03:58:43.529Z · score: 83 (38 votes)
EA Survey Series 2018 : How do people get involved in EA? 2018-11-18T00:06:12.136Z · score: 50 (28 votes)
What Activities Do Local Groups Run 2018-09-05T02:27:25.247Z · score: 23 (18 votes)

Comments

Comment by david_moss on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-10-23T09:14:31.842Z · score: 3 (2 votes) · EA · GW

Thanks again for writing it! It's nudged me to go back and look at our data again when I have some time. I expect that we'll probably replicate at least some of your broad findings.

Comment by david_moss on Evidence on correlation between making less than parents and welfare/happiness? · 2020-10-12T19:48:39.703Z · score: 10 (7 votes) · EA · GW

There is research on the links between downward social mobility and happiness, however:


These empirical studies show little consensus when it comes to the consequences of intergenerational social mobility for SWB: while some authors suggest that upward mobility is beneficial for SWB (e.g. Nikolaev and Burns, 2014), others find no such relationship (e.g. Zang and de Graaf, 2016; Zhao et al., 2017). In a similar vein, some researchers suggest that downward mobility is negatively associated with SWB (e.g. Nikolaev and Burns, 2014), while others do not (e.g. Zang and de Graaf, 2016; Zhao et al., 2017)

This paper suggests that differences in culture may influence the connection between downward social mobility and happiness:
 

the United States is an archetypical example of a success-oriented society in which great emphasis is placed on individual accomplishments and achievement (Spence, 1985). The Scandinavian countries are characterized by more egalitarian values (Schwartz, 2006; Triandis, 1996, Triandis and Gelfand, 1998; see also Nelson and Shavitt, 1992)...A great cultural salience of success and achievement may make occupational success or failure more important markers for people’s SWB. 

And they claim to find this:

In line with a previous study from Nikolaev and Burns (2014) we found that downward social mobility is indeed associated with lower SWB in the United States. This finding provides evidence for the “falling from grace hypothesis” which predicts that downward social mobility is harmful for people’s well-being. However, in Scandinavian Europe, no association between downward social mobility and SWB was found. This confirms our macro-level contextual hypothesis for downward social mobility: downward social mobility has greater consequences in the United States than in the Scandinavian countries.

This is, of course, just one study so not very conclusive.

Comment by david_moss on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-10-01T14:10:10.865Z · score: 7 (5 votes) · EA · GW

Thus, intellectually curious people—those who are motivated to explore and reflect upon abstract ideas—are more inclined to judge the morality of behaviors according to the consequences they produce.

This is probably mentioned in the paper, but performance on the Cognitive Reflection Test, Need for Cognition, Actively Open-minded Thinking, and numeracy have also each been found to be associated with utilitarianism. Note that I don't endorse all of these papers' conclusions (for one thing, some use the simple 'trolley paradigm', which I think likely isn't capturing utilitarianism very well).

Notably, when we measured Need for Cognition in the EA Survey, respondents scored ludicrously highly, with the maximum response for each item being the modal response.

Comment by david_moss on What actually is the argument for effective altruism? · 2020-09-28T14:51:51.847Z · score: 2 (1 votes) · EA · GW

I agree that if the first two premises were true, but the third were false, then EA would still be important in a sense, it's just that everyone would already be doing EA

Just to be clear, this is only a small part of my concern about it sounding like EA relies on assuming (and/or that EAs actually do assume) that the things which are high impact are not the things people typically already do.

One way this premise could be false, other than everyone being an EA already, is if it turns out that the kinds of things people who want to contribute to the common good typically do are actually the highest impact ways of contributing to the common good: i.e. we investigate, as effective altruists, and it turns out that the kinds of things people typically do to contribute to the common good are (the) high(est) impact. [^1]

To the non-EA reader, it likely wouldn't seem too unlikely that the kinds of things they typically do are actually high impact.  So it may seem peculiar and unappealing for EAs to just assume [^2] that the kinds of things people typically do are not high impact.

[^1] A priori, one might think there are some reasons to presume in favour of this (and so against the EA premise), i.e. James Scott type reasons, deference to common opinion etc.

[^2] As noted, I don't think you actually do think that EAs should assume this, but labelling it as a "premise" in the "rigorous argument for EA" certainly risks giving that impression.

Comment by david_moss on Nathan Young's Shortform · 2020-09-28T09:02:28.979Z · score: 3 (2 votes) · EA · GW

This is true, although for whatever reason the responses to the podcast question seemed very heavily dominated by references to MacAskill. 

This is the graph from our original post, showing every commonly mentioned category, not just the host (categories are not mutually exclusive). I'm not sure what explains why MacAskill really heavily dominated the Podcast category, while Singer heavily dominated the TED Talk category.

Comment by david_moss on What actually is the argument for effective altruism? · 2020-09-27T08:35:53.853Z · score: 18 (12 votes) · EA · GW

Novelty: The high-impact actions we can find are not the same as what people who want to contribute to the common good typically do.

 

It's not entirely clear to me what this means (specifically what work the "can" is doing). 

If you mean that it could be the case that we find high impact actions which are not the same as what people who want to contribute to the good would typically do, then I agree this seems plausible as a premise for engaging in the project of effective altruism.

If you mean that the premise is that we actually can find high impact actions which are not the same as what people who want to contribute to the common good typically do, then it's not so clear to me that this should be a premise in the argument for effective altruism. This sounds like we are assuming what the results of our effective altruist efforts to search for the actions that do the most to contribute to the common good (relative to their cost) will be: that the things we discover to be high impact will be different from what people typically do. But, of course, it could turn out to be the case that actually the highest impact actions are those which people typically do (our investigations could turn out to vindicate common sense, after all), so it doesn't seem like this is something we should take as a premise for effective altruism. It also seems in tension with the idea (which I think is worth preserving) that effective altruism is a question (i.e. effective altruism itself doesn't assume that particular kinds of things are or are not high impact).

I assume, however, that you don't actually mean to state that effective altruists should assume this latter thing to be true or that one needs to assume this in order to support effective altruism. I'm presuming that you instead mean something like: this needs to be true for engaging in effective altruism to be successful/interesting/worthwhile. In line with this interpretation, you note in the interview something that I was going to raise as another objection: that if everyone were already acting in an effective altruist way, then it would be likely false that the high impact things we discover are different from those that people typically do.

If so, then it may not be false to say that "The high-impact actions we can find are not the same as what people who want to contribute to the common good typically do", but it seems bound to lead to confusion, with people misreading this as EAs assuming that the highest impact things are not what people typically do. It's also not clear that this premise needs to be true for the project of effective altruism to be worthwhile and, indeed, a thing people should do: it seems like it could be the case that people who want to contribute to the common good should engage in the project of effective altruism simply because it could be the case that the highest impact actions are not those which people would typically do.

Comment by david_moss on Nathan Young's Shortform · 2020-09-26T09:24:01.380Z · score: 16 (6 votes) · EA · GW

This seems quite likely given EA Survey data where, amongst people who indicated they first heard of EA from a Podcast and indicated which podcast, Sam Harris's podcast strongly dominated all other podcasts.

More speculatively, we might try to compare these numbers to people hearing about EA from other categories. For example, by any measure, the number of people in the EA Survey who first heard about EA from Sam Harris' podcast specifically is several times the number who heard about EA from Vox's Future Perfect. As a lower bound, 4x more people specifically mentioned Sam Harris in their comment than selected Future Perfect, but this is probably dramatically undercounting Harris, since not everyone who selected Podcast wrote a comment that could be identified with a specific podcast. Unfortunately, I don't know the relative audience size of Future Perfect posts vs Sam Harris' EA podcasts specifically, but that could be used to give a rough sense of how well the different audiences respond.

Comment by david_moss on Thomas Kwa's Shortform · 2020-09-25T14:15:03.092Z · score: 25 (7 votes) · EA · GW

Thanks for writing this.

I also agree that research into how laypeople actually think about morality is probably a very important input into our moral thinking. I mentioned some reasons for this in this post for example. This project on descriptive population ethics also outlines the case for this kind of descriptive research. If we take moral uncertainty and epistemic modesty/outside-view thinking seriously, and if on the normative level we think respecting people's moral beliefs is valuable either intrinsically or instrumentally, then this sort of research seems entirely vital.

I also agree that incorporating this data into our considered moral judgements requires a stage of theoretical normative reflection, not merely "naively deferring" to whatever people in aggregate actually believe, and that we should probably go back and forth between these stages to bring our judgements into reflective equilibrium (or some such).

That said, it seems like what you are proposing is less a project and more an enormous research agenda spanning several fields of research, a lot of which is ongoing across multiple disciplines, though much of it is in its early stages. For example, there is much work in moral psychology, which tries to understand what people believe, and why, at different levels (influential paradigms include Haidt's Moral Foundations Theory and Oliver Scott Curry's Morality as Cooperation / Moral Molecules theory); there is a whole new field of sociology of morality (see also here); anthropology of morality is a long-standing field; and experimental philosophy has just started to seek to empirically examine how people think about morality too.

Unfortunately, I think our understanding of folk morality remains exceptionally unclear and in its very early stages. For example, despite a much-touted "new synthesis" between different disciplines and approaches, there remains much distance between different approaches, to the extent that people in psychology, sociology and anthropology are barely investigating the same questions >90% of the time. Similarly, experimental philosophy of morality seems utterly crippled by validity issues (see my recent paper with Lance Bush here). There is also, I have argued, a need to gather qualitative data, in part due to the limitations of survey methodology for understanding people's moral views, which experimental philosophy and most psychology have essentially not started to do at all.

I would also note that there is already cross-cultural moral research on various questions, but this is usually limited to fairly narrow paradigms: for example, aside from those I mentioned above, the World Values Survey's focus on Traditional/Secular-Rational and Survival/Self-expressive values, research on the trolley problem (which also dominates the rest of moral psychology), or the Schwartz Values Survey. So these lines of research don't really give us insight into people's moral thinking in different cultures as a whole.

I think the complexity and ambition involved in measuring folk morality becomes even clearer when we consider what is involved in studying specific moral issues. For example, see Jason Schukraft's discussion of how we might investigate how much moral weight the folk ascribe to the experiences of animals of different species.

There are lots of other possible complications with cross-cultural moral research. For example, there is some anthropological evidence that the western concept of morality is idiosyncratic and does not overlap particularly neatly with that of other cultures; see here.

So I think, given this, the problem is not simply that it's "too expensive", as we might say of a really large survey, but that it would be a huge endeavour where we're not even really clear about much of the relevant theory and categories. Also, training a significant number of EA anthropologists who are competent in ethnography and the relevant moral philosophy would be quite a logistical challenge.

---

That said, I think there are plenty of more tractable research projects that one could do roughly within this area. For example, more large-scale representative surveys examining people's views and their predictors across a wider variety of issues relevant to effective altruism/prioritisation would be relatively easy to do with a budget of <$10,000, by existing EA researchers. This would also potentially contribute to understanding influences on the prioritisation of EAs, rather than just that of non-EAs, which would also plausibly be valuable.

Comment by david_moss on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-09-25T13:47:11.938Z · score: 2 (1 votes) · EA · GW

Thanks!

Comment by david_moss on Yale EA Virtual Fellowship Retrospective - Summer 2020 · 2020-09-25T11:18:59.509Z · score: 17 (3 votes) · EA · GW

Thanks for the post! This definitely isn't addressed at you specifically (I think this applies to all EA groups and orgs), so I hope this doesn't seem like unfairly singling you out over a very small part of your post, but I think EAs should stop calculating and reporting the 'NPS score' when they ask NPS or NPS-style questions. 

I assume you calculated the NPS score in the 'standard' way, i.e. asking people “Would you recommend the Fellowship to a friend?” on a 0-10 or 1-10 scale, and subtracting the percentage of people who answered with a 6 or lower ("Detractors") from the percentage of people who answered with a 9 or 10 ("Promoters"). The claim behind the NPS system is that people who give responses within these ranges are qualitatively different 'clusters' (and that people responding with a 7-8 are a distinct cluster, "Passives", who basically don't matter and so don't figure in the NPS score at all) and that just subtracting the percentage of one cluster from another is the "easiest-to-understand, most effective summary of how a company [is] performing in this context."

Unfortunately, it does not seem to me that there's a sound empirical basis for analysing an NPS-style scale in this way (and the company behind it is quite untransparent about this basis; see discussion here). This way of analysing responses to a scale is pretty unusual and obscures most of the information about the distribution of responses, which an EA audience could easily understand. For example, it would be pretty easy to depict the full distribution of responses, as we did in the EA Survey Community Information post.

And it seems like calculating the mean and median response would give a more informative, but equally easy to understand, summary of performance on this measure (more so than the NPS score, which, for example, completely ignores whether people respond with a 0 or a 6). This would also allow easy significance testing of the differences between events/groups.
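To make the contrast concrete, here is a minimal sketch (in Python, with made-up responses rather than the Fellowship's actual data) of how the NPS calculation throws away distributional information that the mean and median retain:

```python
import statistics

# Hypothetical 0-10 "Would you recommend?" responses (illustrative only)
responses = [10, 9, 9, 8, 8, 7, 7, 6, 5, 3]

promoters = sum(r >= 9 for r in responses)   # "Promoters": 9-10
detractors = sum(r <= 6 for r in responses)  # "Detractors": 0-6; 7-8 ("Passives") are ignored
nps = 100 * (promoters - detractors) / len(responses)

print(f"NPS: {nps:.0f}")                          # identical whether detractors answered 0 or 6
print(f"Mean: {statistics.mean(responses):.2f}")  # uses every response
print(f"Median: {statistics.median(responses)}")
```

In this toy example the NPS comes out the same for very different underlying distributions, whereas the mean and median shift with them.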

Comment by david_moss on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-09-25T09:26:00.895Z · score: 54 (18 votes) · EA · GW

Thanks a lot for looking at this! I'm also fascinated by possible links between Big 5 and other personality/psychometric measures and EA-related judgements, and think this kind of analysis could be quite informative about EA thinking, and perhaps prioritisation in particular. To echo another commenter, I also like the graphs.

Unfortunately, we had to drop all of our psychometric measures last year due to space constraints, but I hope that we may be able to reintroduce them at some point.

I recall that when I looked into possible relationships between our Big 5 measures and cause prioritisation, treating the response categories as an ordinal scale, I found mostly null results. One reason might have been the low reliability of some of the measures: we used an established short-form measure of the Big 5, the Ten Item Personality Inventory, which often seems to face low reliability issues.

It's interesting that you found such strong results looking at the difference between people who indicated that the cause should receive no resources versus those who indicated that it is the top priority. One thing worth bearing in mind is that these are very small proportions of the responses overall. Unfortunately, the Forum now formats our 2018 post weirdly, with all the images really small, but you can see the distribution here:

It seems possible to me that 'extreme responders', indicating that a cause should receive no resources or that it's the top priority, might be a slightly unusual group overall (a priori, I might have expected 'no resources' respondents to be less agreeable and less conscientious). The results might also be influenced a little by the minority of respondents who selected multiple causes as "top cause", since these would disproportionately appear in your 'top cause' category.

It also seems likely that there might be confounders due to demographic differences which are associated with differences in Big 5. For example, there are big gender differences, which vary across the lifespan (see here), i.e. women are higher in Conscientiousness, Agreeableness, Openness, and Extraversion, although this varies at the aspect level (you can see the age x gender population norms for the TIPI specifically here and here), and we know that there were dramatic gender differences in cause prioritisation. 

For example, women were much more supportive of Climate Change than men in our survey, so if women are more Conscientious, this would plausibly show up as a difference in Conscientiousness scores between supporters and opponents of Climate Change as a cause.  So presumably you'd want to control for other differences in order to determine whether the difference in personality traits seems to predict the difference in cause prioritisation. Of course there could be other differences which are relevant, for example, I seem to recall that I found differences in the personality traits of LessWrong members vs non-LessWrong members (which could also be confounded by gender).
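For illustration, a minimal sketch of that kind of adjustment (in Python with statsmodels, using synthetic data and hypothetical column names rather than the actual survey variables) might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic, purely illustrative data; column names are hypothetical, not the survey's
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "conscientiousness": rng.normal(5, 1.5, n),      # TIPI-style trait score
    "gender": rng.choice(["female", "male"], n),
    "age": rng.integers(18, 70, n),
})
# Simulate support for a cause that depends on gender as well as the trait
logit_p = -2 + 0.2 * df["conscientiousness"] + 0.8 * (df["gender"] == "female")
df["supports_cause"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Including gender and age as covariates shows whether the trait predicts
# cause support over and above those demographic differences
model = smf.logit("supports_cause ~ conscientiousness + C(gender) + age", data=df).fit()
print(model.summary())
```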

Comment by david_moss on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-09-25T08:54:44.689Z · score: 5 (4 votes) · EA · GW

There are lots of different ways to control for multiple comparisons: https://en.wikipedia.org/wiki/Multiple_comparisons_problem#Controlling_procedures
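As a rough illustration (assuming Python with statsmodels available, and entirely made-up p-values), several of the standard controlling procedures can be applied in a single call:

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from many trait-by-cause comparisons (illustrative only)
p_values = [0.001, 0.004, 0.020, 0.030, 0.045, 0.200, 0.600]

# Compare family-wise error rate control (Bonferroni, Holm) with
# false-discovery-rate control (Benjamini-Hochberg)
for method in ["bonferroni", "holm", "fdr_bh"]:
    reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
    print(method, reject, p_adj.round(3))
```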

Comment by david_moss on Denise_Melchin's Shortform · 2020-09-17T18:18:01.388Z · score: 8 (4 votes) · EA · GW

A similar analogy with the fossil fuel industry is mentioned by Stuart Russell (crediting Danny Hillis) here:

let’s say the fossil fuel industry as if it were an AI system. I think this is an interesting line of thought, because what he’s saying basically and — other people have said similar things — is that you should think of a corporation as if it’s an algorithm and it’s maximizing a poorly designed objective, which you might say is some discounted stream of quarterly profits or whatever. And it really is doing it in a way that’s oblivious to lots of other concerns of the human race. And it has outwitted the rest of the human race.

It also seems that "things go really badly if you optimise straightforwardly for one goal" bears similarities to criticisms of central planning, or utopianism in general, though.

Comment by david_moss on Some thoughts on EA outreach to high schoolers · 2020-09-16T13:42:37.007Z · score: 21 (5 votes) · EA · GW

When we plotted average engagement against age first involved, the peak was at 20. People who first got involved at age 18 were less involved on average, and had a similar average level of engagement of people who first got involved at age 40. 

 

Just for the benefit of people who haven't seen the graph, we also split this by cohort (which year they first heard of EA) and there was no cohort for which the peak was younger than 20.

It's hard to know what to draw from this (younger and older people probably get less engaged because the community is less well set up for them)

I think the fact that we see this effect across cohorts is some evidence for age itself driving the effect. People who joined (when they were young) in earlier cohorts will be at least in their early 20s and maybe almost 30 by now. So you might think that they will now have been in EA during the ages which, ex hypothesi, the EA community is better set up for, and it seems like they are still, on average, lower in self-reported engagement. Of course, it could also be that how well the EA community is set up for you when you first hear of it is really important, and so people who first hear about it earlier than university age never recover, but it's not clear to me what the mechanism would be there.

Of course, we are talking about a relatively small group of people who first hear about EA at these young ages: about 15% first heard of EA when they were younger than 20 (but that comfortably includes university age), while <5% first heard of EA when they were younger than 18 (and this is probably an over-estimate, because age-first-heard is calculated from the reported year when people first heard and their date of birth, so there's a bit of wiggle room as to exactly how old they were when they first heard).

Comment by david_moss on EA Survey Series 2019: How many EAs live in the main EA hubs? · 2020-09-03T09:38:08.097Z · score: 4 (2 votes) · EA · GW

Thanks for the comment Max. 

EA density is definitely something worth considering. We reported this for the main EA cities in the 2018 Geography article (graph below) and you can see the graph for 2019 below that.

Of course, as I discuss in the 'What counts as an EA Hub?' section, what characteristics matter in identifying a hub will depend on your practical purposes: as you say, density probably matters more if you are looking for somewhere to live and want to bump into random EAs on the street, but I would imagine less so if you are looking to found an organisation and want an accessible pool of people to hire from.

I imagine that if you are thinking about travel time and likelihood of bumping into people randomly, functional density is probably also higher in some of these cases, due to EA populations already being very localised within certain parts of cities.

Comment by david_moss on More empirical data on 'value drift' · 2020-09-03T09:23:43.433Z · score: 8 (5 votes) · EA · GW

Cause preference (i.e. prioritising different causes than the EA community or thinking that the EA community focused too much on particular causes and ignored others) was the second most commonly cited reason among people who reported declining interest in EA.

https://forum.effectivealtruism.org/posts/F6PavBeqTah9xu8e4/ea-survey-2019-series-community-information#Reasons_people_become_less_engaged

Comment by david_moss on More empirical data on 'value drift' · 2020-09-03T09:19:25.730Z · score: 11 (3 votes) · EA · GW

Yeah, much of this is in our Community Information post, where we:

  • asked an 'NPS' question about EA, asked for qualitative information about positive/negative experiences of the community and examined predictors
  • asked about barriers to becoming more involved in EA
  • asked about reasons for people's interest in EA declining or increasing
  • asked about what factors were important for retaining people in EA
  • asked about why people who the respondent knew dropped out

I'm pretty sceptical about the utility of Net Promoter Score in the classical sense for EA. I don't think there's any good evidence for the prescribed way of calculating Net Promoter Score (ignoring respondents who answer in the upper-middle of the scale, and then subtracting the proportion of people who selected one of the bottom 7 response levels from the proportion who selected one of the top two response levels). And, as I mentioned in our original post, its validity and predictive power have been questioned. Furthermore, one of the most common uses is comparing the NPS score of an entity to an industry benchmark (e.g. the average scores for other companies in the same industry), but it's very unclear what reference class would be relevant for EA, the community as a whole, so it's fundamentally not clear whether EA's NPS score is good or bad. In the specific case of EA, I also suspect that the question of how excited one would be to recommend EA to a suitable friend may well be picking up on attitudes other than satisfaction with EA, i.e. literally how people would feel about recommending EA to someone. This might explain why the people with the highest 'NPS' scores (we just treated the measure as a straightforward ordinal variable in our own analyses) were people who had just joined EA, and why scores fairly reliably became lower over time.

Comment by david_moss on My Meta-Ethics and Possible Implications for EA · 2020-08-11T14:37:19.090Z · score: 3 (2 votes) · EA · GW

I’m afraid now the working week has begun again I’m not going to have so much time to continue responding, but thanks for the discussion.

I'm not sure if I know what you're talking about by 'impure things'. Sewage perhaps? I'm not sure what it means to have a moral aversion to sewage. Maybe you mean something like the aversion to the untouchable caste? I do not know enough about that to comment.

I’m thinking of the various things which fall under the Purity/Disgust (or Sanctity/Degradation) foundation in Haidt’s Moral Foundations Theory. This includes a lot of things related to not eating or otherwise exposing yourself to things which elicit disgust, as well as a lot of sexual morality. Rereading the law books of the Bible gives a lot of examples. The sheer prevalence of these concerns in ancient morality, especially as opposed to modern concerns like promoting positive feeling, is also quite telling IMO. For more on the distinctive role of disgust in morality see here or here.

Let me stress again that I do not make a distinction between universalizable preferences which are 'basic dispositions' and those which I refer to as meta-reactions. These should be treated on an equal footing.

I’m not sure how broadly you’re construing ‘meta-reactions’, i.e. would this include basically any moral view which a person might reach based on the ordinary operation of their intuitions and reason and would all of these be placed on an equal footing? If so then I’m inclined to agree, but then I don’t think this account implies anything much at the practical level (e.g. how we should think about animals, population ethics etc.).

I argue that what we do when disagreeing is emphasizing various parts of SMB to the other.

I may agree with this if, per my previous comment, SMB is construed very broadly i.e. to mean roughly emphasising or making salient shared moral views (of any kind) to each other and persuading people to adopt new moral views. (See Wittgenstein on conversion for discussion of the latter).

If we agree that SMB plays a crucial role in lending meaning to moral disagreement, then we can understand the nature of moral disagreement without appeal to any 'abstruse reasoning'... In this picture of moral language = universalizable preferences + elicit disapproval + SMB subset, where does abstruse reasoning enter the picture? It only enters when a philosopher sees a family resemblance between moral disagreement and other sorts of epistemological disagreement and thus feels the urge to bring in talk of abstruse reasoning.

I think this may be misconstruing my reference to “abstruse reasoning” in the claim that “It doesn’t seem to me like we have any particular reason to privilege these basic intuitive responses as foundational, in cases where they conflict with our more abstruse reasoning.” Note that I don’t say anything about abstruse reasoning being “necessary to understand the nature of moral disagreement.”

I have in mind cases of moral thinking, such as the example I gave where we override disgust responses based on reflecting that they aren’t actually morally valuable (I think this would include cases like population ethics and judging that whether animals matter depends on whether they have the right kinds of capacities).

It now sounds like you might think that such reflections are on an “equal footing” with judgments that are more immediately related to basic intuitive responses, in which case there may be little or no remaining disagreement. There may be some residual disagreement if you think that such relatively rarefied reflections can’t count as meta-reflections/legitimate moral reasoning, but I don’t think that is the view which you are defending now. My sense is that more or less any moral argument could result from a process of people reflecting on their views and the views of others and seeking consistency, in which case it doesn’t seem to me like any line of moral argument is ruled out or called into question by your metaethical account. That is fine in my view since I think that it’s appropriate that philosophical reflections should ‘leave everything as it is.’

Comment by david_moss on CEA Mid-year update (2020) · 2020-08-11T10:54:51.982Z · score: 27 (8 votes) · EA · GW

Demographics:  According to Google Analytics, 41% of Forum viewers are female (higher than the proportion of community members who are female), but we believe* that males make up a higher proportion of authors than their proportion in the community...

*Many authors are anonymous, so we aren't certain of this.

 

For what it's worth, EA Survey 2019 data suggests this too. Obviously classifying Forum posts directly gives a more comprehensive sample, but as you note it has the issue of some authors being of indeterminate gender. The chi-square tests were, however, not significant for posting on the EA Forum (p=0.052) or writing about EA (not on the EA Forum) (p=0.057), while the others were p<0.001.
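For anyone wanting to run the same kind of check on their own data, this is roughly what such a test looks like (a sketch in Python with scipy, using invented counts rather than the actual survey numbers):

```python
from scipy.stats import chi2_contingency

# Hypothetical counts (illustrative only): rows = gender, columns = did / did not post on the Forum
table = [[120, 480],   # female: posted, did not post
         [260, 640]]   # male:   posted, did not post

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```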

Comment by david_moss on My Meta-Ethics and Possible Implications for EA · 2020-08-09T18:55:51.418Z · score: 2 (1 votes) · EA · GW

Yes, it's hard to point to exactly what I'm talking about, and perhaps even somewhat speculative since the modern world doesn't have too much suffering. Let me highlight cases that could change my mind: Soldiers often have PTSD, and I suspect some of this is due to the horrifying nature of what they see. If soldiers' PTSD was found to be entirely caused by lost friends and had nothing to do with visual experience, I would reduce my credence on this point.

Let me note that I agree (and think it’s uncontroversial) that people often have extreme emotional reactions (including moral reactions) to seeing things like people blown to bits in front of them. So this doesn’t seem like a crux in our disagreement (I think everyone, whatever their metaethical position, endorses this point).

This seems plausible to me, and I don't claim that pleasure/pain serve as the only ostensive root grounding moral language. Perhaps (un)fairness is even more prominent, but nevertheless I claim that this group of ostensive bases (pain, unfairness, etc.) is necessary to understand some of moral language's distinctive features… Perhaps some of these "involuntary immediate reaction"s are best described as reactions to unfairness. For brevity let me refer below to this whole family of ostensive bases by Shared Moral Base, SMB.

OK, so we also agree that people may have a host of innate emotional reactions to things (including, but not limited to valenced emotions).

This is the key point. Why do we express disapproval of others when they don't disapprove of the person who did the immoral act? I claim it's because we expect them to share certain common, basic reactions e.g. to pain, unfairness, etc and when these basic reactions are not salient enough in their actions and their mind, we express disapproval to remind them of SMB… To return to my example of "a world filled with people whose innate biases varied randomly", in that world we would not find it fruitful to disapprove of others when they didn't disapprove of you. Do you not agree that disapproval would have less significance in that world?

I think I responded to this point directly in the last paragraph of my reply. In brief: if no-one could ever be brought to share any moral views, this would indeed vitiate a large part (though not all) of the function of moral language. But this doesn’t mean “that the meaning of the moral terms depends on or involves consensus about the rightness or wrongness of specific moral things.” All that is required is “some degree of people being inclined to respond to the same kinds of things or to be persuaded to share the same attitudes. But this doesn’t require any particularly strong, near-universal consensus or consensus on a particular single thing being morally good/bad.”

To approach this from another angle: suppose people are somewhat capable of being persuaded to share others views and maybe even, in fact, do tend to share some moral views (which I think is obviously actually true), although they may radically disagree to some extent. Now suppose that the meaning of moral language is just something like what I sketched out above (i.e. I disapprove of people who x, I disapprove of those who don’t disapprove of those who x etc.).* In this scenario it seems completely possible for moral language to function even though the meaning of moral terms themselves is (ex hypothesi) not tied up in any way with agreement that certain specific things are morally good/bad.

*As I argued above, I also think that such a language could easily be learned without consensus on certain things being good or bad.

I agree that it would be natural to call "Hurting people is good" a use of moral language on the part of the madman. I only claim that we can have a different, more substantial, kind of disagreement within our community of people who share SMB than we can with the madman cases in which our conversations are founded on SMB have a distinctive character which is of great importance.

Hmm, it sounds like maybe you don’t think that the meaning of moral terms is tied to certain specific things being judged morally good/bad at all, in which case there may be little disagreement regarding this thread of the discussion.

I agree that moral disagreement between people who share some moral presuppositions has something of a distinctive character from discourse between people who don’t share any moral presuppositions. In the real world, of course, there are always some shared background presuppositions (broadly speaking) even if these are not always at all salient to disagreement.

That said, I don’t know whether I endorse your view about the role of the Shared Moral Base. As I noted above, I do think that there are a host of moral reactions which are innate (Moral Foundations, if you will). But I don’t think these or applications of these play an ‘ostensive’ role (I think we have innate dispositions to respond in certain ways intuitively, but our actual judgements and moral theories and concepts get formed in a pretty environmentally and socially contingent way, leading to a lot of fuzziness and indeterminacy). And I don’t privilege these intuitive views as particularly foundational in the philosophical sense (despite the name).

This leads us back into the practical conclusions in your OP. Suppose that a moral aversion to impure, disgusting things is innate (and arguably one of the most basic moral dispositions). It still seems possible that people routinely overcome and override this basic disposition and just decide that impurity doesn’t matter morally and disgusting things aren’t morally bad (perhaps especially when, as in modern industrialised countries, impure things typically don’t really pose much of a threat). It doesn’t seem to me like we have any particular reason to privilege these basic intuitive responses as foundational, in cases where they conflict with our more abstruse reasoning.

Comment by david_moss on My Meta-Ethics and Possible Implications for EA · 2020-08-08T16:30:51.333Z · score: 2 (1 votes) · EA · GW

Apologies in advance for long reply.

When you say "People routinely seem to think" and "People sometimes try to argue", I suspect we're talking past each other. I am not concerned with such learned behaviors, but rather with our innate neurologically shared emotional response to seeing someone suffering. If you see someone dismembered it must be viscerally unpleasant. If you see someone strike your mother as a toddler it must be shocking and will make you cry

Thanks for clarifying. This doesn't change my response though, since I don't think there's a particularly notable convergence in emotional reactions to observing others in pain which would serve to make valenced emotional reactions a particularly central part of the meaning of moral terms. For example, it seems to me like children (and adults) often think that seeing others in pain is funny (cf. Punch and Judy shows or lots of other comedy), fun to inflict, and often well-deserved. And that's just among modern WEIRD children, who tend to be more Harm focused than non-WEIRD people.

Plenty of other things seem equally if not more central to morality (though I am not arguing that these are central, or part of the meaning of moral terms). For example, I think there's a good case that people (and primates for that matter) have innate moral reactions to (un)fairness: if a child is given some ice cream and is happy, but then their sibling is given slightly more ice cream and is happy, they will react with moral outrage and will often demand either levelling down their sibling (at a cost to their pleasure) or even just directly inflicting suffering on their sibling. Indeed, children and primates (as well as adults) often prefer that no-one get anything than that an unjust allocation be made, which seems to count somewhat against any simple account of pleasant experience. I think innate reactions to do with obedience/disobedience and deference to authority, loyalty/betrayal, honesty/dishonesty etc. are equally central to morality and equally if not more prominent in the cases through which we actually learn morality. So it seems a bunch of other innate reactions may be central to morality and often morally mandate others' suffering, so it doesn't seem likely to me that the very meaning of moral terms can be distinctively tied to the goodness/badness of valenced experience. Notably, it seems like a very common feature of children's initial training in morality (until very recently in advanced industrial societies, anyway) was that parents or others directly inflicted pain on children when they did something wrong, and often the thing they did wrong seems to have little or nothing to do with valenced experience, nor is it explained in these terms. This seems hard to square with the meaning of moral terms being rooted in the goodness/badness of valenced experience.

Exciting, perhaps we've gotten to the crux of our disagreement here! How do we learn what cases are have "aptness for disapproval"? This is only possible if we share some initial consensus over what aptness for disapproval involves. I suggest that this initial consensus is the abovementioned shared aversion to physical suffering.

Just to clarify one thing: when I said that "It is morally right that you give me $10" might communicate (among other things) that you are apt for disapproval if you don't give me $10 (which is not implied by saying "I desire that you give me $10"), I had in mind something like the following: when I say "It is morally right that you give me $10" this communicates inter alia that I will disapprove of you if you don't give me $10, that I think it's appropriate for me to so disapprove, that I think others should disapprove of you and I would disapprove of them if they don't etc. Maybe it involves a bunch of other attitudes and practical implications as well. That's in contrast to me just saying "I desire that you give me $10", which needn't imply any of the above. That's what I had in mind by saying that moral terms may communicate that I think you are apt for disapproval if you do something. I'm not sure how you interpreted "apt[ness] for disapproval", but it sounds from your subsequent comments like you think it means something other than what I mean.

I think the fundamental disagreement here is that I don't think we need to learn what specific kinds of cases are (considered) morally wrong in order to learn what "morally wrong" means. We could learn, for example, that "That's wrong!" expresses disapproval without knowing what specific things people disapprove of, and even if literally everyone entirely disagrees about what things are to be disapproved of. I guess I don't really understand why you think that there needs to be any degree of consensus about these first-order moral issues (or what makes things morally wrong) in order for people to learn the meaning of moral terms, or to distinguish moral terms from terms merely expressing desires.

In effect, your task as a toddler is to figure out why your parents sometimes say "that was wrong, don't do that" instead of "I didn't like what you did, don't do that". I suggest the "that was wrong" cases more often involve a shared reaction on your part -- prototypically when your parents are referring to something that caused pain

I agree that learning what things my parents think are morally wrong (or what things they think are morally wrong vs which things they merely dislike) requires generalizing from specific things they say are morally wrong to other things. It doesn't seem to me that learning what it means for them to say that such and such is morally wrong vs what it means for them to say that they dislike something requires that we learn what specific things people (specifically or in general) think morally wrong / dislike.

To approach this from another angle: perhaps the reason why you think that it is essential to learning the meaning of moral terms (vs the meaning of liking/desiring terms) that we learn what concrete things people think are morally wrong and generalise from that, is because you think that we learn the meaning of moral terms primarily from simple ostension, i.e. we learn that "wrong" refers to kicking people, stealing things, not putting our toys away etc. (whereas we learn that "I like this" refers to flowers, candy, television etc.), and we infer what the terms mean primarily just from working out what general category unites the "wrong" things and what unites the "liked" things, and reference to these concrete categories plays a central role in fixing the meaning of the terms.

But I don't think we need to assume that language learning operates in this way (which sounds reminiscent of the Augustinian picture of language described at the beginning of PI). I think we can learn the meaning of terms by learning their practical role: e.g. that "that's morally wrong" implies various practical things about disapproval (including that you will be punished if you do a morally bad thing, that you yourself will be considered morally bad and so face general disapproving attitudes and social censure from others), whereas "I don't like that" doesn't carry those implications. I think we find the same thing for various terms, where we find their meaning consists in different practical implications rather than fixed referents or fixed views about what kinds of things warrant their application (hence people can agree about the meaning of the terms but disagree about which cases they should be applied to: which seems particularly common in morality).

Also, I recognise that you might say "I don't think that the meaning is necessarily set by specific things being agreed to be wrong, but it is set by a specific attitude which people take/reaction which people have, namely a negative attitude towards people experiencing negatively valenced emotions" (or some such). But I don't think this changes my response, since I don't think a shared reaction that is specifically about suffering need be involved to set the meaning of moral terms. I think the meaning of moral terms could consist in distinctive practical implications (e.g. you'll be punished and I would disapprove of others who don't disapprove of you, although of course I think the meaning of moral terms is more complex than this) which aren't implied by mere expressions of desire or distaste etc.

Another way of seeing why the core cases of agreement (aka the ostensive basis) for moral language is so important, is to look at what happens when someone disagrees with this basis: Consider a madman who believes hurting people is good and letting them go about their life is wrong. I suspect that most people believe we cannot meaningfully argue with him.

I agree that it might seem impossible to have a reasoned moral argument with someone who shares none of our moral presuppositions. But I don't think this tells us anything about the meaning of moral language. Even if we took for granted that the meaning of "That's wrong!" was to simply to express disapproval, I think it would still likely be impossible to reason with someone who didn’t share any moral beliefs with us. I think it may simply be impossible in general to conduct reasoned argumentation with someone who we share no agreement about reasons at all.

What seems to matter to me, as a test of the meaning of moral terms, is whether we can understand someone who says "Hurting people is good" as uttering a coherent moral sentence and, as I mentioned before, in this purely linguistic sense I think we can. There’s an important difference between a madman and someone who’s not competent in the use of language.

how do you distinguish between these two madmen and more sensible cases of moral disagreement?

I don’t think there’s any difference, necessarily, between these cases in terms of how they are using moral language. The only difference consists in how many of our moral beliefs we share (or don’t share). The question is whether, when we faced with someone who asserts that it’s good for someone to suffer or morally irrelevant whether some other person is having valenced experience and that what matters is whether one is acting nobly, whether we should diagnose these people as misspeaking or evincing normal moral disagreement. Fwiw I think plenty of people from early childhood training to advanced philosophy use moral language in a way which is inconsistent with the analysis that “good”/“bad” centrally refer to valenced experience (in fact, I think the vast majority of people, outside of EAs and utilitarians, don’t use morality in this way).

In a world filled with people whose innate biases varied randomly, and who had arbitrary aversions, one could still meaningfully single out a subset of an individual's preferences which had a universalisable character -- i.e. those preferences which she would prefer everyone to hold. However, peoples' universalisable preferences would hold no special significance to others, and would function in conversation just as all other preferences do. In contrast, in our world, many of our universalisable preferences are shared and so it makes sense to remind others of them. The fact that these universalisable preferences are shared makes them "apt for dissaproval" across the whole community, and this is why we use moral language.

I actually agree that if no-one shared (and could not be persuaded to share) any moral values then the use of moral language could not function in quite the same way it does in practice and likely would not have arisen in the same way it does now, because a large part of the purpose of moral talk (co-ordinating action) would be vitiated. Still, I think that moral utterances (with their current meaning) would still make perfect sense linguistically, just as moral utterances made in cases of discourse between parties who fundamentally disagree (e.g. people who think we should do what God X says we should do vs people who think we should do what God Y says we should do) still make perfect sense.

Crucially, I don't think that, absent moral consensus, moral utterances would reduce to "function[ing] in conversation just as all other preferences do." Saying "I think it is morally required for you to give me $10" would still perform a different function than saying "I prefer that you give me $10", for the same reasons I outlined above. The moral statement is still communicating things other than just that I have an individual preference (e.g. that I'll disapprove of you for not doing so, endorse this disapproval, think that others should disapprove etc.). The fact that, in this hypothetical world, no-one shares any consensus about moral views nor could be persuaded to agree on any moral views, and that this would severely undermine the point of expressing moral views, doesn't imply that the meaning of moral terms depends on reference to the objects of concrete agreement. (Note that it wouldn't entirely undermine the point of expressing moral views either: it seems like there would still be some practical purpose to communicating that I disapprove and endorse this disapproval vs merely that I have a preference etc.)

I also agree that moral language is often used to persuade people who share some of our moral views or to persuade people to share our moral views, but I don't think this requires that the meaning of the moral terms depends on or involves consensus about the rightness or wrongness of specific moral things. For moral talk to be capable of serving this practical purpose we just need some degree of people being inclined to respond to the same kinds of things or to be persuaded to share the same attitudes. But this doesn’t require any particularly strong, near-universal consensus, or consensus on a particular single thing being morally good/bad. It also need not require that there are some specific things that people are inclined to agree on (it could, rather, be that people are inclined to defer to the moral views of authorities/their group, and this ensures some degree of consensus regardless). This seems compatible with very, very widespread disagreement in fact: it might be that people are disposed to think that some varying combinations of “fraternity, blood revenge, family pride, filial piety, gavelkind, primogeniture, friendship, patriotism, tribute, diplomacy, common ownership, honour, confession, turn taking, restitution, modesty, mercy, munificence, arbitration, mendicancy, and queuing” (list ripped off from Oliver Scott Curry) are good, and yet disagree with each other to a large extent about which of these are valuable and to what extent and how they should be applied in particular cases. Moral language could still serve a function as people use it simply to express which of these things they approve or disapprove of and expect others to likewise promote or punish, without there being general consensus about what things are wrong and without the meaning of moral terms definitionally being fixed with reference to people’s concrete (and contested and changing) moral views.

Comment by david_moss on My Meta-Ethics and Possible Implications for EA · 2020-08-07T10:38:52.441Z · score: 2 (1 votes) · EA · GW

JP:
>I believe that we have much greater overlap in our emotional reaction to experiencing certain events e.g. being hit, and we have much greater overlap in our emotional reaction to witnessing certain painful events e.g. seeing someone lose their child to an explosion.

I agree individuals tend to share an aversion to themselves being in pain. I don't think there's a particularly noteworthy consensus about it being bad for other people to be in pain or that it's good for other people to have more pleasure. People routinely seem to think that it's good for others to suffer and be indifferent about others experiencing more pleasure. People sometimes try to argue that people really only want people to suffer in order to reduce suffering, for example, but this doesn't strike me as particularly plausible or as how people characterise their own views when asked. So valenced experience doesn't strike me as having a particularly central place in ordinary moral psychology IMO. 

>I'm not clear on how it is distinct from desire and other preferences? If we did not have shared aversions to pain, and a shared aversion to seeing someone in pain, then moral language would no longer be distinguishable from talk of desire. I suspect you again disagree here, so perhaps you could clarify how, on your account, we learn to distinguish moral injunctions from personal preference based injunctions?

Sure, I just think that moral language differs from desire-talk in various ways unrelated to the specific objects under discussion, i.e. they express different attitudes and perform different functions. For example, saying "I desire that you give me $10" merely communicates that I would like you to give me $10; there's no implication that you would be apt for disapproval if you didn't. But if I say "It is morally right that you give me $10", this communicates that you would be wrong not to give me $10 and would be apt for disapproval if you did not. (I'm not committed to this particular analysis of the meaning of moral terms of course, this is just an example.) I think this applies even if we're referring to pleasure/pain. One can sensibly say "I like/don't like this pleasant/painful sensation" without thereby saying "It is morally right that you act to promote/alleviate my experience", or one could say "It is/is not morally right that you act to promote/alleviate my experience."

Comment by david_moss on My Meta-Ethics and Possible Implications for EA · 2020-08-07T10:38:31.231Z · score: 2 (1 votes) · EA · GW

>When it comes to mathematics, I found the arguments in Kripke's 'Wittgenstein on Rules and Private Language' quite convincing. I would love to see someone do an in depth translation applying everything Kripke says about arithmetic to total utilitarianism. I think this would be quite useful, and perhaps work well with my ideas here.

That makes sense. I personally think that "Kripkenstein's" views are quite different from Wittgenstein's own views on mathematics. 

It seems there's a bit of a disanalogy between the case of simple addition and the case of moral language. In the case of addition we observe widespread consensus (no-one feels any inclination to start using quus, for whatever reason). Conversely, it seems to me that moral discourse is characterised by widespread disagreement, i.e. we can sensibly disagree about whether it's right or wrong to torture, whether it's right or wrong for a wrongdoer to suffer, whether it's good to experience pleasure if it's unjustly earned, and so on. This suggests to me that moral terms aren't defined by reference to certain concrete things we agree are good.


>Yes, I agree that what I've been doing looks a lot like language policing, so let me clarify. Rather than claiming talk of population ethics etc. is invalid or incoherent, it would be more accurate to say I see it as apparently baseless and that I do not fully understand the connection with our other uses of moral language... insofar as they expect me to follow along with this extension (indeed insofar as they expect their conclusions about population ethics to have force for non-population-ethicists) they must explain how their extension of moral language follows from our shared ostensive basis for moral language and our shared inductive biases. My arguments have attempted to show that our shared ostensive basis for moral language does not straight-forwardly support talk of population ethics, because such talk does not share the same basis in negatively/positively valenced emotions.

OK so it sounds like the core issue here is the question of whether moral terms are defined at their core by reference to valenced emotions then, which I'll continue discussing in the other thread.

Comment by david_moss on vaidehi_agarwalla's Shortform · 2020-08-07T06:46:12.388Z · score: 6 (2 votes) · EA · GW

My sense is that the idea of sequential stages of moral development is exceedingly likely to be false, and that the most prominent theory of this kind, Kohlberg's, has been thoroughly debunked in the sense that there was never any good evidence for it (I find the social intuitionist model much more plausible), so I don't see much appeal in trying to understand cause selection in these terms.

That said, I'm sure there's a rough sense in which people tend to adopt less weird beliefs before they adopt more weird ones and I think that thinking about this in terms of more/less weird beliefs is likely more informative than thinking about this in terms of more/less distant areas in a "moral circle".

I don't think there's a clear, non-subjective sense in which causes are more or less weird, though. For example, there are many EAs who value the wellbeing of non-actual people in the distant future but not that of suffering wild animals, and vice versa, so which is weirder or more distant from the centre of this posited circle? I hear people assume conflicting answers to this question from time to time (people tend to assume their own area is less weird).

I would also agree that getting people to accept beliefs which are closer to what they currently believe can make them more positively inclined subsequently to adopt related beliefs which are further from their current views. It seems like there are a bunch of non-competing reasons why this could be the case though. For example:

  • Sometimes belief x1 itself gives a person epistemic reason to believe x2
  • Sometimes believing x1 increases your self-identity as a person who believes weird things, making you more likely to believe weird things
  • Sometimes believing x2 increases your affiliation with a group associated with x1 (e.g. EA), making you more likely to believe x3, which is also associated with that group

Notably none of these require that we assume anything about moral circles or general sequences of belief.

Comment by david_moss on My Meta-Ethics and Possible Implications for EA · 2020-08-06T11:37:40.611Z · score: 2 (1 votes) · EA · GW

>I privilege uses of moral language as applied to experiences and in particular pain/pleasure because these are the central cases over which there is agreement, and from which the other uses of moral language flow... I do agree that injunctions may perhaps be the first use we learn of 'bad', but the use of 'bad' as part of moral language necessarily connects with its use in referring to pain and pleasure, otherwise it would be indistinguishable from expressions of desire/threats on the part of the speaker.

OK, on a concrete level, I think we just clearly disagree about how central references to pleasure and pain are in moral language, or how necessary they are. I don't think they are particularly central, or even that there is much more consensus about the moral badness of pain/goodness of pleasure than about other issues (e.g. stealing others' property, lying, loyalty/betrayal). 

It also sounds like you think that for us to learn the meaning of moral language there needs to be broad consensus about the goodness/badness of specific things (e.g. pleasure/pain). I don't think this is so. Take the tastiness example: we don't need people to agree even slightly about whether chocolate/durian are tasty or yucky in order to learn the meanings of the terms. We can observe that when people say chocolate/durian is tasty they go "mmm", display characteristic facial expressions, eat more of it and seek to acquire more in the future, whereas when they say chocolate/durian is yucky they say "eugh", display other characteristic facial expressions, stop eating it and show no interest in acquiring more in the future. We don't need any agreement at all, as far as I can tell, about which specific things are tasty or yucky to learn the meaning of the terms. Likewise with moral language: I don't think we need widespread agreement about whether specific things are good/bad to learn that if someone says something is "bad" this means they don't want us to do it, they disapprove of it, we will be punished if we do it, and so on. Generally, I don't think there's much connection between the meaning of moral terms and specific things being good or bad: this is what I meant when I said "But I'm not sure why we should expect any substantive normative answers [i.e. specific things being good or bad on the first-order level] to be implied by the meaning of moral language". Nothing to do with a particular conception of "normativity".

Comment by david_moss on My Meta-Ethics and Possible Implications for EA · 2020-08-06T11:37:21.505Z · score: 2 (1 votes) · EA · GW

Thanks for your reply. I'm actually very sympathetic to Wittgenstein's account of language: before I decided to move to an area with higher potential impact, I had been accepted to study for a PhD on the implications of Wittgensteinian meta-philosophy for ethics. (I wouldn't use the term 'meta-philosophy' in this context, of course, since I was largely focused on the view expressed in PI 109 that "…we may not advance any kind of theory. There must not be anything hypothetical in our considerations. We must do away with all explanation, and description alone must take its place.")

All that said, it seems we disagree in quite a few places.

DM:

>It sounds like you see the genealogy of moral terms as involving a melange of all of these, which seems to leave the door quite open as to what moral terms actually mean.

JP:

>I disagree, there is no other actual meaning beyond the sequence of uses we learn for these words.

I don't think our use of language is limited to the kinds of cases through which we initially learn the use of particular terms. For example, we learn the use of numbers through exceptionally simple cases ("If I have one banana and then another banana, I have two bananas") and later get trained in things like multiplication, but we then clearly go on to use mathematical language in much more complex and creative ways, which include extending the language in radical ways. It would be a mistake to conclude that we can't do these things because they go beyond the uses we initially learn, and Wittgenstein doesn't say this in his later work on the philosophy of mathematics either. I agree it's a common Wittgensteinian move to say that our use of language breaks down when we extend it inappropriately past ordinary usage, but Wittgenstein's treatment of mathematics certainly does not tell mathematicians to stop doing the very complex mathematical speculation which is far removed from the ways in which we are initially trained in mathematics. Indeed, I think it's anti-Wittgensteinian to attempt to interfere with or police the way people ordinarily use language in this way. Of course, the Wittgensteinian can call into question certain ways of thinking (e.g. that our ordinary mathematical practice implies Platonism), but we need to do careful philosophical work to highlight potential problems with specific ways of thinking. FWIW, it seems to me that your conclusions stray into telling ordinary moral language users that they can't use moral language (or think about moral considerations) in ways that they otherwise do or would, though of course it would require more discussion of your precise position to determine this.

But that aside, it still seems to me to be the case that how we actually ordinarily use moral language is left quite open by your account of how we learn moral language, since you say it includes a mix of "reactions [which] include approval, preferences and beliefs." That seems compatible, to me, with us coming to use moral language in a wide variety of ways. Of course, you could argue for a more specific genealogy of how we come to use moral language, explaining why we come to only (or at least primarily) use it to convey certain specific attitudes of (dis)approval or preferences or beliefs about preferences.

It seems like your own account of how we learn moral language involves us extending its use too: we first learn that bad things are disapproved of (e.g. our parents disapprove of us burning ourselves in fires), then we "extend our use of moral language beyond the[se] simple cases" to introduce preferences and, at some point, beliefs. So if you allow that much, it doesn't seem clear why we should think that our uses of moral language are still properly limited to the kinds of uses which are (ex hypothesi) part of our initial training. It seems quite conceivable to me that we initially learn moral language in something like the way you describe, but then collectively move on to almost any number of more complex uses, such as considering what we would collectively endorse in such-and-such scenarios. And once we go that far (which I think we should, in order to adequately account for how people actually use moral language), I don't think we're in a position to rule out as impossible baroque speculations about population ethics etc.

Comment by david_moss on My Meta-Ethics and Possible Implications for EA · 2020-08-05T19:12:10.116Z · score: 2 (1 votes) · EA · GW

>The learned meaning of moral language refers to our recollection/reaction to experiences. These reactions include approval, preferences and beliefs... Preferences enter the picture when we try to extend our use of moral language beyond the simple cases learned as a child. When we try to compare two things that are apparently both bad we might arrive at a preference for one over the other, and in that case the preference precedes the statement of approval/disapproval.

Thanks for the reply. I guess I'm still confused about what specific attitudes you see as involved in moral judgments, whether approval, preferences, beliefs or some more complex combination of these etc. It sounds like you see the genealogy of moral terms as involving a melange of all of these, which seems to leave the door quite open as to what moral terms actually mean.

It does sound, though, from your reply, that you think moral language exclusively concerns experiences (and our evaluations of experiences). If so, that doesn't seem right to me. For one, it seems that the vast majority of people (outside of welfarist EA circles) don't exclusively or even primarily make moral judgements or utterances which are about the goodness or badness of experiences (even indirectly). It also doesn't seem to me like the kind of simple moral utterances which ex hypothesi train people in the use of moral language at an early age primarily concern experiences and their badness (or preferences for that matter). It seems equally if not more plausible to speculate that such utterances typically involve injunctions (with the threat of punishment and so on).

>Thanks for bringing up the X,Y,Z point; I initially had some discussion of this point, but I wasn't happy with my exposition, so I removed it. Let me try again: In cases when there are multiple moral actors and patients there are two sets of considerations. First, the inside view, how would you react as X and Y. Second, the outside view, how would you react as person W who observes X and Y. It seems to me that we learn moral language as a fuzzy mixture of these two with the first usually being primary.

Thanks for addressing this. This still isn't quite clear to me, i.e. what exactly is meant by 'how would you react as person W who observes X and Y'? What conditions on W observing X and Y are required? For example, does it refer only to how I would react if I were directly observing an act of torture in the room, or does it permit broader 'observations', e.g. observing that there is such-and-such level of inequality in the distribution of income in a society? The more restrictive definitions don't seem adequate to me to capture how we actually use moral language, but the more permissive ones, which are more adequate, don't seem to suffice to rule out me making judgements about the repugnant conclusion and so on.

>Much as with population ethics, I suspect this endeavor should be seen as... beyond the boundary of where our use of language remains well-defined.

I agree that answers to population ethics aren't directly entailed by the definition of moral terms. But I'm not sure why we should expect any substantive normative answers to be implied by the meaning of moral language. Moral terms might mean "I endorse x", but any number of different considerations (including population ethics, facts about neurobiology) might be relevant to whether I endorse x (especially so if you allow that I might have all kinds of meta-reactions about whether my reactions are based on appropriate considerations etc.).

Comment by david_moss on Where the QALY's at in political science? · 2020-08-05T10:07:52.363Z · score: 5 (4 votes) · EA · GW

Effective Thesis has some suggested topics within political science.

Comment by david_moss on Replaceability Concerns and Possible Responses · 2020-08-04T16:17:44.703Z · score: 13 (4 votes) · EA · GW

>It is somewhat surprising the EA job market is so competitive. The community is not terribly large. Here is an estimate...This suggests to me a very large fraction of highly engaged EAs are interested in direct work.

We have data from our careers post which addresses this. 688 respondents (36.6% of those answering that question) indicated that they wanted to pursue a career in an EA non-profit. That said, this was a multi-select question, so people could select this alongside other options. In addition, 353 people reported having applied to an EA org for a job. 207 people indicated they currently work at an EA org, which, if we speculatively take that as a rough proxy for the number of current positions, suggests a large mismatch between people seeking positions and total positions. 
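
As a rough, purely illustrative back-of-the-envelope calculation using only the figures quoted above, and treating current EA org employees as a proxy for the number of available positions (which is clearly a simplification):

```python
# Figures from the careers-post data quoted above.
interested = 688   # included "work at an EA non-profit" in their career plans
applied = 353      # reported having applied to an EA org for a job
employed = 207     # reported currently working at an EA org (rough proxy for positions)

print(f"Applicants per current position: {applied / employed:.1f}")         # ~1.7
print(f"Interested respondents per position: {interested / employed:.1f}")  # ~3.3
```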

Of those who included EA org work within their career paths and were not already employed in an EA org, 29% identified as "highly engaged" (defined with examples such as having worked in an EA org or leading a local group). A further 32% identified with the next highest level of engagement, which includes things like "attending an EA Global conference, applying for career coaching, or organizing an EA meetup." Those who reported applying for an EA org job were yet more highly engaged: 37.5% "highly engaged" and 36.4% at the next highest level of engagement.

Comment by david_moss on My Meta-Ethics and Possible Implications for EA · 2020-08-04T10:37:28.339Z · score: 3 (2 votes) · EA · GW

Thanks for the post.  

I found myself having some difficulty understanding the core of your position. Specifically, I'm not sure whether you're claiming that the meaning of moral language has to do with how we would react (what we would approve/disapprove of) in certain scenarios, or whether you are claiming that moral language is specifically about experiences and our reactions if we were to experience certain things, or, even more specifically, about what we would prefer or believe if we were to experience certain things.

Note that there are lots of variations within the above categories, of course. For example, if morality is about what we would believe if we lived the relevant experiences, it's not clear to me whether this means what I would believe about whether X should torture Y, if I were Y being tortured, if I were X torturing Y, or if I were Z who had experienced both and then combined that with my own moral dispositions etc.

Either way, I'm not sure that the inclusion of meta-reactions and the call to universality (which I agree are necessary to make this form of expressivism plausible) permit the conclusions you draw.

For example you write: "it seems that personal experience with animals (and their suffering) becomes paramount overriding evidence from neuron counts, self-awareness experiments and the like." But if you allow that I can be concerned with whether my own reactions are consistent, impartial and proportionate to others' bad experiences, then it seems like I can be concerned with whether helping chickens or helping salmon causes there to be fewer bad experiences, or with whether specific animals are having negative experiences at all. And if so, it seems like I should be concerned about what the evidence from neuron counts, self-awareness experiments etc. would tell us about the extent to which these creatures are suffering. Moral claims being about what my reactions would be in such-and-such circumstances doesn't give me reason to privilege my actual reactions upon personal experiences (in current circumstances). Doing so seems to imply that when I'm thinking about whether, say, swatting a fly is wrong, I should simply ask myself what my reactions would be if I swatted a fly; but that doesn't seem plausible as an account of how we actually think morally, where what I'm actually concerned about (inter alia) is whether the fly would be harmed if I swatted it.

Comment by david_moss on 3 suggestions about jargon in EA · 2020-07-07T09:50:07.003Z · score: 4 (2 votes) · EA · GW

Academia, especially in the social sciences and humanities, also strikes me as being extremely pro-concealment (either actively or, more commonly, passively, by believing we should not gather the information in the first place) on topics which academics actually view as objectionable for explicitly altruistic reasons.

Comment by david_moss on Resources to learn how to do research · 2020-07-04T11:05:49.272Z · score: 13 (6 votes) · EA · GW

If you are interested in EA research/an EA research job, I would recommend just reading EA research on this forum and on the websites of EA research organisations. Much of this research doesn't involve any research method beyond general desk/secondary research, i.e. reading relevant literature and synthesising it.

In the cases where you see that EA research relies on some specific technical methodology, such as stats, cost-effectiveness modelling, surveys etc., I would just recommend googling the specific method and finding resources that way. In general, I think there are too many different methods and approaches even within these categories for it to be very helpful to link to a general introduction to stats (although here's one, for example), since, depending on what you want to do, a lot of it won't be relevant.

Comment by david_moss on EA Survey 2019 Series: How many people are there in the EA community? · 2020-07-04T09:34:48.356Z · score: 8 (3 votes) · EA · GW

I think "been influenced by EA to do EA-like things" covers a very wide array of people.

In the most expansive sense, this seems like it would include people who read a website associated with EA (this could be Giving What We Can, GiveWell, The Life You Can Save or ACE or others...), decide "These sound like good charities", and donate to them. I think people in this category may or may not have heard of EA (all of these mention effective altruism somewhere on the website) and they may even have read some specific formulation that expresses EA ideas (e.g. "We should donate to the most effective charity") and decided to donate to these specific charities as a result. But they may not really know or understand what EA means (lots of people would platitudinously endorse 'donating to the best charities') or endorse it, let alone identify with or be involved with EA in any other way.

I agree that there are many, many more people who are in this category. As we note in footnote 7, there are literally millions of people who've read the GiveWell website alone, many of whom (at least 24,000) will have been moved to donate. Donating to a charity influenced by EA principles was the most commonly reported activity in the EA survey by a long way, with >80% of respondents reporting having done so, and >60% even among the second lowest level of engagement.

I think we agree that, while getting people to donate to effective charities is important (perhaps even more impactful than getting people to 'engage with the effective altruism community' in a lot of cases), these people don't count as part of the EA community in the sense discussed here. But I think they also wouldn't count as part of the "wider network of people interested in effective altruism" that David Nash refers to (i.e. because many of them aren't interested in effective altruism).

I think a good practical test would be: if you went to some of these people who were moved to donate to a GiveWell/ACE etc. charity and said "Have you heard that many adherents of effective altruism believe that we should x?", and their response is some variation on "What's that?" or "Why should I care?", then they're not part of the community or network of people interested in EA. I think this is a practically relevant grouping because it tells you who could 'be influenced by EA to do EA things', where "influenced by EA" refers to EA reasoning and arguments and "EA things" refers to EA things in general, as opposed to people who might be persuaded by an EA website to do some specific thing which EAs currently endorse but who would not consider anything else or consider maximising effectiveness more generally.

Comment by david_moss on EA Survey 2019 Series: How many people are there in the EA community? · 2020-07-02T08:08:59.730Z · score: 5 (3 votes) · EA · GW

Thanks for the reply!

>So then it is a question of whether action or identification is more important-I would favor action.

This is the kind of question I had in mind when I said: "Of course, being part of the “EA community” in this sense is not a criterion for being effective or acting in an EA manner- for example, one could donate to effective charity, without being involved in the EA community at all..."

It seems fairly uncontroversial to me that someone who does a highly impactful, morally motivated thing, but hasn't even heard of the EA community, doesn't count as part of the EA community (in the sense discussed here).

I think this holds true even if an activity represents the highest standard that all EAs should aspire to. The fact that something is the highest standard that EAs should aspire to doesn't mean that many people might not undertake the activity for reasons unrelated to EA, and I think those people would fall outside the "EA community" in the relevant sense, even if they are doing more than many EAs.

Comment by david_moss on EA Survey 2019 Series: How many people are there in the EA community? · 2020-07-01T19:42:44.842Z · score: 2 (1 votes) · EA · GW

I agree this would both not be very inspiring and risk sounding elitist. I don't have any novel ideas, I would probably just say something vague about wanting to spread the ideas carefully and ensure they aren't lost or distorted in the mass media and try to redirect the topic.

Comment by david_moss on EA Survey 2019 Series: How many people are there in the EA community? · 2020-07-01T19:39:55.639Z · score: 3 (2 votes) · EA · GW

We'll be addressing this indirectly in the next couple of posts as it happens.

Comment by david_moss on Dignity as alternative EA priority - request for feedback · 2020-06-26T17:45:21.317Z · score: 20 (9 votes) · EA · GW

I'm not entirely clear whether you are applying the INT (neglectedness, solvability and scale) framework to dignity as a fundamental value or to dignity-promotion as a cause area for EA (according to EA values, however we determine them).

The INT framework is usually applied as a heuristic for broad cause area selection, and I don't think it works well as a heuristic for determining fundamental values. Things which are valuable are fundamentally valuable even if they are not neglected, and estimating their importance/scale seems to depend crucially on whether and how far they are fundamentally valuable, even if they affect lots of people. Maybe it would be helpful to think more about which potential values are neglected or likely to be more or less tractable to satisfy, in order to determine whether we should dedicate more resources to trying to satisfy them, but I don't think just quickly running through the INT heuristic will be that informative.[^1]

If it's applied to the idea of dignity-promotion as a cause area (according to EA values), then it seems like we should judge it based on all our values (which for many EAs will be largely determined by how well it promotes welfare, with small amounts of weight given to other values, such as dignity itself). It's not so clear that dignity-promotion performs well in those terms.

[^1]: For example, I think that many minority/peripheral values that we could think up would be highly neglected, affect a lot of people, and be tractable, but this doesn't tell us much about their moral importance.

Comment by david_moss on Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering? · 2020-06-22T18:38:30.708Z · score: 5 (3 votes) · EA · GW

My intuition is that suffering is bad, but sometimes (all things considered) I prefer to suffer in a particular instance (e.g. in service of some other value). In such cases it would be better for my welfare if I did not suffer, but I still prefer to.

I also think that in cases where one voluntarily suffers, then this can reduce the suffering involved. Relatedly, I also imagine that voluntarily experienced pain may lead to less suffering than coerced pain.

It also seems to me that there are cases where we directly want to experience a suffering-involving experience (e.g. watching tragedies and wanting to experience the feeling of tragedy). I think in many of these cases the experience is sad, but also involves (subtle) pleasures, and what we want to experience is this combined set of emotions. In some such cases I'm sure people would prefer to experience the distinctive melancholy-pleasure emotion without the suffering valence if they could (but cannot imagine, let alone actually achieve, this), and in other cases people would not wish to detach the suffering from the emotion set (because they have preferences to have fitting responses to tragedy and so forth). I'm sure there are a whole bunch of other factors which explain people's propensity to voluntarily watch tragedies though, e.g. affective forecasting misfires, instrumental goals like signalling, and feelings of compulsion (tragedies tend to be very salient and so adaptive to pay attention to, even if they entail suffering).

Comment by david_moss on How much do Europeans care about fish welfare? (An analysis of relevant surveys) · 2020-06-22T16:03:21.885Z · score: 15 (8 votes) · EA · GW

It seems like it would be valuable for advocates to better understand what level of support is necessary to undergird changes (whether through legislative efforts or through corporate campaigns/consumer pressure). Much progress seems to have been made on chickens, as you note, with only ~77% of people believing their welfare should at least "probably" be better protected. But it seems like we don't know what level of support is required, or even really how such support causally influences progress. The influence of such support seems like it may well be mediated by (decision-makers') perceptions of support, which are probably much vaguer.

Comment by david_moss on EA Forum feature suggestion thread · 2020-06-20T08:36:35.209Z · score: 29 (14 votes) · EA · GW

It would very dramatically improve my experience of the Forum if there were the option to hide posts. This would mean that the first page of the Forum would always be posts that were relevant to me. As it stands, whenever I visit the Forum most of the posts which I can see are not relevant to me (perhaps because I've already read them and don't want to read them again or check in on the ongoing discussion), whereas posts which are relevant to me and which I would want to visit again are invisible if they are more than a few days old.

Comment by david_moss on EA Survey 2019 Series: Community Information · 2020-06-13T10:58:59.510Z · score: 4 (2 votes) · EA · GW

Thanks for your comment Max!

>it's unclear to me if respondents interpreted "EA job" as (a) "job at an EA organization" or (b) "high-impact job according to EA principles"

I agree. This was one of the externally requested questions I mentioned at the top of the post, which I included verbatim, so I don't know which meaning was intended. The precise wordings were "Too hard to get an EA job" and "Not enough job opportunities that seemed like a good fit for me", which I agree could be interpreted more narrowly or more broadly.

To perhaps gain a little insight, we can cross-reference this with our data on respondents' career plans. Among those who included 'Work at an EA non-profit' in their plans (note that this was a multi-select question), 35.7% said that "Too hard to get an EA job" was a barrier to being more involved in EA. Conversely, among those who did not include working for an EA non-profit in their career plan, 20.1% selected this as a barrier. This is a significant difference (p<0.001), but notably it means that many participants who selected this as a barrier were not aiming to work at an EA org specifically. To put it another way, 49.3% of those who selected this as a barrier did not say they planned to work in an EA non-profit, whereas 50.7% did plan to work in an EA org (but note that many of these also included other routes, like academia, in their career plans, so it's not clear that it being too hard to work in an EA org specifically was what they viewed as a barrier). Of course, it's also possible that some of these respondents did not include EA org work in their career plans precisely because they viewed it as too hard to get a job at an EA org.
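
For what it's worth, here is a minimal sketch of one way such a difference in proportions could be tested. The group sizes below are hypothetical placeholders, chosen only so that the proportions roughly match those reported; they are not the survey's actual counts.

```python
# Illustrative only: a chi-squared test on a 2x2 table of
# (selected the barrier vs did not) by (EA org in career plan vs not).
from scipy.stats import chi2_contingency

n_plan, n_no_plan = 700, 1200                 # hypothetical group sizes
barrier_plan = round(0.357 * n_plan)          # ~35.7% selected the barrier
barrier_no_plan = round(0.201 * n_no_plan)    # ~20.1% selected the barrier

table = [
    [barrier_plan, n_plan - barrier_plan],
    [barrier_no_plan, n_no_plan - barrier_no_plan],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")
```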

Comment by david_moss on Some thoughts on deference and inside-view models · 2020-06-03T10:59:51.642Z · score: 8 (5 votes) · EA · GW

>The common attitude was something like "we're utilitarians, and we want to do as much good as we can. EA has some interesting people and interesting ideas in it. However, it's not clear who we can trust; there's lots of fiery debate about cause prioritization, and we just don't at all know whether we should donate to AMF or the Humane League or MIRI. There are EA orgs like CEA, 80K, MIRI, GiveWell, but it's not clear which of those people we should trust, given that the things they say don't always make sense to us, and they have different enough bottom line beliefs that some of them must be wrong." It's much rarer nowadays for me to hear people have an attitude where they're wholeheartedly excited about utilitarianism but openly skeptical to the EA "establishment".

I actually agree that there seems to have been some shift roughly along these lines.

My view is roughly that EAs were just as disposed to be deferential then as they are now (if there had been a clear EA consensus then, most of these EAs would have deferred to it, as they do now), but that, "because the 'official EA consensus' (i.e. longtermism) is more readily apparent" now, people's disposition to defer is more apparent.

So I would agree that some EAs were actually more directly engaged in thinking about fundamental EA prioritisation because they did not see an EA position that they could defer to at all. But other EAs, I think, were deferring to those they perceived as EA experts back then, just as they are now; it's just that they were deferring to different EA experts than other EAs. For example, I think in earlier years many EAs thought that Giving What We Can (previously an exclusively poverty org, of course) and GiveWell were the EA experts, and meanwhile there were some 'crazy' people (MIRI and LessWrongers) who were outside the EA mainstream. I imagine this perspective was more common outside the Bay Area.

>I feel like there are many fewer EA forum posts and facebook posts where people argue back and forth about whether to donate to AMF or more speculative things than there used to be.

Agreed, but I can't remember the last time I saw someone try to argue that you should donate to AMF rather than longtermism. I've seen more posts/comments/discussions along the lines of 'Are you aware of any EA arguments against longtermism?' Clearly there are still lots of EAs who donate to AMF and support near-termism (cause prioritisation, donation data), but I think they are mostly keeping quiet. Whenever I do see near-termism come up, people don't seem afraid to communicate that they think that it is obviously indefensible, or that they think even a third-rate longtermist intervention is probably incomparably better than AMF because at least it's longtermist.

Comment by david_moss on What are some good charities to donate to regarding systemic racial injustice? · 2020-06-02T08:48:58.373Z · score: 9 (8 votes) · EA · GW

I didn't downvote it, but some commenters might have done because an almost identical question was asked a few days ago.

Comment by david_moss on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-28T18:16:23.171Z · score: 6 (3 votes) · EA · GW

I just added a mention of this to the bullet point about these open comments.

Comment by david_moss on Some thoughts on deference and inside-view models · 2020-05-28T10:38:56.198Z · score: 21 (12 votes) · EA · GW

>Most of us had a default attitude of skepticism and uncertainty towards what EA orgs thought about things. When I talk to EA student group members now, I don't think I get the sense that people are as skeptical or independent-thinking.

I've heard this impression from several people, but it's unclear to me whether EAs have become more deferential, although it is my impression that many EAs are currently highly deferential. It seems quite plausible to me that it is merely more apparent that EAs are highly deferential right now, because the 'official EA consensus' (i.e. longtermism) is more readily apparent. I think this largely explains the dynamics highlighted in this post and in the comments. (Another possibility is simply that newer EAs are more likely to defer than veteran EAs and, as EA is still growing rapidly, we constantly get higher percentages of non-veteran EAs, who are more likely to defer. I actually think the real picture is a bit more complicated than this, partly because I think moderately engaged and invested EAs are more likely to defer than the newest EAs, but we don't need to get into that here.)

My impression is that EA culture and other features of the EA community implicitly encourage deference very heavily (despite the fact that many senior EAs would, in the abstract, like more independent thinking from EAs). In terms of social approval and respect, as well as access to EA resources (like jobs or grants), deference to expert EA opinion (both in the sense of sharing the same views and in the sense of directly showing that you defer to senior EA experts) seems pretty essential.

>I have the sense that people would now view it as bad behavior to tell people that you think they're making a terrible choice to donate to AMF

Relatedly, my purely anecdotal impression is basically the opposite here. As EA has professionalised, I think there are more explicit norms about "niceness", but I think it's never been clearer or more acceptable to communicate, implicitly or explicitly, that you think that people who support AMF (or other near-termist causes) probably just 'don't get' longtermism and aren't worth engaging with.

Comment by david_moss on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-27T18:36:59.190Z · score: 6 (3 votes) · EA · GW

Thanks Jon.

I agree Peter Singer is definitely still one of the most important factors, as our data shows (and as we highlighted last year). He's just not included in the bullet point in the summary you point to because that only refers to the fixed categories in the 'where did you first hear about EA?' question.

In 2018 I wrote "Peter Singer is sufficiently influential that he should probably be his own category", but although I think he deserves to be his own category in some sense, it wouldn't actually make sense to have a dedicated Peter Singer category alongside the others. Peter Singer usually coincides with other categories, i.e. people have read one of his books, or seen one of his TED Talks, or heard about him through some other Book/Article or Blog, or through their Education, a podcast, or The Life You Can Save (org) etc., so if we split Peter Singer out into his own dedicated category we'd have to have a lot of categories like 'Book (except Peter Singer)' (and potentially the same for any other individuals who might be significant), which would be a bit clumsy and would definitely lead to confusion. It seems neater to just have the fixed categories we have, have people write in the specifics in the open comment section, and, in general, not have any named individuals as fixed categories.

The other general issue to note is that we can't compare the %s of responses to the fixed categories to the %s for the open comment mentions. People are almost certainly less likely to write in something as a factor in the open comment than they would be to select it were it offered as a fixed choice, but on the other hand, things can appear in the open comments across multiple categories, so there's really no way to compare numbers fairly. That said, we can certainly say that since he's mentioned >200 times, the lower bound on the number of people who first heard of EA from Peter Singer is very high.

Comment by david_moss on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-23T09:31:48.250Z · score: 2 (1 votes) · EA · GW

Thanks. That makes sense. I try not to change the historic categories too much though, since it messes up comparisons across years.

Comment by david_moss on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-22T21:30:02.842Z · score: 9 (5 votes) · EA · GW

I think it's fair to say (as I did) that LessWrong is often thought of as "primarily" online and, given that, I think it's understandable to find it surprising that meetups are the second most commonly mentioned way people hear about EA within the LessWrong category (I would expect more comments mentioning SlateStarCodex and other rationalist blogs, for example). I didn't say that it was "surprising that people mention LessWrong meetups" tout court. I would expect many people, even among those who are familiar with LessWrong meetups, to be surprised at how often they were mentioned, though I could be mistaken about that.

(That said, a banal explanation might be that those who heard about EA just straightforwardly through the LessWrong forum, without any further detail, were less likely to write anything codable in the open comment box, compared to those who were specifically influenced by an event or HPMOR.)

Comment by david_moss on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-22T18:36:42.518Z · score: 23 (7 votes) · EA · GW

Thanks Jonas!

You can see the total EAs (estimated from year first heard) and the annual growth rate here:

As you suggest, this will likely over-estimate growth due to greater numbers of EAs from earlier cohorts having dropped out.
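
To illustrate the mechanics of that kind of estimate, the sketch below computes cumulative totals and year-on-year growth from cohort sizes. The cohort counts are entirely made up for illustration (they are not the survey's figures), and the attrition caveat above applies to any estimate produced this way.

```python
# Hypothetical cohort sizes by "year first heard of EA" (illustrative only).
# Earlier cohorts have had more time to drop out, so growth rates computed
# this way will tend to be over-estimates, as noted above.
cohort_counts = {2015: 300, 2016: 380, 2017: 450, 2018: 520, 2019: 560}

running_total = 0
previous_total = None
for year in sorted(cohort_counts):
    running_total += cohort_counts[year]
    if previous_total is not None:
        growth = (running_total - previous_total) / previous_total
        print(f"{year}: cumulative total {running_total}, annual growth {growth:.0%}")
    else:
        print(f"{year}: cumulative total {running_total}")
    previous_total = running_total
```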

Comment by david_moss on Applying speciesism to wild-animal suffering · 2020-05-18T12:45:40.501Z · score: 3 (2 votes) · EA · GW

I occasionally see people make this kind of argument in the case of children, based on similar arguments for autonomy (see youth rights), though I agree that more people seem to find the argument that we should intervene convincing in the case of young children (that said, from the perspective of the activist who holds this view, this just seems like inappropriate discrimination).