Comment by david_moss on Age-Weighted Voting · 2019-07-15T16:21:19.587Z · score: 9 (6 votes) · EA · GW

There is already a proposal to use sortition to form a third legislative house of citizens, who would be responsible for deliberating on whether legislation would harm future generations: Rupert Read's 'Guardians of the Future' (2012).

This seems more promising than re-weighting the votes of certain groups whose self-interest is presumed to lie more in the future, given that: i) voters tend not to vote much on the basis of self-interest; ii) to the extent that slightly younger generations have a greater interest in the future, it is only in the very near future, which seems roughly equally compatible with disastrous policies; and iii) we have little, if any, reason to suppose that younger generations are epistemically capable of judging what policy would best serve their self-interest >50 years out.

The deliberative council idea has advantages over vote reweighting on all three counts: i) the citizens would be tasked explicitly with judging whether policies would aid or harm the future, rather than voting in whatever way in the hope that their vote proxies future interests; ii) they would be tasked with considering the long-run future, not just their self-interest (which extends maybe 50 years into the future, but which, due to time preference, might on average be a lot shorter); iii) such a deliberative council would have ample time and access to expertise (deliberative fora tend to give participants access to a variety of experts to help inform their deliberations) and would be encouraged, explicitly and implicitly (e.g. by the setup), to deliberate about what would best serve future interests. These kinds of setups have been widely used, and participants seem to deliberate pretty well and reach relatively informed judgments (at least compared to the typical voter): see some case studies.

That said, I think there may still be grounds to reject even this proposal, primarily that one may still be concerned about (iii), the epistemic question, even in these comparatively ideal circumstances.

Comment by david_moss on GiveWell's Top Charities Are Increasingly Hard to Beat · 2019-07-14T19:21:11.808Z · score: 5 (4 votes) · EA · GW

I'm not sure this is well-described as a "criticism of GiveWell's methodology for estimating the effectiveness of their recommended charities." The problem seems to apply to cost-effectiveness estimates more broadly, and the author explicitly says "Due to my familiarity with GiveWell, I mention it in a lot of examples. I don’t think the issues I raise in this post should be more concerning for GiveWell than other organizations associated with the EA movement". As such, I don't think these criticisms would make GiveWell's recommendations look more 'beatable.' Indeed, one might even think that it's partly because of considerations like those cited in the article you link that GiveWell's top charities remain hard to beat, while other areas which prima facie seemed extremely promising have turned out to be not so promising.

Comment by david_moss on How Europe might matter for AI governance · 2019-07-13T16:51:37.379Z · score: 1 (1 votes) · EA · GW

(I'd post the graphs here, but I don't think images can be inserted into comments.)

I think they can (or, at least, it used to be possible to do so). I've done so here and here for example.

Comment by david_moss on Please May I Have Reading Suggestions on Consistency in Ethical Frameworks · 2019-07-08T19:34:09.574Z · score: 9 (5 votes) · EA · GW

I hypothesise that internal consistency and agreement with our deepest moral intuitions are the two most important features of any ethical system. I'd like to hear suggestions of any other necessary and sufficient characteristics of a good ethical system. Does anyone have suggestions of books to read or thoughts to consider?

This is a pretty standard view in philosophical ethics: reflective equilibrium.

For a somewhat opposed approach, you might examine moral particularism (as opposed to moral generalism), which roughly holds that we should make moral judgments about particular cases without (necessarily) applying moral principles. So, while the particularist might care about coherence in some sense (when responding to moral reasons), they needn't be concerned with ensuring coherence between moral principles, or between principles and our judgments about cases. You might separately wonder how much weight should be given to our judgements about particular cases vs our judgements about general principles, on a spectrum from hyper-particularism to hyper-methodism.

In terms of other characteristics of a good ethical system, I think it's worth considering that coherence doesn't necessarily get you very far. It seems possible, in principle, to have coherent views which are very bad (of course, this is controversial, and may depend in part on empirical facts about human moral psychology, alongside conceptual truths about morality). One might think that one needs an appropriate correspondence between one's (initial) moral views and the moral facts. Separately, one might think that it is more important to cultivate appropriate kinds of moral dispositions than to have coherent views.

Related to the last point, there is a long tradition of viewing ethical theorising (and in particular attempts to reason about morality) sceptically, especially associated with Nietzsche, according to which moral reasoning is more often rationalisation of more dubious impulses. In that case, again, one might be less concerned with trying to make one's moral views coherent and more with applying some other kind of procedure (e.g. a Critical or Negative one).

I am looking forward to trying to understand other people's ethical systems. Why do people make the decisions they do? What makes people change their minds? What allows people to ignore conflicting claims in their own beliefs?

There is a lot of empirical moral psychology on these questions. I'm not sure specifically what you're interested in, otherwise I would be able to make more specific suggestions.

Are there ways people perceive EA which are skin deep but nonetheless turn people off. eg a friend said "I don't think earning to give is good advice" even though most EAs today agree with this.

I think more applied messaging work on the reception of EA, and on receptivity to different messages, would also be valuable to explore this; it would likely help reduce the risks EAs run when engaging in outreach or conducting activities which are going to be perceived a certain way by the world.

Comment by david_moss on What new EA project or org would you like to see created in the next 3 years? · 2019-06-19T00:31:40.032Z · score: 4 (3 votes) · EA · GW

I didn't see the comment previously (until just now, when I saw these comments asking why it hadn't been upvoted). I would speculate that one reason might be that the term 'Hackathon' is often associated just with programming or similar activities (which would be much less interesting than a general EA collective problem-solving event), and so people might have skimmed over it without reading the details.

Comment by david_moss on Might the EA community be undervaluing "meta-research on how to make progress on causes?" · 2019-06-16T23:33:50.931Z · score: 9 (7 votes) · EA · GW

It seems like much of what the Global Priorities Institute proposes in their research agenda falls into this category, under the general label of "cause prioritization" rather than "meta-research on how to make progress on causes". This area may still be neglected in absolute terms, but it seems like one of the areas most esteemed and valued within the EA community. Personally, however, I would like to see more meta-research of the kind you describe focused on questions like whether the Importance, Tractability, Neglectedness framework works well as a heuristic for selecting causes where activity is cost-effective, and more generally on the psychological influences (e.g. cognitive biases and framing effects) on the cause prioritisation judgements that EAs and others make in practice. But that seems more a question of what meta-research would be valuable rather than whether more of it should take place.

Comment by david_moss on EA Mental Health Survey: Results and Analysis. · 2019-06-13T17:11:12.700Z · score: 10 (6 votes) · EA · GW

It seems like it might be worthwhile to compare these results to data from the Slate Star Codex survey (n=8171), which had a lot of data about mental health, as well as the 2016 LessWrong Survey (n=3083) which also had questions about this.

The reason I think this would be important is that you note that reported rates of mental illness are substantially higher in your survey of EAs than in the general population, but we would also expect (and, as it happens, the surveys confirm) much higher rates of reported formally diagnosed or self-diagnosed mental illness in those other survey samples as well. Notably, EAs, LessWrongers and SSC readers differ from the general population in a host of potentially important ways (for example, many more are elite university students or graduates, a population where there has been much concern about higher reported rates of mental illness). Given that, we might think that this is a problem that is not so distinctive of the effective altruism movement specifically.

One concern people might have about this approach is possible overlap between the survey samples and population, but at least the LessWrong and EA Surveys asked about membership of each community (and the EA Survey, at least, found that only around 20% of respondents were LWers), so one could examine the effect of this empirically. I would guess that it is not the case that, for example, higher rates of reported mental illness among LessWrongers are largely driven by EAs in the LessWrong sample.
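(If one wanted to formalise this kind of cross-sample comparison, a simple two-proportion test would do. Below is a minimal sketch in Python; all the counts are made-up placeholders, not actual figures from these surveys.)

```python
# Sketch of the comparison suggested above: do two survey samples report
# (self-)diagnosed mental illness at different rates? All counts below are
# hypothetical placeholders, not real figures from the EA/SSC/LW surveys.
from statsmodels.stats.proportion import proportions_ztest

reporting = [400, 2500]   # hypothetical numbers reporting a diagnosis
sample_n = [1500, 8171]   # hypothetical EA Survey n; SSC survey n=8171
z, p = proportions_ztest(reporting, sample_n)
print(f"z = {z:.2f}, p = {p:.3g}")
```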

Comment by david_moss on EA Survey 2018 Series: How Long Do EAs Stay in EA? · 2019-05-31T23:31:03.311Z · score: 6 (4 votes) · EA · GW

32.4% of the 2018 EA Survey respondents had taken the pledge (see our post). As of 2018 it looks like GWWC had just over 3,000 members, which suggests we captured around 25-30% of the total membership (presumably a subset that is, on average, more engaged in EA). My impression is that many GWWC members were not particularly engaged with (and perhaps did not even identify with) effective altruism at all, so it's no surprise that the total number of Pledge takers within our sample of EAs is smaller than the total population of Pledge takers.

I'm not sure what the implication of suggesting that these are different populations is, though? My observation was that the possibility that people simply "stop taking the EA census" doesn't seem to serve so well as an explanation of the dropoff that GWWC observe. Of course, it's possible that people are dropping out of the GWWC Pledge (or at least out of contact with GWWC's checking in on their Pledge) for reasons unrelated to people disappearing from the EA Survey, though it seems likely that there is some overlap given the relationship between GWWC and EA. But it remains the case that people simply ceasing to complete the EA Survey can't explain away GWWC's similar rate of dropoff, and so it remains a possible concern.

Comment by david_moss on EA Survey 2018 Series: How Long Do EAs Stay in EA? · 2019-05-31T20:54:53.595Z · score: 5 (3 votes) · EA · GW

That doesn't seem to apply so well as an explanation of the fairly similar dropoff that GWWC found. It seems less likely that people would find it too onerous to answer GWWC's (if I recall, fairly simple) request for confirmation that they are still meeting a formal pledge they took.

Comment by david_moss on EA Survey 2018 Series: Cause Selections · 2019-05-23T20:45:09.379Z · score: 1 (1 votes) · EA · GW

Thanks for your stimulating questions and comments Ishaan.

one might conclude that "climate change" and "global poverty" are more "mainstream" priorities, where "mainstream" is defined as the popular opinion of the populations from which EA is drawing. Would this be a valid conclusion?

This seems a pretty uncontroversial conclusion relative to many of the cause areas we asked about (e.g. AI, Cause Prioritization, Biosecurity, Meta, Other Existential Risk, Rationality, Nuclear Security and Mental Health).

Do people know of data on what cause prioritization looks like among various non-EA populations who might be defined as more "mainstream"

We don’t have data on how the general population would prioritize these causes; indeed, it would be difficult to gather such data, since most non-EAs would not be familiar with what many of these categories refer to.

We can examine donation data from the general population, however. In the UK we see the following breakdown:

[Chart: Charities Aid Foundation Giving Report 2018]

As you can see, the vast majority of donor money is going to causes which don’t even feature in the EA causes list.

(For example, "College Professors" might be representative of opinions that are both more mainstream and more hegemonic within a certain group)

I imagine that college professors might be quite unrepresentative and counter-mainstream in their own ways. Examining (elite?) university students or recent graduates might be interesting (though unrepresentative) as a comparison, as a group that a large number of EAs are drawn from.

Elite opinion and what representatives from key institutions think seems like a further interesting question, though would likely require different research methods.

If EA engagement predicts relatively more support for AI relative to climate change and global poverty, I'm sure people have been asking as to whether EA engagement causes this, or if people from some cause areas just engage more for some other reason. Has anyone drawn any conclusions?

I think there are plausibly multiple different mechanisms operating at once, some of which may be mutually reinforcing.

  • People shifting in certain directions as they spend more time in EA/become more involved in EA/become involved in certain parts of EA: there certainly seem to be cases where this happens (people change their views upon exposure to certain EA arguments), and it seems to mostly be in the direction we found (people updating in the direction of Long Term Future causes), so this seems quite plausible.
  • Differential dropout/retention across different causes: it seems fairly plausible that people who support causes which receive more official and unofficial sanction and status (LTF) would be more likely to remain in the movement than people who support causes which receive less (and support for which is often implicitly or explicitly presented as indicating that you don't really get EA). So it's possible that people who support these other causes drop out in higher numbers than those who support LTF causes. (I know many people who talk about leaving the movement due to this, although none to my knowledge have.)
  • There could also be third factors which drive both higher EA involvement and higher interest in certain causes (perhaps in interaction with time in the movement or some such). Unfortunately, without individual-level longitudinal data we don't know how far this reflects people changing their views vs the composition of different groups changing.

Comment by david_moss on Why do EA events attract more men than women? Focus group data · 2019-05-21T20:55:39.608Z · score: 7 (6 votes) · EA · GW

I'd say that if we're testing people's attitudes towards certain ideas in order to discern their attitudes towards effective altruism, then those ideas should actually be indicative of effective altruism, which seems exceedingly uncontroversial.

I wouldn't say "it's not possible to tell... unless we have a survey", because you could use various other methods: for example, looking at people's donation preferences, or looking at the population of those who seem to be interested in EA and seeing how many are men and how many are women. You could also examine the attitudes of non-EAs without a survey, e.g. via interviews, though each of these methods would have its own limitations.

Comment by david_moss on Why do EA events attract more men than women? Focus group data · 2019-05-21T20:19:16.747Z · score: 22 (11 votes) · EA · GW

I think you'd need to operationalise EA in a way that captures the ideas that are actually distinctive of EA i.e. using evidence and reason to do the most good you can do (or some such).

The main challenge in doing this with non-EA participants is that most expressions of this core EA idea are easily read platitudinously rather than as actually expressing the ideas that EAs hold, i.e. most people nominally believe in "evidence and reason" and doing "the most good." The fact that Kagan and Fitz found quite high and wide-ranging support for the ideas described in their survey, while EA continues to have little support, is perhaps suggestive of this mismatch. I've seen a similar mismatch in early SHIC surveys as well as surveys for other EA orgs, where putative expressions of EA ideas receive near-maximum levels of support, despite respondents not supporting any of their implications.

I think a good operationalisation of EA ideas (which would have to be tested empirically, of course) would include some explicit (strictly posed) statements of EA ideas, like doing the most good you can, along with other factors which seem implicitly related to EA, and stricter statements of which actions people would support. A few things which should probably be included would be:

  • Impartial maximisation
  • Strict cosmopolitanism (including, potentially, species, the far future etc. not just standard global poverty charities)
  • True cause neutrality (including some reverse-scored items about whether people would prefer certain causes/charities even if they weren't the most cost-effective or intuitively appealing (see Berman et al., 2018), and, similarly, whether people would, in principle, endorse supporting those in the distant future rather than those in need now)
  • One might want to include some further questions about epistemics/deliberation to see how people's endorsement of "evidence and reason" cashes out (i.e. do they actually endorse any epistemic principles that would be recognisable as or compatible with EA).

Naturally this is all unavoidably tied up with controversial questions about what counts as "EA", though I think something like this would be necessary to actually discern what people's views are about EA. I would guess (conservatively) that >50% of people who endorse something like the Kagan-Fitz operationalisation would strongly object to multiple ideas that are core to EA.

Comment by david_moss on Why do EA events attract more men than women? Focus group data · 2019-05-21T19:45:46.083Z · score: 15 (7 votes) · EA · GW

Neither of those pieces of evidence seem to count against the proposition that more men are interested in EA than women.

The focus groups were presumably speaking to men and women who were already interested in EA, not examining the broader population. It therefore can't speak at all to whether there are more men than women interested in EA.

Kagan and Fitz's linked study, while interesting, does not in my view tell us anything about relative levels of interest in EA. It seems to tell us about which people have general pro-charity attitudes and are vaguely cosmopolitan (in an extremely watered down sense e.g. being more willing to donate to international rather than domestic charities).

Edit: given that the focus groups intentionally selected men and women who were likely among the most interested (based on selection for highest attendance), it seems particularly clear that they could not tell us anything about this question.

Comment by david_moss on Announcing EA Hub 2.0 · 2019-04-09T18:12:34.695Z · score: 4 (3 votes) · EA · GW

It might also be useful to ping local leads if a new person registers and indicates that they haven't attended a local group event.

Thanks! We previously had an option for people taking the EA Survey to opt-in to be informed if there was a nearby local group/put in contact with the organiser. We'll definitely consider including this in future surveys. Allowing people to indicate specifically which local group they attend also sounds potentially useful.

Comment by david_moss on Announcing EA Hub 2.0 · 2019-04-09T01:42:39.360Z · score: 7 (5 votes) · EA · GW

Thanks for asking. Yes, searching by cause area, availability for hiring, volunteering or speaking, and by specific skills are all expected to be added to the Hub within a matter of weeks.

Comment by david_moss on SHOW: A framework for shaping your talent for direct work · 2019-03-13T21:17:48.203Z · score: 21 (11 votes) · EA · GW

I'm broadly sympathetic to this view, though I think another possibility is that people want to maximise personal impact, in a particular sense, and that this leads to optimising for felt personal impact more than actually optimising for amount of overall good produced.

For example, in the context of charitable donations, people seem to strongly prefer that their donation specifically goes to impact-producing things rather than to overhead that 'merely' supports impact-producing things, and that someone else's donation goes to cover the overhead (Gneezy et al., 2014). But, of course, in principle, these scenarios are exactly functionally equivalent.

In the direct work case, I imagine that this kind of intrinsic preference for specifically personal impact, a bias towards over-estimating the importance of the impact which an individual themselves brings about, and signalling/status considerations/extraneous motivations may all play a role.

Comment by david_moss on Identifying Talent without Credentialing In EA · 2019-03-13T19:06:37.586Z · score: 10 (5 votes) · EA · GW

I can't reply on behalf of Peter, but I would imagine the following:

  • Individuals at companies choose to hire for reasons other than expected performance (e.g. it's a publicly defensible decision to hire someone with recognized credentials and track record, whereas it's not publicly defensible to hire someone who lacks those but who otherwise seems like they'd perform really well). See general discussion of the signalling value of education.
  • Individuals at companies are bad at hiring for expected performance: e.g. relying on things which the evidence suggests don't predict job performance well (such as subjective impressions in an unstructured interview) and (possibly) credentials.
  • Many companies in the world can in theory hire people with 10-year track records doing similar roles in similar companies. People hiring for EA researcher roles typically can't find anyone with a 10-year track record in similar work, and even if you relax the assumptions somewhat, can still find far fewer people with any kind of track record in similar work.
  • The competencies Peter is hiring for may be more test-taskable than many that companies are hiring for. e.g. creating a cost-effectiveness model may be a better predictor of performance at creating cost-effectiveness models than the best available test tasks for "be an executive" or "manage HR."
Comment by david_moss on EA Survey 2018 Series: Donation Data · 2019-03-09T00:01:37.571Z · score: 4 (3 votes) · EA · GW

Thanks for your comment Elizabeth.

The axis was just mislabelled (one missing 0). We updated the graph to fix that.

As to the trendline, we just used a line of best fit, which assumes a linear relationship. The low R^2 (~30%) of this linear Donations~Income regression explains why it "looks a bit weird". It was used as an easy-to-interpret visual that depicted a simplified relationship between income and donations, but one which demonstrated the correct direction of effect. This does have the disadvantage of being prone to overfitting, and as we noted, "there are some large outliers driving this very strong relationship". We might expect a better fit from a nonlinear relationship; however, the later analysis, with differing linear responses for different donor groups, was a reasonable fit.
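(For anyone who wants to reproduce this kind of check on other data, here's a minimal sketch in Python of fitting a line of best fit and reading off the R^2. The variable names and simulated data are hypothetical; this is not the code used for the survey analysis itself.)

```python
# Minimal sketch: fit Donations ~ Income by ordinary least squares and
# report R^2 (the share of variance explained by the line). The data here
# are simulated; column names are hypothetical.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
income = rng.lognormal(mean=11, sigma=0.7, size=500)       # simulated incomes
donations = 0.03 * income + rng.normal(0, 2000, size=500)  # noisy linear response

fit = linregress(income, donations)
print(f"slope = {fit.slope:.4f}, intercept = {fit.intercept:.1f}")
print(f"R^2 = {fit.rvalue ** 2:.2f}")
```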

Comment by david_moss on EA Survey 2018 Series: Where People First Hear About EA and Influences on Involvement · 2019-02-23T18:59:04.347Z · score: 6 (4 votes) · EA · GW

Thanks Ben!

Yes, I think next year one thing we will likely do is include some questions tracking EA knowledge (of specific concepts), which we included in the last Local Groups Survey, but have not included in an EA Survey thus far, as well as some other measures like "Have you read [such and such book]."

Something else we looked at, but which didn't really fit into the above post anywhere, was where supporters of AI/Long-Term Future causes in general first heard about EA. We basically just found that a disproportionately large number first came from LessWrong (26% of AI supporters, compared to 12% of the sample as a whole), and not much difference anywhere else (except commensurately lower percentages from other sources). For Poverty (as you might expect) there was pretty much the converse picture, but with smaller differences.


Comment by david_moss on Open Thread: What’s The Second-Best Cause? · 2019-02-20T17:49:33.208Z · score: 23 (11 votes) · EA · GW

Interesting question!

EA Survey Cause Selection data somewhat speaks to this. One difference is that we didn't do forced ranking on the cause prioritisation scale, e.g. people could rate more than one cause as "near top priority," but we can still compare the % of people who selected each cause as "near top priority" (the second highest ranking that could be given).

Below I show what % of people selected each cause as "near top" priority for those who selected AI, Poverty or Animal Welfare as "top priority" (I could do this for the other causes on request).

As you might expect, people who rate AI as top are more inclined to rate other LTF/x-risk causes as near top priority, and more people who rate Poverty as top rate Climate Change as near top (these tended to follow similar patterns in the analyses in our main report on this). Among people who selected Animal Welfare as top, the largest number selected Poverty as near top priority.

Notably, Biosecurity appears as the cause most selected as "near top" by AI advocates and the second most selected cause for those who rate Poverty top. This is in line with the results discussed in the main post, where Biosecurity received the highest % of "near top" ratings of any cause (slightly higher than Global Poverty) but very low numbers of "top priority" ratings, meaning that it is only middle of the pack (5/11) in terms of "top or near top priority" ratings.
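(For what it's worth, this kind of breakdown is just a grouped percentage table; here is a minimal sketch in Python, with hypothetical column names and toy data rather than the actual survey responses.)

```python
# Sketch of the cross-tabulation described above: among respondents whose
# top-priority cause is X, what % rate each other cause "near top"?
# The data and column names here are hypothetical toy examples.
import pandas as pd

df = pd.DataFrame({
    "top_cause": ["AI", "AI", "Poverty", "Animal Welfare", "Poverty"],
    "biosecurity_near_top": [1, 0, 1, 0, 1],
    "climate_near_top": [0, 0, 1, 1, 1],
})
pct = df.groupby("top_cause")[["biosecurity_near_top", "climate_near_top"]].mean() * 100
print(pct.round(1))  # % rating each cause "near top", within each top-priority group
```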

Comment by david_moss on EA Survey 2018 Series: Where People First Hear About EA and Influences on Involvement · 2019-02-19T19:22:27.938Z · score: 1 (1 votes) · EA · GW

Thanks! Fixed (I hope).

Comment by david_moss on Does giving to charity make it more likely you‘ll be altruistic in future? · 2019-02-14T22:48:24.762Z · score: 1 (1 votes) · EA · GW

That seems like one example that would fall within this, yes.

Comment by david_moss on Does giving to charity make it more likely you‘ll be altruistic in future? · 2019-02-14T21:08:46.933Z · score: 5 (5 votes) · EA · GW

This (2015) review reports that:

individuals are more likely to exhibit consistency when they focus abstractly on the connection between their initial behavior and their values, whereas they are more likely to exhibit licensing when they think concretely about what they have accomplished with their initial behavior—as long as the second behavior does not blatantly threaten a cherished identity

So, broadly speaking, I would expect that acts which make public (or just make privately salient to you) a particular moral identity (as a person who acts well) would increase moral consistency effects, whereas acts which emphasise the amount of good you have done would increase licensing effects.

Comment by david_moss on The Narrowing Circle (Gwern) · 2019-02-12T21:37:20.126Z · score: 16 (9 votes) · EA · GW

Much discussion of Moral Circle Expansion seems hampered by lack of conceptual clarity about what the Moral Circle means.

There are a lot of distinctions that need to be drawn, but here are two positions on one dimension:

  1. The moral circle merely refers to which (groups or types of) entities are viewed as possible targets of moral regard
  2. The moral circle refers to the amount of actual moral concern granted to such entities

A lot more distinctions should be drawn on this dimension alone (e.g. for "actual moral concern", are we interested in abstract attitudes of concern, the actual amount of effort expended, or the actual treatment extended?), but even these suffice for now.

On the first view, which seems somewhat closer to original uses of the term, it does seem like retrenchment of the Moral Circle should be expected to be quite rare, at least once you reach contexts like our own (in WEIRD societies) where there are extremely prevalent memes about at least potentially considering entities as possible moral targets if they might be persons in any sense (or more generally in contexts where the conditions for considering the possibility of including some group in the moral circle are as extensive and plural as they are now). It seems relatively hard for groups to fall entirely out of the moral circle in the first sense, in such cases, except in cases like those you mention where we decide that certain entities don't exist or aren't sentient.

With the more expansive second sense of the Moral Circle (which seems to be the one people are using), where all that is required for Moral Circle expansion/retraction is an increase or reduction in the moral concern extended (as seems to be implied by examples such as more/less care being granted to the elderly and so on), it seems like the Moral Circle should be expected to be expanding and retracting near constantly on an individual or group basis. This is especially so if we understand degree of moral concern as meaning the actual extent to which needs are weighted and help extended (in which case this will, almost necessarily, be pervaded by tradeoffs in a near zero-sum fashion), which is why drawing further distinctions within this category is so important.

EA Survey 2018 Series: Where People First Hear About EA and Influences on Involvement

2019-02-11T06:05:05.829Z · score: 34 (17 votes)

EA Survey 2018 Series: Group Membership

2019-02-11T06:04:29.333Z · score: 34 (13 votes)
Comment by david_moss on Disentangling arguments for the importance of AI safety · 2019-01-23T17:34:59.672Z · score: 3 (3 votes) · EA · GW

And this proliferation of arguments is (weak) evidence against their quality: if the conclusions of a field remain the same but the reasons given for holding those conclusions change, that’s a warning sign for motivated cognition (especially when those beliefs are considered socially important).

I'm not sure these considerations should be too concerning in this case for a couple of reasons.

I agree that it's concerning where "conclusions... remain the same but the reasons given for holding those conclusions change" in cases where people originally (putatively) believe p because of x, then x is shown to be a weak consideration and so they switch to citing y as a reason to believe p. But from your post it doesn't seem like that's necessarily what has happened, rather than a conclusion being overdetermined by multiple lines of evidence. Of course, particular people in the field may have switched between some of these reasons, having decided that some of them are not so compelling, but in the case of many of the reasons cited above, the differences between the positions seem sufficiently subtle that we should expect cases of people clarifying their own understanding by shifting to closely related positions (e.g. it seems plausible someone might reasonably switch from thinking that the main problem is knowing how to precisely describe what we value to thinking that the main problem is not knowing how to make an agent try to do that).

It also seems like a proliferation of arguments in favour of a position is not too concerning where there are plausible reasons why we should expect multiple of the considerations to apply simultaneously. For example, you might think that any kind of powerful agent typically presents a threat in multiple different ways, in which case it wouldn't be suspicious if people cited multiple distinct considerations as to why they were important.

Comment by david_moss on EA Survey 2018 Series: Cause Selections · 2019-01-19T18:07:58.311Z · score: 7 (3 votes) · EA · GW

I think you can get a very rough sense of possible changes by comparing the results from different years (as in the first two graphs in the post), but given the difficulties in interpreting these differences I would be wary of presenting these as % changes. Aside from possible differences in the sample across different years, changing categories for causes would also obviously distort things (we start with a fairly strong presumption against changing categories for this reason, but in some cases, the development of Mental Health as a field being one, it's unavoidable).

Comment by david_moss on EA Survey 2018 Series: Cause Selections · 2019-01-19T16:52:01.075Z · score: 4 (3 votes) · EA · GW

Yeh, I certainly think this would be valuable, although it would need to be weighed against the fact that we already have more than 10 causes listed, which may be pushing it. We may be able to accommodate this by splitting out the questions into questions about broader cause areas and then about more specific causes.

EA Survey 2018 Series: Cause Selections

2019-01-18T16:55:31.074Z · score: 65 (25 votes)
Comment by david_moss on EA Survey 2018 Series: Donation Data · 2018-12-28T18:48:23.475Z · score: 1 (1 votes) · EA · GW

Thanks for the suggestion! That seems likely to be at least one of the things that is being picked up by the 'financial constraint' responses.

Comment by david_moss on EA Survey 2018 Series: Donation Data · 2018-12-10T23:34:58.338Z · score: 3 (3 votes) · EA · GW

Thanks!

  1. Were income numbers pre or post-tax?

All pre-tax.

  2. Do you have a number for average earnings of non-students who are earning to give? $52,000 is a pretty low number for that category.

The numbers are likely lowered (as they were elsewhere) by a lot of fairly new, lower-earning/donating people who are just starting out on that career path. Median donations for (non-student) E2G respondents were $3,000, on a median income of $70,000. Only above the 63rd percentile in this category were people earning more than $100,000.

  3. How did the survey define the difference between "earning to give" and "other", if at all?

These were just fixed response options without additional definition.

Comment by david_moss on EA Survey 2018 Series: Donation Data · 2018-12-10T19:37:54.650Z · score: 1 (1 votes) · EA · GW

Thanks Greg. These were selected a priori (though informed by our prior analyses of the data).

Due to missing data there was some difficulty doing stepwise elimination with the complete dataset. We've added a model including all interactions to the regression document. This had a slightly better AIC (3093 vs 3114).
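(To illustrate the kind of model comparison referred to here, a minimal sketch in Python with statsmodels, comparing a main-effects model against one adding all two-way interactions by AIC. The variables and simulated data are hypothetical, not the actual survey analysis.)

```python
# Minimal sketch (not the original analysis code): compare a main-effects
# OLS model against one adding all two-way interactions, using AIC.
# Variable names and the simulated data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "income": rng.lognormal(11, 0.7, n),
    "student": rng.integers(0, 2, n),
    "years_in_ea": rng.integers(0, 10, n),
})
# Simulated outcome includes an income x student interaction.
df["donation"] = 0.02 * df["income"] * (1 - 0.5 * df["student"]) + rng.normal(0, 1500, n)

main = smf.ols("donation ~ income + student + years_in_ea", data=df).fit()
interact = smf.ols("donation ~ (income + student + years_in_ea) ** 2", data=df).fit()

# Lower AIC = better trade-off between fit and model complexity.
print(f"main effects AIC: {main.aic:.0f}")
print(f"interactions AIC: {interact.aic:.0f}")
```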

Comment by david_moss on EA Survey 2018 Series: Donation Data · 2018-12-10T17:57:59.130Z · score: 1 (1 votes) · EA · GW

The people who selected 'research' were disproportionately students compared to the other categories. Excluding all students across categories, 251 people selected research, and median income and donations were still significantly lower.

Comment by david_moss on EA Survey 2018 Series: Donation Data · 2018-12-10T17:48:34.622Z · score: 2 (2 votes) · EA · GW

Thanks. Updated.

EA Survey 2018 Series: Donation Data

2018-12-09T03:58:43.529Z · score: 81 (36 votes)
Comment by david_moss on EA Survey Series 2018 : How do people get involved in EA? · 2018-12-02T00:23:56.419Z · score: 3 (2 votes) · EA · GW

Thanks Ben. Yeh, this is 3 people in 2009 and 3 people in 2010 (out of 2473 responses to these questions overall). There are a handful of similar errors for Doing Good Better. Every year, there are a few people who seem to get the years wrong in this way (alongside a lot of responses saying explicitly that they don't remember).

Anecdotally, (both in the survey and elsewhere) I find a surprising number of people confuse CEA, 80K and GWWC (not to mention, Rethink Charity, its various projects and Charity Science).

Comment by david_moss on From humans in Canada to battery caged chickens in the United States, which animals have the hardest lives: results · 2018-11-30T01:32:56.863Z · score: 3 (3 votes) · EA · GW

Thanks Ben! Corrected: we certainly agree that there are many more bugs than factory farmed fish.

Comment by david_moss on EA Survey Series 2018: Subscribers and Identifiers · 2018-11-26T22:36:32.387Z · score: 7 (4 votes) · EA · GW

Agreed. As per my reply to you here, we're still going to talk about the influence of different levels of involvement with regards to cause selection, and in a post addressing your question about levels of involvement and the different routes by which people get involved in EA.

Comment by david_moss on EA Survey Series 2018: Subscribers and Identifiers · 2018-11-26T22:25:07.706Z · score: 3 (2 votes) · EA · GW

Thanks Ben. I totally agree and we're going to go into this a lot more in the Cause Preference post.

Comment by david_moss on EA Survey Series 2018 : How do people get involved in EA? · 2018-11-23T16:36:53.625Z · score: 2 (2 votes) · EA · GW

Thanks Ben, this seems like a great suggestion. We're going to be talking about different levels of engagement a lot in subsequent posts in the series (with regards to cause preference in particular), but will make sure to put together an analysis on this specifically.

Comment by david_moss on Is EA Growing? Some EA Growth Metrics for 2017 · 2018-11-21T19:47:56.872Z · score: 3 (2 votes) · EA · GW

the sum of local EA group facebook numbers might serve as a good proxy for the size of the EA movement per se

This is an interesting proposition, but one thing that will limit its usefulness, I think, is that lots of EAs are members of multiple local groups' Facebook groups, presumably either to show support or out of interest in their content. Aside from that, many members of the online groups appear not to be engaged in the local EA community (and perhaps not really engaged in EA at all): for example, EA London has around 2,000 members in its Facebook group but many fewer people who are actively engaged and attend events and so on.

It could also be combined with other metrics e.g. with impact (e.g. GiveWell donations, 80k Hours career changes) to assess communications effectiveness, or with time (of the local EA team) and cost invested to assess operational effectiveness. If compared across local groups, the metrics would highlight local success stories and potentially where groups might need help.

The Local Groups Survey did this to some extent: measuring (self-reported) number of group members, Pledges, Career changes, funds raised or donations influenced, among other things. We don't publicly release a breakdown of particular groups, of course, but we did look at the correlations between different variables and performance on different metrics. As you'd expect there was a fairly good correlation between success on different metrics, though with plenty of exceptions. I agree it would be valuable to have more systematic investigation of group performance of this kind to identify trends and where things seem to be working particularly well or not well.

Comment by david_moss on EA Survey Series 2018 : How do people get involved in EA? · 2018-11-20T21:50:08.517Z · score: 6 (3 votes) · EA · GW

Hi Ben,

Thanks for the comment!

We will definitely keep refining the categories year-by-year depending on which seem more or less significant.

“The Sam Harris and Joe Rogan podcasts were done by Will as part of the promotion campaign for DGB while he was working at CEA/80k so could arguably be coded as DGB/CEA/80k”

I entirely agree that Will’s podcasts can be counted as CEA/80K/DGB/Will in terms of assigning credit.

I’m not sure what that tells us about how we ought to design the survey though. In terms of coding open comment responses: as I noted, how to classify people’s open comment answers is somewhat fuzzy; many comments ambiguously fit multiple different options, and those numbers can’t be directly compared to the fixed category responses. Your podcast case is a perfect example since, as you note, if someone writes in “Joe Rogan Podcast” it could be classified as ‘DGB’, ‘CEA’, ‘80K’, ‘Podcast’ or ‘Will’ (or depending on our interests, we could code it as ‘mass media outreach’ or ‘online’ and so on). I could go back through the open comments and try to code things as CEA- or 80K-related (per your specification), but it would be a pretty vague and heterogeneous category with a lot of edge cases like these, which would make it hard to interpret. Explicit references to existing EA orgs or Will etc. were coded as such.

Also, crucially, a lot of the comments weren’t specific enough to code in that way. In the specific case of Sam Harris podcasts, for example, at least one person mentioned the Peter Singer episode (no-one explicitly mentioned Eliezer’s AI episode that I saw, but in principle some people might have first heard through that one), which means technically we can’t code every reference to Sam Harris’ podcast as Will’s, though I think it’s entirely reasonable of you to assume that many of the non-specific comments were referring to Will’s. In terms of refining the categories: note that DGB and 80K were already available as options and respondents chose to select “Other” and write in a response anyway. It would be difficult to change the fixed categories in such a way that people who would otherwise write “Joe Rogan podcast” would instead select “CEA”/“80K” etc. as a category. For one thing, people would need to know (and remember) that, when they hear Will on a podcast, this should be counted as 80K or CEA or as part of the marketing for DGB or whichever category. They may also just feel that ‘Other: Podcast’ better captures their case than DGB or 80K. I think it would be difficult to specify all the things that could reasonably be counted as CEA’s work in a given fixed option (e.g. ‘80,000 Hours, inc. Doing Good Better, 80K Podcasts, Will MacAskill podcasts etc.’), but we’ll continue reviewing the results and trying to think of the best options.

I agree that in light of this year’s results, including Podcast as its own category next year may well make sense. We could also split out Book/Article etc. into multiple options. Though doing this might actually give us less information in terms of attributing responses to CEA/80K/Will etc., since if people just select ‘Podcast’ as a fixed response, rather than writing in an ‘Other’ response, we won’t know which podcast it was. We could include/require an open comment to specify further in addition to fixed responses, but requiring an open comment would be significantly more onerous for respondents and might reduce the response rate, so it’s a tricky balance. Alternatively we could include more fixed options (Podcast: Will; Podcast: 80K/Rob Wiblin; Podcast: Other, and so on), and ditto for other specific books, websites and so on (so far, Doing Good Better is the only one we’ve extended this treatment to), but of course that might make the question too unwieldy.

“Presumably some of the books / articles / talks are also other materials produced by the organisations or press coverage they sought out - does that seem right”

I definitely agree this is right in terms of assigning credit: no doubt EA orgs such as CEA should claim a lot of credit for vicariously bringing about lots of other outcomes (e.g. lots of Personal Contacts are presumably influenced by the prior work of EA orgs), but again, I don’t think there’s a way to capture that in people’s First Heard responses. And again, when open responses explicitly referred to an EA org, they were coded accordingly, as well as being coded as ‘Book’ or ‘Article’.

“Likewise, maybe 'search' and 'facebook' should be removed as categories, because they're channels you use to find the other content listed. Presumably everyone who found out about EA through 'facebook' likely saw a post by a friend, so should be a personal referral, or saw a post by one of the orgs, so should be coded as an org.”

I think it’s pretty plausible that they should be removed next year (given the relatively small numbers attached to them). Although it doesn’t necessarily follow that the answers could neatly be split off into “personal contact” or “an EA org” because people may remember that they saw something on Facebook, but not which org was responsible (a lot of comments were pretty vague like this): so getting rid of “search” and “facebook” would probably mean a lot more “Other” responses.

“I'm also surprised to see https://www.effectivealtruism.org/ isn't listed”

We’d certainly be open to including this website if you are particularly interested in it (although then there’s a question of which other specific EA websites should or shouldn’t be included). For what it’s worth we didn’t receive a single “Other” response referring to it, that I saw, but maybe people classified this as “Search” or something else.

EA Survey Series 2018 : How do people get involved in EA?

2018-11-18T00:06:12.136Z · score: 48 (27 votes)
Comment by david_moss on Cross-post: Think twice before talking about ‘talent gaps’ – clarifying nine misconceptions, by 80,000 Hours. · 2018-11-14T19:17:58.866Z · score: 8 (5 votes) · EA · GW

What’s more, the costs of raising salaries to attract new staff are often large because in order to be fair, you may also need to raise salaries for existing staff. For instance, if you have 10 equally-paid staff, and raise salaries 10% to attract one extra person, that final staff member effectively costs double the average previous salary.

This also seems to have an impact on other orgs. I have lost count of the number of times I have heard people refer to "starting salaries at [large EA org with a >$1,000,000 annual budget]" as a baseline for salary expectations. This clearly has a disproportionately negative effect on smaller EA orgs or those trying to run more cheaply.

Comment by david_moss on On 'causes' · 2014-06-25T19:08:00.000Z · score: 0 (0 votes) · EA · GW

The question of whether "sub-goal" x is the "simplest" or best "proxy" for our more ultimate goals doesn't seem particularly useful and can be highly misleading, as in the example you chose. You conclude that promoting animal welfare is very probably not the best cause (because promoting empathy probably dominates it as a proxy), whereas we can't say the same for promoting human welfare. But it could still be the case that promoting animal welfare is a better proxy than human welfare for far-future flourishing, even though there's a yet better intermediary in the case of animal welfare. The problem is that causes admit of multiple descriptions, and we can generate multiple conflicting but practically uninformative statements about proxies and causes.