I am the Principal Research Manager at Rethink Priorities working on, among other things, the EA Survey, the Local Groups Survey, and a number of studies on moral psychology, focusing on animal welfare, population ethics and moral weights.
In my academic work, I'm a Research Fellow working on a project on 'epistemic insight' (mixing philosophy, empirical study and policy work) and moral psychology studies, mostly concerned either with effective altruism or metaethics.
I've previously worked for Charity Science in a number of roles and was formerly a trustee of EA London.
I personally think the EA community could plausibly grow 1000-fold compared to its current size, i.e. to 2 million people, which would correspond to ~0.1% of the Western population. I think EA is unlikely to be able to attract >1% of the (Western and non-Western) population, primarily because understanding EA ideas (and being into them) typically requires a scientific and prosocial/altruistic mindset, advanced education, and the right age (no younger than ~16, and not so old as to be too busy with lots of other life goals). Trying to attract >1% of the population would in my view likely lead to a harmful dilution of the EA community.
While we're empirically investigating things, what proportion of the population could potentially be aligned with EA also seems like a high-priority thing to investigate.
I wonder whether this alters the calculus for whether to give to donor lotteries (as opposed to EA Funds)?
Four months ago, it seemed like donating to the donor lottery was being recommended as a kind of default (unless the donor had a particularly cool and unusual idea for where to donate). I speculated that it might be better for a lot of donors to just donate to the Funds, resulting in the money being allocated by the fund managers rather than whoever won the lottery[^1]. It seemed at the time that the response was fairly sanguine about the possibility that individual donors (e.g. lottery winners) might make better allocations than the fund managers.
If we thought that the EA Funds are quite well-funded relative to the potential projects available to fund, we might be more inclined to think this is true (since the lottery winner can, in theory, seek out more promising opportunities). If, however, EA Funds are relatively under-funded, and can't fund many promising opportunities available to them, then it might seem better to just encourage people to donate to the funds by default (unless, perhaps, they are particularly confident that they or others could beat the fund managers with more time to reflect).
One might argue that it would be better for people to donate to the lottery even when the Funds are very underfunded, because the winner can make a judicious decision (potentially advised by the Fund managers) about whether they should just donate to the Funds or not. As I noted at the time, I'm a little worried that lottery winners might be biased against simply donating their winnings back to the Funds. And, more generally, one might wonder why the lottery winner would be expected to make a better decision about that question than the fund managers themselves. There may also be other practical advantages to people donating directly to the funds if they are under-funded: perhaps grants can be made more quickly via direct donations than via a lottery winner conducting their own investigations and possibly choosing to donate to the funds; or perhaps funding decisions can be made more reliably if the funds have a predictable amount of money coming in, rather than a large pool of money that might go to them, might be donated to projects they would recommend, or might be donated elsewhere. But of course I don't know whether any of those practical details hold.
Though to be clear, I also speculated that it could be better for people to make individual donation decisions, rather than to donate to the lottery, if this led to more investigation, experimentation and knowledge generation from a larger number of more engaged individuals.
It seems like the core issue here is that even though, in a certain sense, it would be good if any time you were sitting not doing anything you were instead going off to improve the world in some way, in practice, endeavouring to do this would not be possible and/or would be counterproductive. For one thing, you need rest, so if you were to always (or just too often) try to do something productive rather than sitting around, you'd eventually fail or become less productive overall. More generally, it seems like, in the long run, you may well do more good by focusing on the highest-priority things (e.g. college work and your long-term career), rather than spending all available time now on direct impact.
Even more generally, it seems like the approach of worrying about whether, at each specific moment, you could be doing a higher-priority thing is stressing you out to a clearly counter-productive extent (i.e. you explicitly note that worrying about this is making you less likely to do anything productive). If so, the best thing to do from a utilitarian perspective is not to try to calculate the best thing to do in each given moment and then try to do it, but to take a more meta-level approach of working out what kind of strategy will maximise the utility you produce in the long run (see discussions of two-level utilitarianism). People face analogous issues in deciding how to spend money rather than time: many people aiming for a high level of frugality find that trying to work out for every small purchase whether it's utility-maximising (even allowing considerations like "If I don't buy myself an ice cream on this occasion, I will go mad with unhappiness in the long run, so I will buy the ice cream") is too stressful to maintain in the long run, so they establish a more fixed rule that they will donate X, and everything left over they can spend however they like.
DM: While I've no doubt that many of the groups have been founded by people who joined since 2015*, I suspect that even if we cut those people out of the data, we'd still see an increase in the number of local groups over that time frame, so we can't infer that EA is continuing to grow based on the increase in local group numbers.
BW: It sounds like maybe when you say "we can't infer that EA is continuing to grow based on increase in local group numbers" you mean "part of the growth might be explained by things other than what would be measured by a change in number of groups"? (Or possibly "increasing group numbers is evidence of growth since 2015, but not necessarily evidence of growth since, say, 2019"?)
I meant something closer to: 'we can't infer Y from X, because we'd still expect to observe X even if ¬Y.'
My impression is still that we have been somewhat talking past each other, in the way I described in the second paragraph of my previous comment. My core claim is that we should not look at the number of new EA groups as a proxy for growth in EA, since many new groups will just be a delayed result of earlier growth in EA (as it happens, I agree that EA has grown since 2015, but we'd see many new EA groups even if it hadn't). Whereas, if I understand it, your claim seems to be that since we know that at least some of the new groups were founded by people new to EA, we know that there has been some new EA growth.
Empirical research on people's responses to the term (and alternative terms) certainly seems valuable, and important to do before any potential rebrand.
Anecdotally, I find that people hate reference to "priorities" or "prioritising" as much or more than they hate "effective altruism." Referring to specific "global priorities" quite overtly implies that other things are not priorities. And terminology aside, I find that many people outright oppose "prioritisation" in the field of philanthropic or pro-social endeavours for roughly this reason: it's rude/inappropriate to imply that certain good things that people care about are more important than others. (The use of the word "global" just makes this even worse: this implies that you don't even just think that they are local or otherwise particular priorities, but rather that they are the priorities for everyone!)
I'm not sure where you are disagreeing, because I agree that many people founding groups since 2015 will in fact have joined the movement later than 2015. Indeed, as I show in the first graph in the comment you're replying to, newer cohorts of EAs are much larger than previous cohorts, and as a result most people (>60%) in the movement (or at least the EA Survey sample[^1]) by 2019 are people who joined post-2015. Fwiw, this seems like more direct evidence of growth in EA since 2015 than any of the other metrics (although concerns about attrition mean that it's not straightforward evidence that the total size of the movement has been growing, merely that we've been recruiting many additional people since 2015).
My objection is that pointing to the continued growth in number of EA groups isn't good evidence of continued growth in the movement since 2015 due to lagginess (groups being founded by people who joined the movement in years previous). It sounds like your objection is that since we also know that some of the groups are university groups (albeit a slight minority) and university groups are probably mostly founded by undergraduates, we know that at least some of the groups founded since 2015 were likely founded by people who got into EA after 2015. I agree this is true, but think we still shouldn't point to the growth in number of new groups as a sign of growth in the movement because it's a noisy proxy for growth in EA, picking up a lot of growth from previous years. (If we move to pointing to separate evidence that some of the people who founded EA groups probably got into EA only post 2015, then we may as well just point to the direct evidence that the majority of EAs got into EA post-2015!)
[^1]: I don't take this caveat to undermine the point very much because, if anything I would expect the EA Survey sample to under-represent newer, less engaged EAs and over-represent EAs who have been involved longer.
I think this applies to growth in local groups particularly well. As I argued in this comment above, local groups seem like a particularly laggy metric, since people usually start local groups after at least a couple of years in EA. While I've no doubt that many of the groups have been founded by people who joined since 2015*, I suspect that even if we cut those people out of the data, we'd still see an increase in the number of local groups over that time frame, so we can't infer that EA is continuing to grow based on the increase in local group numbers.
*Indeed, we should expect this because most people currently in the EA community (at least as measured by the EA Survey) are people who joined since 2015. In each EA survey, the most recent cohorts are almost always much larger than earlier cohorts (with the exception of the most recent cohort of each survey since these are run before some EAs from that year will have had a chance to join). See this graph which I previously shared, from 2019 data, for example:
(Of course, this offers, at best, an upper bound on growth in the EA movement, since earlier cohorts will likely have had more attrition).
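To illustrate the lag argument, here is a toy simulation (all the numbers are made up, not drawn from survey data): even if the number of people joining EA stops growing entirely after 2015, the cumulative number of groups keeps rising for years afterwards, because founders typically start groups a few years after joining.

```python
# Hypothetical joiner counts: strong growth up to 2015, then flat
# (i.e. zero *growth* in joiners, not zero joiners).
joiners = {2010: 100, 2011: 150, 2012: 250, 2013: 400,
           2014: 600, 2015: 800, 2016: 800, 2017: 800,
           2018: 800, 2019: 800, 2020: 800}

FOUNDING_RATE = 0.02  # assumed share of each cohort that eventually founds a group
LAG = 3               # assumed typical years between joining EA and founding a group

# Groups founded in a given year are driven by the cohort that joined LAG years earlier.
new_groups = {year + LAG: int(n * FOUNDING_RATE) for year, n in joiners.items()}

cumulative, total = {}, 0
for year in range(2013, 2021):
    total += new_groups.get(year, 0)
    cumulative[year] = total

for year, n in cumulative.items():
    print(year, n)  # group counts keep climbing well past 2015
```

The point is just that cumulative group numbers continue rising through 2020 even though recruitment plateaued in 2015, which is why group counts are a noisy proxy for current growth.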
do you think it's likely that there are about as many group members as before, spread across more groups? Or maybe there are more group members, but the same number of total people engaged in EA, with a higher % of people in groups than before?
There's definitely been a very dramatic increase in the percentage of EAs who are involved in local groups (at least within the EA Survey sample) since 2015 (the earliest year we have data for).
In EAS 2019 this was even higher (~43%) and in EAS 2020 it was higher still (almost 50%).
So higher numbers of local group members could be explained by increasing levels of engagement (group membership) among existing EAs. (One might worry, of course, that the increase in percentage is due to selective attrition, but the absolute numbers are higher than 2015 as well.)
Unfortunately we don't have good data on the number of local group members, because the measures in the Groups Survey were changed between 2019-2020. On the one measure which I was able to keep the same (total number of people engaged by groups) there was a large decline 2019-2020, but this is probably pandemic-related.
David Moss shares this chart saying "I fear that most of these metrics aren't measures of EA growth, so much as of reaping the rewards of earlier years' growth... looking at years in EA and self-reported level of engagement, we can see that it appears to take some years for people to become highly engaged".
I have a different interpretation, which is that less engaged people are much more likely to churn out of the movement entirely and won't show up in this data.
Thanks for quoting me, though you cut out the bit where I say:
we can see that it appears to take some years for people to become highly engaged. (Although, of course, this is complicated by potential attrition, i.e. people who aren't engaged dropping out of earlier cohorts. We'll talk more about this in this year's series).
That said, while differential attrition is a serious problem (particularly in the earlier cohorts), I think it remains clear that people typically take some years to become highly engaged. Clearly very, very few people are highly engaged in their first year or so of EA involvement (only about 5%, or 10 people, were highly engaged from the 2019 cohort in 2019). If EA were gaining highly engaged EAs only at that rate (with the percentage of engaged EAs increasing only due to less engaged EAs dropping out), we'd be in a very poor state, gaining only a handful of engaged EAs per year. It also doesn't accord with the raw numbers of highly engaged EAs in each of the cohorts: there were 3x as many highly engaged EAs in the 2018 cohort as the 2019 cohort, twice as many in the 2017 cohort as the 2018 cohort, and about 30% more in 2016 than in 2017. And total cohort size hadn't been decreasing dramatically over that time frame either. So it seems more natural to conclude that EAs are slowly increasing in engagement. As I say, we'll go into this in more detail in this year's series though.
One other thing to bear in mind about growth in groups is that, as I discussed in my reply to Aaron, this metric may be measuring the fruits of earlier growth more than current growth in the movement. My impression is that many groups are founded by people who are not themselves new to EA, so if you get people into the movement in year X, you would expect to see groups being founded some years after they join. This lag may give the false reassurance that the movement is still growing when really it's just coasting on past successes.
An "estimate of the number of groups including outside of the survey sample" wouldn't quite make sense here, because I think we have good grounds to think that we (including CEA) know of the overwhelming majority of groups that existed in 2020, and that we captured >90% of the groups that exist.
For earlier years it's a bit more speculative; what we can do there is something like what I mentioned in my reply to habryka: comparing numbers across cohorts across years to get a sense of whether numbers actually seem to be growing or whether people from the 2019 survey are just dropping out.
Yeah, these graphs are purely based on groups which were still active and took the survey in 2019, so they won't include groups that existed in years pre-2019 and then stopped existing before 2019. We've changed the title of the graph to make this clearer.
That said, when we compare the pattern of growth across cohorts across surveys for the LGS, we see very similar patterns across years, with closely overlapping lines. This is in contrast to the EA Surveys, where we see consistently lower numbers within previous cohorts across successive surveys, in line with attrition. This still wouldn't capture groups which come into existence and then almost immediately go out of existence before they have a chance to take a survey. But I think it suggests the pattern of strong growth up to and including 2015 and then a plateau (of growth, not of numbers) is right.
I fear that most of these metrics aren't measures of EA growth, so much as of reaping the rewards of earlier years' growth. They seem compatible with a picture where EA grew a lot until 2015 and then these EAs slowly became more engaged, moved into different roles and produced different outcomes, without EA engaging significantly more new people since 2015.
We have some concrete insight about the 'lag' between people joining EA and different outcomes based on EA Survey data:
- On engagement, looking at years in EA and self-reported level of engagement, we can see that it appears to take some years for people to become highly engaged. Mean engagement continues to increase up until 5-6 years in EA, at which point it plateaus. (Although, of course, this is complicated by potential attrition, i.e. people who aren't engaged dropping out of earlier cohorts. We'll talk more about this in this year's series).
- The mean length of time between someone first hearing about EA and taking the GWWC pledge (according to 2019 EAS data) is 1.16 years (median 1 year). There are disproportionately more new EAs in the sample though, since (germane to this discussion!) EA does seem to have consistently been growing year on year (although per the above this could also be confounded somewhat by attrition) and of course people who just heard of EA in the last year couldn't have taken the GWWC pledge more than 1 year after they first heard of EA. So it may be that a more typical length of time to take the pledge is a little longer.
- Donations: these arguably have a lower barrier to entry compared to other forms of engagement, yet still increase dramatically with more time in EA.
Of course, this is likely somewhat confounded by the fact that people who have spent more time in EA have also spent more time developing their career and so their ability to donate, but the same confound could account for observed increase in EA outputs over time even if EA weren't growing.
This sentiment came up a fair amount in the [2019 EA Survey data](https://forum.effectivealtruism.org/posts/F6PavBeqTah9xu8e4/ea-survey-2019-series-community-information#Changes_in_level_of_interest_in_EA__Qualitative_Data) about reasons why people had decreased levels of interest in EA over the last 12 months.
It didn't appear in our coding scheme as a distinct category, but particularly within the "diminishing returns" category below, and also in response to the question about barriers to further involvement in the EA community below that, there were a decent number of comments from people saying that they were interested in having impact but weren't interested in being involved in the EA community.
Just to clarify, the EA Groups Survey is a joint project of Rethink Priorities and CEA (with all analysis done by Rethink Priorities). The post you link to is written by Rethink Priorities staff member Neil Dullaghan.
The writeup for the 2020 survey should be out within a month.
according to CEA's 2020 annual review, they tracked 250 active groups via the EA Groups survey, compared to 176 at the end of 2019. So I think EA, in terms of number of active groups, has actually grown a lot in 2020 compared to 2015-2019.
This isn't a safe inference, since it's just comparing the size of the survey sample, and not necessarily the number of groups. That said, we do observe a similar pattern of growth in 2020 as in 2019.
I was a little surprised not to see the 2019 EA Survey mentioned here since we included around 8 questions about these issues that were written and requested by CEA.
As in your interviews, the 2019 EA Survey also asked respondents why people they knew left EA
We also asked whether people’s level of interest in EA had (increased or) decreased and what led to that change
We asked what factors were important for people’s retention
And we also asked about what barriers people faced to greater involvement in EA
All of these questions can be analysed looking only at the 926 people who were levels 4-5 on the self-reported EA engagement scale. (They could also be analysed looking only at people who reported actually doing specific things like taking the GWWC pledge or changing career plan largely motivated by EA principles.)
I think a sizable advantage of the interviews is that the responses were to open rather than fixed questions (only the ‘reasons why people’s level of interest changed’ question was open comment). Since the categories included in these questions were not very comprehensive, clear or consistent, the results are probably somewhat arbitrarily skewed towards particular categories, while ignoring others, and may not be easy to interpret. On the other hand, the survey results have the advantage of drawing from a much larger sample of people. More than sheer sample size, the fact that the survey respondents are probably a broader/more representative sample of EAs seems important. It’s not clear how representative the views of people who are thought to know about retention are, or whether there are individuals who know much about what is important for retention in EA.
As it stands, I’m not sure which source of information I prefer, but I think I’d strongly prefer the results from an EA Survey with better questions (for example, we could base survey options around the categories you identified in your interviews or our own qualitative data).
Reasons why people left EA
These are the reasons mentioned as to why people (known to the respondents) who were levels 4-5 engagement left EA (based on n=178 responses):
Barriers to higher involvement
Barriers to higher involvement are not (necessarily) the same as reasons to stop being involved, but comparing responses to this question to your table, we can see some overlap. (The results below are for level 4-5 EAs only).
Among the open comment responses, personal issues and being too busy, which may correspond to your last two categories, were also commonly mentioned, though unfortunately they weren’t offered as fixed category responses (in which case they probably would have received more responses).
One thing that stands out is that lack of EA friends was a much less commonly cited issue for engaged EAs than among less engaged EAs (see below):
This open comment data is for the whole sample, not only EAs who were level 4-5 on the engagement scale (since it was qualitative data which we analysed separately it would take a while to narrow it down to only highly engaged EAs). Still, it highlights that overall personal issues and people being too busy were commonly mentioned in the broader sample (despite not being included among the fixed options) and these plausibly correspond somewhat to the ‘life event’ and ‘burnout/mental health’ categories you mention.
Factors important for retention
These are the factors selected as being important for retention by level 4-5 EAs. Unfortunately the categories are very different to the other questions and the categories that came up in the interview so it's hard to compare.
Reasons for reduced interest
Below are the reasons (based on people’s qualitative comments) for their having less interest in EA than they did 12 months ago. Only ~18% of respondents reported that their level of interest had decreased, so these numbers are pretty low.
Labels for these categories are included in Appendix 1 of our post and pasted below.
I agree that psychological harms (intrinsically) matter and that the fact that some such harms are contingent on the harmed person having certain beliefs, attitudes or dispositions (i.e. their psychology) raises complicated questions.
That said, I don't think that a simple framework based around whether it is easier to minimise harm by changing the offending 'actions' (fwiw, it seems like this could include broader states of affairs) or the harmed person's psychology, will suffice.
We probably also need to be concerned with whether the harmed person's beliefs are true or false and whether their attitudes are fitting (not merely whether they are fortunate) (see Chappell, 2009).
For example, if Sam comments on Alex's post on the Forum and Alex experiences harm due to taking this in a certain way, it's probably important to know whether Alex's response is itself appropriate. (Obviously there are various complexities about how this might go: Alex might reasonably/unreasonably have true/false beliefs and have fitting/unfitting attitudes which result in appropriate/inappropriate responses, in any number of different combinations).
We might have non-consequentialist reasons to care about each of these things (i.e. not wanting people to have to form false beliefs or inappropriate attitudes, even if it would lead to fortunate outcomes if they did). A famous example of this concerns the possibility of adaptive preferences, i.e. it seems intuitively troubling if someone or some group who face poor prospects, form low expectations in light of this fact and are thereby satisfied receiving little (and less than they could in better circumstances).
But we might also have consequentialist grounds for not taking a naive approach based on asking whether it would be easier for Alex or Sam to change to reduce the harm caused to Alex. Whichever might seem easier in a particular case or set of cases, it seems reasonable to think there might be significant downstream costs to people having false beliefs or unreasonable responses. This is especially so given that, as you note, what incentives we establish here can encourage different 'affective ideologies' or different individual psychologies to propagate (especially since people have some capacity to 'tie themselves to the mast' and make it such that they could not cheaply change their attitudes, even if they otherwise would have been able to).
Thanks for sharing the results and thanks, in particular, for including the results for the particular measures, rather than just the composite score.
high writing scores predicted less engagement... Model (3) shows what is driving this: our measures of open-mindedness and commitment. It is unclear why this is. One story for open-mindedness could be that open-minded applicants are less likely to go all-in on EA socials and events and prefer to read widely. And a story for commitment could be that those most committed to the fellowship spent more time reading the extra readings and thus had less time for non-fellowship engagement.
Taking the results at face value, it seems like this could be explained by your measures systematically measuring something other than what you take them to be measuring (i.e. a construct validity problem). For example, perhaps your measures of "open-mindedness" or "commitment" actually just tracked people's inclination to acquiesce to social pressure, or something associated with it. Of course, I don't know how you actually measured open-mindedness or commitment, so my speculation isn't based on having any particular reason to think your measures were bad.
Of course, not taking the results at face value, it could just be idiosyncrasies of what you note was a small sample. It could be interesting to see plots of the relationship between some of the variables, to help get a sense of whether some of the effects could be driven by outliers etc.
The simplest thing you could do to improve this would be to measure engagement for all the people who applied and then re-estimate the correlation on the full sample, rather than the selected subsample... However, a lot of them are explicitly linked to participation in the fellowship which biases it towards fellows somewhat, so if you could construct an alternative engagement measure which doesn't include these, that would likely be better.
The other big issue with this approach is that this would likely be confounded by the treatment effect of being selected for and undertaking the fellowship. i.e. we would hope that going through the fellowship actually makes people more engaged, which would lead to the people with higher scores (who get accepted to the fellowship) also having higher engagement scores.
But perhaps what you had in mind was combining the simple approach with a more complex approach, like randomly selecting people for the fellowship across the range of predictor scores and evaluating the effects of the fellowship as well as the effect of the initial scores?
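To make the treatment-effect confound concrete, here's a toy simulation (all numbers invented for illustration): even if the writing score has no true effect on later engagement, selecting the top scorers into a fellowship that itself boosts engagement produces a positive score–engagement correlation across the full applicant sample.

```python
import random
import statistics

random.seed(1)

def pearson(xs, ys):
    """Pearson correlation computed from population covariance and stdevs."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

N = 1000
# Hypothetical applicant pool: writing scores with NO true effect on engagement.
scores = [random.gauss(0, 1) for _ in range(N)]
engagement = [random.gauss(0, 1) for _ in range(N)]

# Admit the top 20% of scorers; assume the fellowship itself boosts engagement.
cutoff = sorted(scores)[int(0.8 * N)]
TREATMENT_EFFECT = 1.5
engagement = [e + (TREATMENT_EFFECT if s >= cutoff else 0.0)
              for s, e in zip(scores, engagement)]

r = pearson(scores, engagement)
print(round(r, 2))  # clearly positive, despite score having no true effect
```

This is why randomising admission across the range of predictor scores (as suggested above) would help: it breaks the link between score and receiving the fellowship's treatment effect.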
Thanks for the info! I guess that even if you aren't applying such strong selection pressure yourselves in some of these years, it could still be that all your applicants are sufficiently high in whatever the relevant factor is (there may be selection effects prior to your selection) that the measure doesn't make much difference. This might still suggest that you shouldn't select based on this measure (at least while the applicant pool remains similar), but the same might not apply to other groups (who may have a less selective applicant pool).
It's hard to tell without seeing the data, but do you think you might have faced a range restriction problem here? i.e. if you're admitting only people with the highest scores, and then seeing whether the scores of those people correlate with outcomes, you will likely have relatively little variation in the predictor variable.
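A toy simulation of the range restriction point (with invented numbers): a predictor that genuinely correlates ~0.5 with an outcome in the full applicant pool shows a much weaker correlation when we only look at the top scorers, simply because their scores barely vary.

```python
import random
import statistics

random.seed(2)

def pearson(xs, ys):
    """Pearson correlation computed from population covariance and stdevs."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

N = 2000
# Hypothetical pool: predictor genuinely correlates ~0.5 with the outcome.
scores = [random.gauss(0, 1) for _ in range(N)]
outcomes = [0.5 * s + random.gauss(0, 0.87) for s in scores]

r_full = pearson(scores, outcomes)

# Now look only at "admitted" applicants: the top 10% of scorers.
cutoff = sorted(scores)[int(0.9 * N)]
admitted = [(s, o) for s, o in zip(scores, outcomes) if s >= cutoff]
r_restricted = pearson([s for s, _ in admitted], [o for _, o in admitted])

print(round(r_full, 2), round(r_restricted, 2))  # restricted r is much weaker
```

So a near-zero correlation among admitted fellows is compatible with the predictor working well across the whole applicant pool.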
Another one that plausibly applies to aid/charity within the global poverty field is that many donors under-estimate the difference in effectiveness between interventions relative to experts. (Caviola et al, 2020)
Yeah, I think it's very difficult to tell whether the trend which people take themselves to be perceiving is explained by there having been a larger amount of low-hanging fruit in the earlier years of EA, which led to people encountering a larger number of radical new ideas in those years, or whether there's actually been a slowdown in EA intellectual productivity. (Similarly, it may be that because people tend to encounter a lot of new ideas when they are first getting involved in EA, they perceive the insights being generated by EA as slowing down). I think it's hard to tell whether EA is stagnating in a worrying sense because it is not clear how much intellectual progress we should expect to see now that some of the low-hanging fruit is already picked.
That said, I actually think that the positive aspects of EA's professionalisation (which you point to in your other comment) may explain some of the perceptions described here, which I think are on the whole mistaken. I think in earlier years, there was a lot of amateur, broad speculation for and against various big questions in EA (e.g. big a priori arguments about AI versus animals, much of which was pretty wild and ill-informed). I think, conversely, we now have a much healthier ecosystem, with people making progress on the myriad narrower, technical problems that need to be addressed in order to address those broader questions.
Roughly speaking, I would predict a bunch of traits related to cognition (largely related to being more deliberative) and moral motivation (e.g. empathy) would likely be correlated. Another way to think about this would be as tracking the effectiveness and the altruism respectively.
On the moral motivation side: potentially higher Empathic Concern from the IRI (we tested this in the 2018 survey and nothing jumped out). I think it's possible that the Empathic Concern measures track too much of the purely intuitive or emotional side of empathy (see Bloom), rather than the pure construct of compassion, or being motivated to help people. It also seems possible that EAs (on average) place higher importance on morality in their self-identity. I also expect there to be some things which crosscut the cognitive and moral-motivational groups here, for example, systematising versus empathising and people versus things.
My sense is that these two sets of things, roughly speaking, each contribute to making people more inclined to be more utilitarian. So I would expect measures of utilitarian thinking, like the Oxford Utilitarianism Scale, to somewhat pick up on these. I don't think this implies anything particularly strongly about whether people who explicitly adopt a non-utilitarian philosophy can be EAs or whether there is any logical conflict, since I think we should distinguish between the psychological tendency to think in a utilitarian (or, more strictly speaking, consequentialist) way and explicit endorsement of the philosophy of utilitarianism or anything else (since most people don't explicitly endorse any moral philosophy).
Also, although people talk a lot about the Big Five and we have used that before, I think if we used the closely related HEXACO six-factor model, then Honesty-Humility would also likely be correlated.
The gender question and many of the other demographic questions were selected largely to ensure comparability with other surveys run by CEA.
That aside, I think your claim that open comment gender questions are "considered poor survey technique" is over-stated. The literature discusses pros and cons to both formats. From this recent article in the International Journal of Social Research Methodology:
One of the simplest ways to collect data on gender identity is to use an open text box (see Figure 2) which allows participants the freedom to describe their gender in whatever way they see fit while accommodating changing norms around acceptable terminology. Terms commonly used around gender evolve over time... It would therefore be misguided of researchers to attempt to find the most contemporary terminology and use it to the exclusion of all other terms. Research teams are also likely to find such a process difficult and frustrating (Herman et al., 2012). Thus, an open text box is certainly the most accommodating approach to a range of evolving terms to describe gender identity.

If open text boxes are used for research that intends to analyze by category, however, researchers will still ultimately be categorizing the gender identities in order to define groups for statistical analysis and groups to which the findings might be generalized... These decisions will also need to be made if researchers using a multiple-choice approach choose to provide a long list of as many gender identity terms as possible. This approach is a fine option, but researchers need to be cognisant that terminology that was in common use when a tool was published may no longer be current when research is conducted using that tool... Good arguments can be made for the value of participants being able to see the specific term for their gender identity among a list of possibilities, but even Herman’s and Kuper’s lists, published within the past decade, contain terms that are increasingly considered problematic and do not contain some terms that are more common today.

An approach which provides a smaller number of options for gender identity has benefits and drawbacks. Providing fewer categories inevitably forces gender minority participants to place themselves into categories that the researcher provides, but gives the advantage that the participant, not researcher, chooses the categories in which they will be included.
Thanks for the suggestion. We have considered it and might implement it in future years for some questions. For a lot of variables, I think we'd rather have most data from almost all respondents every other year, than data from half of respondents every year. This is particularly so for those variables which we want to use in analyses combined with other variables, but applies less in the case of variables like politics where we can't really do that.
Thanks for your feedback! It's very useful for us to receive public feedback about what questions are most valued by the community.
Your concerns seem entirely reasonable. Unfortunately, we face a lot of tough choices where not dropping any particular question means having to drop others instead. (And many people think that the survey is too long anyway implying that perhaps we should cut more questions as well.)
I think running these particular questions every other year (rather than cutting them outright) may have the potential to provide much of the value of including them every year, given that historically the numbers have not changed significantly across years. I would be less inclined to think this if we could perform additional analyses with these variables (e.g. to see whether people with different politics have lower NPS scores), but unfortunately with only ~3% of respondents being right-of-centre, there's a limit to how much we can do with the variable. (This doesn't apply to the diet measure which actually was informative in some of our models.)
If you are referring to the question I think you're referring to, then we really do mean that people should select up to one option in each column: one column for whichever of the options (if any) was the source of the most important thing you learned and one column for whichever of the options (if any) was the source of the most important new connection you made.
I don't say that many/most small donors have a "cool and unusual idea for donations that probably won't get funded otherwise." I say that Jonas may have a higher bar for this than I do, and this may partly explain where we disagree. I also said that I think that it could be the case that "many/most small donors (who are considering donating to specific charities) would do better to try to explore and evaluate these opportunities themselves." But that's only partly due to me (possibly) thinking that more small donors have cool and unusual ideas than Jonas does. It's doubtless also due to more substantive differences. For example, I also think that it may be more beneficial for many donors who are considering donating to specific charities to try to think about how to make those donations themselves, because it provides important sources of experimentation/information/donor-practice for the donor and the community, even if those donations don't meet whatever the bar is for "cool and unusual."
I think those benefits probably obtain in a lot of cases even where the donor is considering donating to "fairly well-established charities", because the donor is still at least thinking about donating to different charities and about donation in general, and the community is getting information about whether the community at large think that this or that fairly well-established charity is more promising, as well as about the extent to which the more well-established charities are better options than less well-established charities.
And as I mention in my comment, there are still other reasons that I think underlie the disagreement (not merely our conceptions of "cool and unusual" donors), which I discuss in the other threads.
These mostly seem like good things to try. It might be worth experimenting with a number of different ergonomic devices to see which work best for you (which work best seems to be very individual, anecdotally).
Regarding wrist braces: it's been a while since I looked into this, so I don't have a reference to hand and you may be more up to date than I am, but my recollection is that these were recommended for carpal tunnel syndrome, but not for RSI. Fwiw, I would guess that only wearing it at night probably avoids the theorised harms of wearing one (forcing your wrist into unnatural positions and/or leading to weakening of the area) and may still have some benefits, but you'd probably want to look into it yourself (as you may already have done).
Using your other hand for your phone where necessary also seems like a good idea, but I'd be careful about relying on this too much, in case you just injure that hand more too. I think I also made the mistake of trying to use my phone rather than my computer for things too much, despite using my phone being pretty bad for my hands too. It might be worth seeing whether you find a different (e.g. smaller) phone more ergonomic too, since I find that stretching across a large phone can be a strain.
Based on my own experience I would definitely recommend being very cautious about RSI, i.e. especially resting carefully, as well as investing in solutions like more ergonomic devices, voice control, reading different resources (e.g. about good posture and different solutions) and visiting physiotherapists and other specialists. I was largely unable to type or use a computer for 2-3 years due to RSI, and I attribute a lot of this to not having rested enough early on (despite the fact that I actually did reduce my activity quite dramatically almost immediately upon experiencing symptoms).
Another thing I would note is that although I think it's good to seek help from different experts, I would treat this very critically. I received completely conflicting, but entirely confidently expressed, diagnoses and recommendations from a number of different GPs, physiotherapists and consultant rheumatologists. Some of the literature I read myself also explicitly suggested that tendinopathies tended to be poorly understood by frontline medics, though I'm not in a position to evaluate whether that is the case (or at least true relative to other conditions). Some of the things which were recommended seem to have some evidence suggesting potential for harm (e.g. strengthening exercises, anti-inflammatories and immobilising wrist braces), so there are some grounds for caution.
One of the few things I would recommend that wasn't mentioned in Max's post, so far as I recall, and isn't mentioned in a lot of resources, was keeping your hands warm, but I see you mentioned that in your own comment. There also seems to be some evidence that nutrition can be relevant for tendon healing (assuming that your RSI is related to your tendons): see this review. The main things they point to are vitamin C, taurine, vitamin A, glycine, vitamin E and leucine.
I'm also happy to talk about this 1-1 if you like.
This [lottery winners feeling they should pick some specific charities to donate to the best of their abilities, rather than donating to EA Funds] could be good if the donors allocate the money better than EA Funds could!
This is certainly possible. But it also seems quite possible that the allocation made by a randomly selected donor (who thinks about it a bit more than they usually would, but also feels distinctive pressure to choose specific charities rather than delegate the decision to the funds, and maybe other pressures as well) is worse than the allocation made by lots of individual donors, some/many of whom decide they can't do better than to defer to the Funds.
That said, if many donor lottery winners turned out to have a bias towards making their own grants, and had a less good track record than EA Funds, that would convince me that your concern is probably right. But I think it's worth running a larger experiment before giving a lot of weight to these concerns.
I agree an "experiment" might be informative. I think we should assign these various concerns quite high weight before we run an experiment though (although I'd be happy to be talked into thinking that they are less likely than I currently think they are). Whether we should then run the experiment depends presumably on how great and how likely the possible benefits and costs seem to be, including how easily we think we could retrench the costs of the experiment if they turned out to be real (e.g. convince people that they shouldn't donate to the lottery after all and should instead be deferring or donating directly).
I think if we view this as an experiment (while granting that it may well lead to worse allocations of donations overall and reduce discourse and information quality for the community), that would make sense, but then the recommendation in the original post that most small donors donate to the EA lottery should be presented much more tentatively (making clear that this is an experiment that might lead to worse outcomes and will need to be re-evaluated in the future). This would reduce costs in the event that it turns out that it's actually better to encourage many donors to donate directly themselves, defer to the Funds, save to donate later etc.
If the worry is that smaller groups will have a harder time fundraising, I don't think this will be the case
This actually wasn't one of my concerns. It does seem pretty clear that donations would be allocated across a smaller number of donation targets, if they are decided by only a small number of lottery winners. (Historically, it seems that each winner has selected only 1-4 donation targets. It's less clear if we have as many as 200 winners, but that seems relatively unlikely in the near term). Donations being allocated to a much smaller number of donation targets than they would be if donors made their allocations separately could be better or it could be worse, it seems quite hard to tell.
Generally, it seems like there are a lot of considerations that would determine whether most small donors donating to the lottery should be expected to be a positive or negative move (mostly depending on whether the allocation made by a small number of randomly selected donors (influenced by certain conditions) is better than the allocation made by a larger number of individual donors (in different conditions), and which of these has the better influence on EA discourse and information, including depth of investigation, diversity of thought, novelty of experimentation etc.), whereas the original post seems to present the situation as being quite straightforward (based largely on the one argument about how the winner(s) will be in a better position to make decisions than those particular individuals would have been if they didn't win). That said, I'm sure you've thought about these questions more than me, so your intuitions about them are likely better tutored than mine.
I think most donors giving most of their donations through the donor lottery is more likely to improve than worsen this because:
If most EAs participate in the lottery, we will have many lottery winners (not just a single one), who can jointly cover a lot of ground. (And if only few EAs participate, we will still have lots of EAs making direct donations.)
I think the hypothetical state of affairs, where so many individual donors donate to the lottery that there are lots of lottery winners, is harder to intuitively evaluate than the closer counterfactuals where we have lots of individual donors or a much smaller number (~1-3) of lottery winners. One might think that, to the extent that the same basic dynamic applies of trading many individual donors evaluating charities for themselves for one donor spending a bit longer evaluating for themself, the situation where you have lots of winners (and even more people not donating for themselves) is not much better than the situation with just a small number of winners (and it may be proportionately worse). But I agree that the dynamics may well be different and non-linear, i.e. it could be that it's optimal to have 15 winners looking into things a bit deeper than they would have as individual donors (and all the other donors hardly looking into things at all), because this leads to the best balance of ideas evaluated at different levels of depth, and it's worse to have either fewer winners and more individual donors or more winners and fewer individual donors. But exactly what these dynamics are is unclear, and this seems like the kind of thing we'd want to know in order to know whether to recommend that donors in general donate to the lottery, rather than donate themselves, immediately defer to other grant-makers, or save for later.
We explicitly encourage people to continue to make direct donations with some of their budget, so if people agree with this post, we will continue to have lots of people thinking about their donation decisions (and potentially writing them up).
I think it's good that you encourage this, but I'm not sure that this is going to be much of a safeguard. It seems quite likely that if people donate much of their donations through the lottery, then even if they continue to donate a small amount directly, many will spend much less time/effort considering or writing up these direct donations. This seems the natural mirror image of the fact that the donor lottery is expected to make the one donor more invested in taking their time to make a decision: it should make everyone else less engaged in their (remaining) donation decisions. Personally, I am more convinced of the latter (negative) effect than I am of the positive effect, but you might think the asymmetry runs the other way. It's also worth considering that, when people are donating a smaller amount of residual donations just to keep themselves engaged with 'warm fuzzies' / invested in their donations / engaged with the current state of EA research, this might lead them to take a different approach to thinking about their donations, i.e. they might be more inclined to donate on a whim, in line with their 'warm fuzzies', confident that at least their EA lottery donation had high expected value. This would also lead to the loss of a lot of careful consideration of EA donation targets from a lot of EA donors.
I am pretty happy to trade shallow analysis for more deep analysis, as I think a lot of the shallow analyses will be similar to each other (so won't provide as much viewpoint diversity as multiple deeper analyses).
I think "deeper" definitely sounds better than "shallower." I'm not sure that's exactly what I'd expect to see in this case though.
It seems the EA lottery basically induces one person to think about their donation decision a bit more than they personally would have done counterfactually, while (probably) inducing a number of other people to think about their donation decisions less. (I'll leave aside the complication about trading even more individual donors for even more winners for now.) As I noted above, I think we may lose a lot of depth across lots of individuals, and gain a bit of depth for one individual, so it's not so clear to me, prima facie, which is better.
But another complication is that with lots of people donating individually you have a chance of getting a writeup from each of these. And you are probably disproportionately more likely to get a writeup out of the most informed and most likely-to-write-a-deep-and-valuable-writeup of these people. So you might actually get a deeper writeup (in expectation) out of lots of individual donors than out of a randomly selected lottery winner who is incentivised to write more in depth than they personally would have otherwise. Now, one could speculate that a separate virtue of the randomisation is that although you are less likely to get an especially in-depth writeup from one of the more informed donors, you are more likely to get a moderately in-depth writeup from a non-typical donor who wouldn't otherwise produce a writeup, so this might be better for viewpoint diversity. I don't know exactly how to weight these considerations, but these seem like the kind of things which would determine whether we should be encouraging more small donors to donate to the lottery or not.
The question of whether we get more viewpoint diversity from one (or very few) lottery winner(s) thinking somewhat more deeply and (possibly) producing a writeup, or from more people thinking more about their donations (and possibly producing writeups), seems pretty uncertain. I acknowledge that it's possible that the many individual donors might be more similar to each other and so produce less novel insight than the single lottery winner. It also seems quite possible that the one lottery winner thinking a bit more than they usually would doesn't really produce much more novel insight, and you get more diversity of thought from having more people think about things individually. This may also be the case if, similar to my point about depth in the previous paragraph, most of the diversity/novelty comes from a small number of highly novel thinkers, and randomly selecting a winner just gives a roughly average (non-novel, non-diverse) answer.
As an aside: in general, if we were thinking of setups to best promote EA discourse and information, I'm not sure the lottery setup would be among the ways we'd think of going about this. I may write up something brief about this separately.
some may come away thinking "I had a pretty cool and unusual idea for using my donations that probably won't get funded otherwise, but now I will give to the donor lottery instead." I would prefer that this person didn't participate in the lottery, and instead evaluate and support the novel opportunity they came up with. I think individual donors exploring such opportunities on their own is an important source of experimentation and viewpoint diversity in the EA community and it seems better for them to continue doing so instead of supporting the lottery.
Thanks! Given this acknowledgement, it's not clear where and to what extent we disagree. I assume that you think this description applies to a smaller number of donors than I do. Perhaps you have in mind a higher bar for the donor having a 'cool and unusual idea for donations that probably won't get funded otherwise' than I would, whereas I think it could be that many/most small donors (who are considering donating to specific charities) would do better to try to explore and evaluate these opportunities themselves (which I suspect would lead to lots of individuals evaluating lots of different opportunities, rather than a smaller number of random individuals investigating). I'll respond to the specific points in the threads replying to my original comment.
In any case, this makes me think that it might be valuable for more time to be spent working out and spelling out in what specific conditions donors would be well advised to give to the lottery and how donors can try to discern whether they would be best advised to donate to the lottery (and possibly donate a larger sum later), donate based on their specific evaluations now, just defer to other grant-makers now, or even defer to later donors/later versions of themselves (without donating to the current year lottery) by saving their money to donate later.
Although I'm glad that your comment now prominently displays one reason why it might be better for people to not donate to the lottery, I think the original post gives the very strong impression that most donors should be donating via the lottery, without discussion of these complexities. Since a lot of EAs seem very deferential, I worry that there's a large risk that a lot of donors will just defer to this recommendation without much consideration of whether there are reasons not to. (Historically there seem to have been a few cases where EAs have deferred en masse to apparently clear signals (e.g. regarding earning to give, not earning to give, ops work, EA direct work) and then there's had to be a reversal when it's pointed out that there are lots of nuances or exceptions that, for whatever reason, people didn't infer the first time around.)
I find the self-regarding case for donating (that you have equal or higher expected value, since you have a lower probability of winning a proportionately higher amount, and you might benefit from donating at scale) pretty convincing.
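The self-regarding expected-value argument can be sketched with toy numbers (my own illustrative figures, not from the original post): each donor's chance of winning is proportional to their share of the pot, so the expected amount of money they end up directing equals their donation.

```python
# Toy expected-value check for a donor lottery.
# Numbers are illustrative assumptions, not from the post.
pot = 100_000          # total lottery pot in dollars
my_donation = 1_000    # my contribution to the pot

# Probability of winning is proportional to my share of the pot.
p_win = my_donation / pot

# If I win, I direct the whole pot; otherwise I direct nothing.
expected_directed = p_win * pot

# The expected dollars I direct equal my donation, so the lottery is
# EV-neutral in money terms; any gain comes from the winner's extra
# research time and economies of scale, not from the randomisation itself.
print(expected_directed)  # 1000.0
```

This is why the debate above turns on the other-regarding effects (allocation quality, discourse, writeups) rather than on the expected dollar amounts, which the lottery leaves unchanged.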
I'm less sure about the other-regarding case for encouraging small donors in general to donate to the lottery, however, i.e. whether this leads to a better or worse allocation of donations.
You mention one reason why this might lead to better donations: the winner will be incentivised to spend more time thinking about their donation than they otherwise would. However, it seems like there are some reasons why the allocation made by the single winner might be worse than the allocation made by each of the individual donors' separate decisions. (I have not thought about donation lotteries very much so I am likely missing other considerations).
As you note, winning the donation lottery probably causes the individual donor to spend more time making their individual donation decision than they otherwise would. But it also likely leads to the individual donor allocating funds to a smaller number of donation targets than the individual donors to the lottery would have done had they donated separately, and probably leads to less money being donated to EA Funds than the individual donors would have done collectively. (I imagine that winning the donor lottery probably leads to people being less inclined to just donate the money to EA Funds, which might seem like a 'waste' of their win, which would signal that they don't think they can make good donation decisions. It may also lead them to wanting to make novel or idiosyncratic donations decisions, rather than donating for similar reasons.) It's pretty unclear to me that the individual winner (even with the advantage of spending somewhat more time on their decision than they otherwise would) would make a better allocation of the donations than would all the individual donors making their donation decisions separately (and potentially all acting a bit more deferentially/with different incentives than the lottery winner).
In terms of improving EA discourse and information it also seems unclear to me that the effect of one lottery winner thinking more about their donation decisions (and potentially writing it up) beats out the effect of all the other lottery donors thinking about their donation decisions (and potentially writing them up).
I agree this won't be an incentive to many EAs. So long as it serves as an incentive to some respondents, it still seems likely to be net positive though. (Of course, it's theoretically possible that offering the prize might crowd out altruistic motivations (1) (2) (3), but we don't have an easy way to test this and my intuition is that the overall effect would still be net positive).
I would hope that concerns about being less well placed to make the donation would not incentivise people to not take the EA Survey, just so that they don't risk winning the prize and making a sub-optimal donation. If the respondent doesn't feel comfortable just delegating the decision elsewhere, they could always decline the prize, in which case it could be given to another randomly selected respondent.
Thanks Alex! Yeh, due to the space constraints you mention, we're planning to run some questions (which mostly stay very similar across multiple years) only every other year. The same thing happened to politics and diet.
This is, of course, not ideal, since it means that we can't include these variables in our other models or examine, for example, differences in satisfaction with EA among people with different religious stances or politics, every other survey.
Thanks for explicitly mentioning that you found these variables useful. That should help inform discussion in future years about what questions to include.
That should work well for you this year then: this year you'll report how much you donated in 2019 (and how much you plan to donate in 2020). Next year you'll report how much you actually donated this year and how much you plan to donate overall next year.
Roughly speaking, there seem to be two main benefits and two main costs to making an anonymised dataset public. The main costs: i) time and ii) people being turned off of the EA Survey due to believing that their data will be available and identifiable. The main benefits: iii) the community being able to access information (which isn't included in our public reports) and iv) transparency and validation from people being able to replicate our results.
Unfortunately, the dataset is so heavily anonymised in order to try to reduce cost (ii) (while simultaneously increasing cost (i)), that it seems impossible for people to replicate many of our analyses (even with the public dataset), because the data is so heavily obscured, essentially vitiating (iv). We have considered, and are considering, other options like producing a simulated dataset for future surveys in order to allow people to complete their own analyses, if there were sufficient demand, but this would come at an even higher time cost. Conversely, it seems benefit (iii) can be attained, in the main, without releasing a public dataset, just by producing additional aggregate analyses on request (where possible).
Of course, we'll see how this system works this year and may revisit it in the future.
The questions about positive/negative influences were CEA requests (although we did discuss them together): I believe the rationale is that for positive influences, they were interested in the most important influences (and wanted to set a higher bar by preventing people indicating that more than three things had the “largest” influence on them), whereas for the possible negative influences, they were interested in anything which had a negative influence, not merely the largest negative influences.
Regarding donations: historically, we have always asked about the previous year’s income and donations (because these are known quantities) and then planned donations for the year the survey is run (since people likely won’t know this for sure, but it’s still useful to know, for broader context). Now that we launch the survey right at the end of the year, the difference between past and planned donations is likely less acute. Naturally, it would be ideal if we could ask for income and donation data for both the previous year and the present year, but we constantly face pressure to include other questions, while trying to maintain survey length, so we have had to leave out a lot of things. (This also explains why we had to cut the questions we had in previous years asking about ‘individual’ and ‘household’ figures, given that many people’s earnings/donations are part of a unit.)
There is research on the links between downward social mobility and happiness, however:
These empirical studies show little consensus when it comes to the consequences of intergenerational social mobility for SWB: while some authors suggest that upward mobility is beneficial for SWB (e.g. Nikolaev and Burns, 2014), others find no such relationship (e.g. Zang and de Graaf, 2016; Zhao et al., 2017). In a similar vein, some researchers suggest that downward mobility is negatively associated with SWB (e.g. Nikolaev and Burns, 2014), while others do not (e.g. Zang and de Graaf, 2016; Zhao et al., 2017)
This paper suggests that differences in culture may influence the connection between downward social mobility and happiness:
the United States is an archetypical example of a success-oriented society in which great emphasis is placed on individual accomplishments and achievement (Spence, 1985). The Scandinavian countries are characterized by more egalitarian values (Schwartz, 2006; Triandis, 1996, Triandis and Gelfand, 1998; see also Nelson and Shavitt, 1992)...A great cultural salience of success and achievement may make occupational success or failure more important markers for people’s SWB.
And they claim to find this:
In line with a previous study from Nikolaev and Burns (2014) we found that downward social mobility is indeed associated with lower SWB in the United States. This finding provides evidence for the “falling from grace hypothesis” which predicts that downward social mobility is harmful for people’s well-being. However, in Scandinavian Europe, no association between downward social mobility and SWB was found. This confirms our macro-level contextual hypothesis for downward social mobility: downward social mobility has greater consequences in the United States than in the Scandinavian countries.
This is, of course, just one study so not very conclusive.
I agree that if the first two premises were true, but the third were false, then EA would still be important in a sense; it's just that everyone would already be doing EA.
Just to be clear, this is only a small part of my concern about it sounding like EA relies on assuming (and/or that EAs actually do assume) that the things which are high impact are not the things people typically already do.
One way this premise could be false, other than everyone being an EA already, is if it turns out that the kinds of things people who want to contribute to the common good typically do are actually the highest impact ways of contributing to the common good, i.e. we investigate, as effective altruists, and it turns out that the kinds of things people typically do to contribute to the common good are (the) high(est) impact. [^1]
To the non-EA reader, it likely wouldn't seem too unlikely that the kinds of things they typically do are actually high impact. So it may seem peculiar and unappealing for EAs to just assume [^2] that the kinds of things people typically do are not high impact.
[^1] A priori, one might think there are some reasons to presume in favour of this (and so against the EA premise), i.e. James Scott type reasons, deference to common opinion etc.
[^2] As noted, I don't think you actually do think that EAs should assume this, but labelling it as a "premise" in the "rigorous argument for EA" certainly risks giving that impression.
This is true, although for whatever reason the responses to the podcast question seemed very heavily dominated by references to MacAskill.
This is the graph from our original post, showing every commonly mentioned category, not just the host (categories are not mutually exclusive). I'm not sure what explains why MacAskill really heavily dominated the Podcast category, while Singer heavily dominated the TED Talk category.
Novelty: The high-impact actions we can find are not the same as what people who want to contribute to the common good typically do.
It's not entirely clear to me what this means (specifically what work the "can" is doing).
If you mean that it could be the case that we find high impact actions which are not the same as what people who want to contribute to the common good would typically do, then I agree this seems plausible as a premise for engaging in the project of effective altruism.
If you mean that the premise is that we actually can find high impact actions which are not the same as what people who want to contribute to the common good typically do, then it's not so clear to me that this should be a premise in the argument for effective altruism. This sounds like we are assuming what the results of our effective altruist efforts to search for the actions that do the most to contribute to the common good (relative to their cost) will be: that the things we discover are high impact will be different from what people typically do. But, of course, it could turn out to be the case that actually the highest impact actions are those which people typically do (our investigations could turn out to vindicate common sense, after all), so it doesn't seem like this is something we should take as a premise for effective altruism. It also seems in tension with the idea (which I think is worth preserving) that effective altruism is a question (i.e. effective altruism itself doesn't assume that particular kinds of things are or are not high impact).
I assume, however, that you don't actually mean to state that effective altruists should assume this latter thing to be true or that one needs to assume this in order to support effective altruism. I'm presuming that you instead mean something like: this needs to be true for engaging in effective altruism to be successful/interesting/worthwhile. In line with this interpretation, you note in the interview something that I was going to raise as another objection: that if everyone were already acting in an effective altruist way, then it would be likely false that the high impact things we discover are different from those that people typically do.
If so, then it may not be false to say that "The high-impact actions we can find are not the same as what people who want to contribute to the common good typically do", but it seems bound to lead to confusion, with people misreading this as EAs assuming that the highest impact things are not what people typically do. It's also not clear that this premise needs to be true for the project of effective altruism to be worthwhile and, indeed, a thing people should do: it seems like it could be the case that people who want to contribute to the common good should engage in the project of effective altruism simply because it could be the case that the highest impact actions are not those which people would typically do.