To Grow a Healthy Movement, Pick the Low-Hanging Fruit
post by arikagan
This post is co-authored by Nick Fitz and Ari Kagan and is cross-posted from The Life You Can Save's blog.
We’ve all met people who will never be open to the ideas of effective altruism (EA) no matter the framing: your neighbor who thinks that doing good is purely a matter of personal preference or your uncle who argues it’s wrong to compare charities because they’re all doing good. Despite hours of discussion, no real progress is made. We’ve all also had the opposite experience: your friend who’s on board a minute into the conversation; the one who was excited to learn that others thought the same way that she already did. You were likely such a person yourself – you merely needed to learn of the movement to become a supporter.
Much of the debate about how to build the EA movement is focused on how to frame issues to convince people of the merits of the general EA worldview. Should we talk of giving as an opportunity or an obligation? Should we emphasize global health and poverty over artificial intelligence and animal ethics? Such discussions are vital, but crafting our message can only get us so far. Many potentially highly-engaged effective altruists (EAs) haven’t even heard of EA yet. If we knew who these folks were, we could grow the movement far more quickly and sustainably. It’s far more effective to prioritize identifying people who are already highly predisposed towards the tenets of effective altruism than it is to struggle with those who are a much harder “sell.”
In the “Awareness/Inclination” model developed by the Centre for Effective Altruism (CEA), Cotton-Barratt proposes that movement-building is made up of two elements: making people more aware of EA ideas and making people more inclined towards EA ideas. CEA notes that it’s much harder to increase inclination than awareness: “Our current way of resolving this is to focus our efforts on increasing the awareness of those who already have relatively high inclination.” This isn’t to say that we should be closed to people radically changing their beliefs (quite the opposite), but let’s prioritize the low-hanging fruit.
To do so, we need to find people who are highly inclined towards EA, yet currently unaware of EA ideas. Indeed, a forthcoming agenda from The Life You Can Save explicitly aims to target “demographics that are likely to be receptive to effective giving concepts relative to the general population.” We’ve all come to assume it’s the white, atheist, male developer in the Bay Area who listens to Sam Harris and reads LessWrong. Perhaps for good reason: the 2017 EA survey found that the current movement is 89% white, 81% atheist, 81% ages 20–35, and 70% male. But are these numbers representative of those receptive to effective altruism? Or do they simply reveal how EA has historically spread through homophilic social networks, software engineering teams, and elite universities?
In a recent study, we’ve explored these questions to identify who may already be receptive to EA. We ran an online study with 530 Americans to determine what predicts support for EA. After participants read a general description of EA, they completed measures of their support for EA (e.g., attitudes and giving behaviors). Finally, participants answered a collection of questions measuring their beliefs, values, behaviors, demographic traits, and more.
The results suggest that the EA movement may be missing a much wider population of highly-engaged supporters. For example, not only were women more altruistic in general (a widely replicated finding), but they were also more supportive of EA specifically (even when controlling for generosity). And whites, atheists, and young people were no more likely to support EA than average. If anything, being black or Christian indicated a higher likelihood of supporting EA.

Moreover, the typical stereotype of the “EA personality” may be somewhat misguided. Many people, both within and outside the community, view EAs as cold, calculating types who use rationality to override their emotions: the sort of people who can easily ignore the beggar on the street. Yet the data suggest that the more empathetic someone is (in both cognition and affect), the more likely they are to support EA. Importantly, another key predictor was the psychological trait of “maximizing tendency,” a desire to optimize for the best option when making decisions (rather than settle for something good enough). That is, it’s not enough to just care about others, or only to have a tendency to optimize: when a person scores high on both empathy and maximizing, they’re much more likely to endorse EA. We set out to increase the return on investment of movement-building work; in doing so, we discovered an underexplored, much more diverse group interested in EA, and found that the EA personality may be somewhat misunderstood. For more on these results, see this brief summary.
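The empathy-by-maximizing pattern described above is what statisticians call an interaction effect: support rises sharply only when both traits are high. As a minimal sketch of how such a model might be fit (using hypothetical simulated data and made-up variable names, not the study's actual dataset or analysis code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical simulated data, NOT the study's dataset.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "empathy": rng.normal(size=n),
    "maximizing": rng.normal(size=n),
})
# Simulate EA support that rises mainly when BOTH traits are high.
df["segs"] = (0.2 * df["empathy"] + 0.2 * df["maximizing"]
              + 0.5 * df["empathy"] * df["maximizing"]
              + rng.normal(scale=1.0, size=n))

# "empathy * maximizing" expands to both main effects
# plus their interaction term.
model = smf.ols("segs ~ empathy * maximizing", data=df).fit()
print(model.params["empathy:maximizing"])  # interaction coefficient
```

A positive, significant interaction coefficient is the pattern consistent with "caring alone or optimizing alone isn't enough; the combination is what predicts support."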
Is movement building the right approach?
Most agree that the movement needs more highly committed supporters, but the question is how to acquire them. There are concerns that movement building may lead to “movement dilution,” the worry “that an overly simplistic version of EA, or a less aligned group of individuals, could come to dominate the community, limiting our ability to focus on the most important problems.” To avoid this problem, some suggest that we should instead focus on increasing the commitment of existing EAs. While neither focus precludes the other, traditional movement-building approaches may have thus far primarily attracted low-alignment supporters, making them less valuable. Spreading ideas through the mass media is a low-fidelity way to communicate, and framing EA to better convince people may only persuade those who aren’t closely aligned. The approach we advocate here sidesteps these worries: evidence-based movement-building can help us identify highly-aligned supporters. The minimal effort required to increase the awareness of someone who is already highly inclined may immediately result in a high level of contribution to the movement. The full discussion is outside the scope of this post, but this trade-off merits more evaluation.
Where do we go from here?
This work is just the tip of the iceberg, and there’s much more to sort out. Not enough work has been done to craft evidence-based models of sustainable movement-building. Some organizations are starting to build these tools: Students for High-Impact Charity (SHIC) has started to “collect a variety of metrics to find students most inclined to engage with effective charity in the long-term” and the Giving Game Project is also collecting survey data toward a similar end. Once we have enough data on key traits predicting openness to the ideas of EA, we should work to find groups of people that are high in these traits and focus our efforts where there will be greater return. We’d love for this to snowball in the community as the EA movement incorporates data-driven decision-making more generally.
If the EA movement better understands who may already be interested in what it has to say, then it will be able to grow sustainably at a much faster rate. This may seem obvious, but it is not the current approach. This could change not only who we try to engage with effective altruism, but how we hire people, how we run fundraising campaigns, and more. It’s time to understand who tends to support effective altruism, and why. By embracing evidence-based approaches to building the evidence-based altruism movement, we can do even more good with our resources.
If you're interested in learning more about the study, we welcome emails at email@example.com and firstname.lastname@example.org.
Comments sorted by top scores.
comment by Denise_Melchin · 2018-06-06T22:38:18Z
I’m really curious which description of EA you used in your study; could you post that here? What kind of attitudes towards EA did you ask about?
I can imagine there might be very different results depending on the framing.
My take on this is that while many more people than now might agree with EA ideas, fewer of them will find the lived practice and community to be a good fit. I think that’s a pretty unfortunate historical lock-in.
↑ comment by Peter Wildeford (Peter_Hurford) · 2018-06-06T23:41:17Z
> I’m really curious which description of EA you used in your study, could you post that here? What kind of attitudes towards EA did you ask about?
+1. There's a big gap, I'd guess, between "your dollar goes further overseas" and "we must reduce risk from runaway AI".
> while many more people than now might agree with EA ideas, fewer of them will find the lived practice and community to be a good fit. I think that’s a pretty unfortunate historical lock in
Serious question: Could we start a new one?
↑ comment by arikagan · 2018-06-07T15:14:04Z
As Nick said, it would be wonderful to see follow-up studies here that try to flesh out these different aspects. We don't think we're covering everything in EA (although the description Nick posted below is from effectivealtruism.org, so it seemed like a decent first attempt). But that certainly seems correct: you could get very different answers to "who likes extreme altruism?", "who likes AI safety?", etc.
The community question is a particularly interesting one because the community might be more of a historical artifact than a necessary trait of the movement. There could be people who would be a perfect fit for the ideas of EA (however defined: x-risk, donating 50%, etc.), but who still might not like the current community. How to actually act on that finding would be a different question, but it seems like it would be worth knowing.
↑ comment by NickFitz · 2018-06-07T11:14:36Z
Thanks both, great point. We focused the description in this study on the effective giving and career choice aspects of EA, and the results may well be different depending on the framing -- it'd be worth replicating with something like x-risk. Here's the full description (built from ea.org):
"What is Effective Altruism? Thinking carefully about how to do good. Effective altruism is about answering one simple question: how can we use our resources to help others the most? Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on.
Most of us want to make a difference. We see suffering, injustice and death, and are moved to do something about them. But working out what that ‘something’ is, let alone doing it, is a difficult problem.
Which cause should you support if you really want to make a difference? What career choices will help you make a significant contribution? Which charities will use your donation effectively? If you don’t choose well, you risk wasting your time and money. But if you choose wisely, you have a tremendous chance to improve the world.
Effective altruism considers tradeoffs like the following: Suppose we want to fight blindness. For $40,000 we can provide guide dogs to blind people in the US. Or for $20 per patient, we can pay for surgery reversing the effects of trachoma in Africa (a disease which causes blindness). If people have equal moral value, then the second option is more than 2,000 times better than the first."
comment by Catherine Low (cafelow) · 2018-06-06T23:38:58Z
Thanks Ari for your post.
This is a very interesting and important research question. I also believe that EA ideas can appeal to people far beyond the current demographic of EAs (which I think is strongly influenced by founder effects).
Are you able to share the details of your SEGS scale? I think the details of the scale would be interesting. I can see that you have a high correlation between empathy and SEGS. In particular, I am wondering what the chances are that generally altruistic people are choosing the high-SEGS answers because they look like the most empathetic answers, even if the person isn't particularly effective with their altruism in general; you may have found a way to tease this out.
Also, I can't seem to click the links to the EQ and CRT scales etc. in your PDF (that might just be me), but a list of links would be great!
↑ comment by NickFitz · 2018-06-07T11:39:06Z
Thanks for this. The SEGS consisted of seven items on a 7-point Likert agree/disagree scale: (1) I am interested in Effective Altruism, (2) I would like to learn more about Effective Altruism, (3) I support the Effective Altruism movement, (4) I would share information about Effective Altruism with people in my network, (5) I identify as an "effective altruist," (6) I would like to meet others who support Effective Altruism, and (7) I will donate my money based on Effective Altruism. We also measured a few more-behavioral outcomes, e.g., a windfall donation task (in which participants allocated money between Deworm the World, Make-A-Wish, a local choir, and keeping it for themselves) and willingness to sign the GWWC pledge. For the SEGS × Empathy relationship, we controlled for past giving behavior to try to tease that out.
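For readers who want to work with a multi-item scale like this, here is a minimal sketch of how such a composite might be scored and checked for internal consistency. The data below are simulated and the scoring choice (a simple mean of items) is a common convention, not a claim about how the study actually computed SEGS:

```python
import numpy as np

# Hypothetical data: 100 participants x 7 Likert items (1-7),
# simulated from a shared latent "support" factor plus noise.
rng = np.random.default_rng(1)
latent = rng.normal(4, 1, size=(100, 1))
noise = rng.normal(0, 0.8, size=(100, 7))
items = np.clip(np.round(latent + noise), 1, 7)

# Composite score: mean of the seven items per participant,
# which keeps the score on the original 1-7 scale.
segs = items.mean(axis=1)

# Cronbach's alpha, a standard internal-consistency check:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total)
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars / total_var)
print(round(alpha, 2))
```

With items driven by one shared factor, alpha comes out high; a low alpha would suggest the items are not measuring a single underlying construct.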
Ah yes, the links to the scales don't appear to work in the PDF, here are open-access versions:
comment by Khorton · 2018-06-06T22:58:36Z
This is so interesting! I'd love more details about your methods. For example, for the different identifiers (black/white, Christian/atheist), how many people from each group were surveyed? How was the survey group recruited? How significant were the results, and what were the effect sizes? Any extra details would be helpful so I know how much to update.
comment by SiebeRozendal · 2018-06-07T10:10:53Z
I really admire that you did a study about this, but I think the study shows much less than you claim. First of all, you studied support for effective giving (EG), which is different from effective altruism as a whole. I would expect at least the following three factors to differ between EG and EA:
- Support for cause impartiality, both moral impartiality (valuing each being according to their innate characteristics, like sentience or intelligence, rather than personal closeness) and means impartiality (being indifferent between different means to an end, e.g. donating money or choosing a career with direct impact)
- Dedication. I believe that making career changes or pledging at least 10% of your income is quite a high bar, and far fewer people would be inclined to do that.
- Involvement in the community. As you wrote, the community is quite idiosyncratic. Openness to (some of) its ideas does not imply people will like the movement.
Of course, none of this implies that the study is worthless, that getting people to donate their 1 or 2% more effectively is useless, or that we shouldn't try to make the movement more diverse and welcoming (if this can be done without compromising core values such as epistemic rigor). I think there is a debate to be had about how to differentiate effective giving from EA as a whole, so that we can decide whether or not to promote effective giving separately and, if so, how.
↑ comment by arikagan · 2018-06-07T14:58:56Z
Thanks Siebe. While I certainly agree that we don't address the most extreme form of effective altruism, I don't think the study is actually as focused on narrow effective giving as you suggest. We used that language in the original write-up because we wanted it to be accessible to a non-EA audience. But if you look at the language of the actual description (Nick posted it above), we took it from effectivealtruism.org, and it focuses pretty broadly on trying to do good, not just on donating.
But as we mention, this is just the tip of the iceberg; I don't think this research is at all the end of the story. We've been working on a follow-up study that includes cause neutrality, but it would be great to see people study similar questions on more extreme forms of effective altruism, and maybe even include an element of the community.
↑ comment by NickFitz · 2018-06-07T11:42:15Z
Hi Siebe - it's definitely worth distinguishing effective giving, career choice, x-risk, etc. There's likely a whole host of factors that differ between them. To your point (and Peter's question above), it's worth sorting out how we handle this differentiation.