Currently doing social movement and protest-related research at Social Change Lab, an EA-aligned research organisation I've recently started.
Previously, I completed the 2021 Charity Entrepreneurship Incubation Program. Before that, I was the Director & Strategy lead at Animal Rebellion + in the Strategy team at Extinction Rebellion UK, working on movement building for animal advocacy and climate change.
We thought about doing this but ruled it out as there would be a pretty clear bias e.g. the people who are most likely to hear about Just Stop Oil are people who are climate-conscious already, and are therefore more susceptible to positive shifts. I think we did do this informally, and did find a positive correlation between knowledge of Just Stop Oil and the constructs, but I don't think it's particularly robust.
Thinking out loud but maybe one way to control for this might be doing this within groups of people who answered the same to "How concerned are you about climate change" in the first survey, although this might make our sample sizes quite small / no longer representative.
I'm probably missing something, but doesn't the graph show OP is under-confident in the 0-10 and 10-20 bins? e.g. those data points are above the dotted grey line of perfect calibration, whereas the 90%+ bin is far below?
I totally agree - like I said above, I don't think paying above market rate is necessarily erroneous, but I was just responding to Khorton's question of how many EA orgs actually paid above market rate. And as you point out, attracting top talent to tackle important research questions is very important, and I definitely agree that this is the main perk of paying higher salaries.
In the case of research, I also agree! Academic salaries are far too low, and benchmarking to academia isn't even necessarily the best reference class (as one could potentially do research in the private sector and get paid much more).
Hey Stefan, thanks again for this response - I'll reply with the attention it deserves!
I think there are non-trivial numbers of highly committed effective altruists - who would make very careful decisions regarding what research questions to prioritise and tackle, and who would be very careful about hiring decisions - who would not be willing to work for a low salary.
I definitely agree, and I talk about this in my piece as well e.g. in the introduction I say "There are clear benefits e.g. attracting high-calibre individuals that would otherwise be pursuing less altruistic jobs, which is obviously great." So I don't think we're in disagreement about this, but rather I'm questioning where the line should be drawn, as there must be some considerations to stop us raising salaries indefinitely. Furthermore, in my diagrams you can see that there are similarly altruistic people that would only be willing to work at higher salaries (the shaded area below).
Conversely, I think there are many people who, e.g. come from the larger non-profit or do-gooding world would be willing to work for a low salary, but who wouldn't be very committed to effective altruist principles.
This is an interesting point and one I didn't consider. I find this slightly hard to believe as I imagine EA as being quite esoteric (e.g. full of weird moral views), so I struggle to imagine many people would be clamouring to work for an organisation focused on wild animal welfare or AI safety when they could work on an issue they cared about more (e.g. climate change) for a similar salary.
So I don't think we have any particular reason to expect that lower salaries would be the most effective way of ensuring that decisions about, e.g. research prioritisation or hiring are value-aligned. That is particularly so since, as you notice in the introduction, lower salaries have other downsides.
Again, I would agree that it's not the most effective way of ensuring value alignment within organisations, but I would say it's an important factor.
For instance, in research on the general population led by Lucius Caviola, we found a relatively weak correlation between what we call "expansive altruism" (willingness to give resources to others, including distant others) and "effectiveness-focus" (willingness to choose the most effective ways of helping others). Expansive altruism isn't precisely the same thing as willingness to work for a low salary, and things may look a bit differently among potential applicants to effective altruist jobs - but it nevertheless suggests that willingness to work for a low salary need not be as useful a costly signal as it may seem.
This was actually really useful for me and I would definitely say I was generally conflating "willingness to work for a lower salary" with "value-alignment". I've probably updated more towards your view in that "effectiveness-focus" is a crucial component of EA that wouldn't be selected for simply by being willing to take a lower salary, which might more accurately map to "expansive altruism".
For these reasons, I think it's better for EA recruiters to try to gauge, e.g. inclinations towards cause-neutrality, willingness to overcome motivated reasoning, and other important effective altruist traits, directly, rather than to try to infer them via their willingness to accept a low salary - since those inferences will typically not have a high degree of accuracy.
I agree this is probably the best outcome and certainly what I would like to happen, but I also think it's challenging. Posts such as Vultures Are Circling highlight people trying to "game" the system in order to access EA funding, and I think this problem will only grow. Therefore I think EA recruiters might face difficulty in discerning between 7/10 EA-aligned and 8/10 EA-aligned, which I think could be important on a community level. Maybe I'm overplaying the problem that EA recruiters face and it's actually extremely easy to discern values using various recruitment processes, but I think this is unlikely.
Thanks for the correction - I'll edit this in the comment above as I agree my phrasing was too weak. Apologies as I didn't mean to underplay the significance of the pay cut and financial sacrifice yourself and others took - I think it's substantial (and inspiring).
Yeah this is a useful way of thinking about this issue of market rate, so thanks for this! I guess I think it's true for some people, and potentially most people, that they could earn more in non-EA orgs than in EA roles, but I also think it's context-dependent.
For example, I've spoken with a reasonable number of early career EAs (in the UK) for whom working at EA orgs is actually probably the highest-paying option available to them (or very close), relative to what they could reasonably get hired for. So whilst I think it's true for some EAs that EA jobs offer less* pay relative to their other options, I don't think it's universal. I can imagine you might agree, so the question might be: how much of the community does this represent, and is it uniform? So maybe to clarify, I think that EA orgs are paying more than I would expect for certain skillsets, e.g. junior-ish ops people, rather than across the board.
Ah yes that's definitely fair, sorry if I was misrepresenting RP! I wasn't referring to intra-organisation when I made that comment, but I was thinking more across organisations like The Humane League / ACE vs 80K/CEA.
Yeah I think this is a good question. I can think of several of the main EA orgs that do this, in particular for roles around operations and research (which aren't generally paid that well in the private sector, unless you're doing it at a FAANG company etc). In addition, community-building pays much higher than other non-profit community building (in the absence of much private sector community building).
Some of these comparisons also feel hard because people often do roles at EA orgs they weren't doing in the private sector e.g. going from consulting or software development to EA research, where you would be earning less than your previous market rate probably but not the market rate for your new job.
There's one example comparison here, and to clarify, I think this is most true for more meta/longtermist organisations, as salaries within animal welfare (for example) are still quite low IMO. I can think of 3-4 different roles within the past 2 months that pay above market rate (in my opinion), some of which I'll list below:
80K paying £58,400 for an operations specialist with one year of experience doing ops. For context, a friend of mine did project management for 2-3 years at a City law firm and was making £40-50k
Rethink Priorities paying $65,000 or £52k for a research assistant. This definitely feels higher than academic research assistants and probably private sector ones too (although not sure what a good reference class is)
Open Phil paying $100,000+ for an operations associate.
CEA expression of interest for a people operations specialist (sounds like a somewhat junior role, I could be wrong) - salary of £56-68,000. Similar to the 80K private sector comparison, I think market rate for this would be closer to £40k for a junior role.
Thanks for the thoughtful engagement Stefan and kind words! I'm going to respond to the rest of your points in full later but just one quick clarification I wanted to make which might mean we're not so dissimilar on our viewpoints.
As far as I understand, you are effectively saying that effective altruists should pay low salaries
Just want to be very clear that low salaries are not what I think EA orgs should pay! I tried quite clearly to use the term 'moderate' rather than low because I don't think paying low salaries is good (for reasons you and I both mentioned). I could have been more explicit, but I'm talking about concerns with more orgs paying $150,000+ (or 120%+ of market rate, as a semi-random number) salaries on a regular basis, not paying people $80,000 or so. Obviously exceptions apply, like I mentioned to Khorton below, but it should be at least at the point where everyone's (and their families'/dependents') material needs can be met.
Do you have any thoughts on this? Because surely at some point salaries become excessive, have bad optics or counterfactually poor marginal returns but the challenge is identifying where this is.
(I'll update in my main body to be clearer as well)
Thanks for raising this and I totally agree with your point. I think I could have been clearer in two aspects of this:
Exceptions obviously apply. I'm not advocating for everyone getting paid a uniform amount or it being decided independent of personal circumstances. If people have circumstances or dependents which mean they need additional income, they should obviously get it. So even with the 'moderate' salaries at EA orgs I spoke about, I think the people in both of the examples you gave should still get paid what they need.
Additionally, I'm not talking about paying everyone "low" salaries, but rather "moderate" instead of potentially "high" in the future. I think I could have been more explicit, but I'm talking about concerns with more orgs paying $150,000+ salaries, not paying people $80,000 or so. Obviously exceptions apply, like I mentioned above, but it should be at least at the point where everyone's (and their families'/dependents') material needs can be met.
Some things from EA Global London 2022 that stood out for me (I think someone else might have mentioned one of them):
An email to everyone promoting Will's new book (on longtermism)
Giving out free bookmarks about Will's book when picking up your pass.
These things might feel small but considering this is one of the main EA conferences, having the actual conference organisers associate so strongly with the promotion of a longtermist (albeit yes, also one of the main founders of EA) book made me think "Wow, CEA is really trying to push longtermism to attendees". This seems quite reasonable given the potential significance of the book, I just wonder if CEA have done this for any other worldview-focused books recently (last 1-3 years) or would do so in the future e.g. a new book on animal farming.
Curious to get someone else's take on this or if it just felt important in my head.
Other small things:
On the sidebar of the EA Forum, there are three recommended articles: Replacing Guilt, the EA Handbook (which, as you mentioned here, is mostly focused on longtermism) and The Most Important Century by Holden. Again, essentially 1.5 longtermist texts to <0.5 from other worldviews.
As the main landing page for EA discussion, this also feels like a reasonably significant nudge in a specific direction.
On a somewhat related point, I do generally think there are far fewer 'thought-leaders' for global health or animal-inclusive worldviews relative to the longtermist one. For example, we have people such as Holden, Ben Todd, Will MacAskill etc. who all produce reasonably frequent and great content on why longtermism is compelling, yet very few (if anyone?) are doing content creation or thought leadership on that level for neartermist worldviews. This might be another reason why longtermist content is much more frequently signposted too, but I'm not 100% sure on this.
[FWIW I do find longtermism quite compelling, but it also seems amiss to not mention the cultural influence longtermism has in certain EA spaces]
I've just written a blog post summarising some of our recent research into the effectiveness of protest movements, plus some additional nuance and commentary that doesn’t fit neatly into external articles we recently published. Main things covered:
I think it would be great to have some directory of attempted but failed projects. Often I've thought "Oh I think X is a cool idea, but I bet someone more qualified has already tried it, and if it doesn't exist publicly then it must have failed" but I don't think this is often true (also see this shortform about the failure of the efficient market hypothesis for EA projects). Having a list of attempted but shut down (for whatever reason) projects might encourage people to start more projects, as we can really see how little of the idea space has been explored in practice.
There are a few helpful write-ups (e.g. shutting down the longtermist incubator), but in addition to detailed post-mortems, I would be keen to see a low-effort directory (AirTable or even Google Sheets?) of attempted projects, who tried, contact details (with permission), why it stopped, etc. If people are interested in this, I can make a preliminary spreadsheet that we can start populating, but other recommendations are of course welcome.
This is super interesting, thanks for doing this! One question: how did you decide to put the tags in the buckets you did? I'm wondering as some things seem fairly arbitrary, and by drawing different boundaries you might actually get quite different results. For example, I was just checking out your tags script and saw that you have things like nuclear security, nuclear winter, etc. in "Catastrophic risks" rather than in "long_term_risks_and_flourishing", although it could also fit in the latter category. I think this is especially true for these two categories, as most things in "catastrophic risks" would fit neatly into "long-term risks", e.g. biosecurity, great power conflict, etc. If this were the case, the number of existential risk-related Forum posts would be much higher than you indicate (although the trends might still be similar, even if the absolute values are different).
I appreciate this might be an annoying nitpick as the categories will always be subjective, but thought this might change the results somewhat.
(P.S. I tried running an amended version of this to check for myself, but had some problems with your code (apparently tags has no attribute tag_types). Agreed with David below though, it would be nice to have a dynamic version so others could more easily re-run your code with slightly varied tagging.)
One thing I can never figure out is where the missing Open Phil donations are! According to their own internal comms (e.g. this job advert) they gave away roughly $450 million in 2021. Yet when you look at their grants database, you only find about $350 million, which is a fair bit short. Any idea why this might be?
I think it could be something to do with contractor agreement (e.g. they gave $2.8 million to Kurzgesagt and said they don't tend to publish similar contractor agreements like these). Curious to see the breakdown of the other approx. $100 million though!
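For anyone who wants to check the gap themselves, a minimal sketch of the sanity check I mean: export the grants database as CSV and sum the amounts by year, then compare against the stated ~$450 million. The column names ("Amount", "Date") and MM/YYYY date format here are my assumptions, not Open Phil's actual schema - check the headers of the real export before running.

```python
import csv
import io
from collections import defaultdict

# Stand-in for a real CSV export; column names are hypothetical.
SAMPLE_CSV = """Grant,Amount,Date
Example grant A,"$2,800,000",11/2021
Example grant B,"$1,200,000",03/2021
Example grant C,"$500,000",06/2020
"""

def totals_by_year(csv_text):
    """Sum grant amounts per year from a CSV export of a grants database."""
    totals = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        # Strip currency formatting like "$2,800,000" -> 2800000
        amount = int(row["Amount"].replace("$", "").replace(",", ""))
        year = row["Date"].split("/")[-1]  # assumes MM/YYYY dates
        totals[year] += amount
    return dict(totals)

print(totals_by_year(SAMPLE_CSV))
```

If the published rows for 2021 sum to ~$350 million, the remaining ~$100 million is presumably unpublished items like the contractor agreements mentioned above.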
I think if you're looking to hire someone for this role, you might want to provide a lot more information about the role (expected hours, responsibilities, start date, salary, etc.). Currently there's virtually no information provided and I wouldn't expect you would find great and qualified candidates - which would be a shame given how useful this project could be!
Sorry I never replied but here's a very quick thing on what I thought our main disagreement was but maybe we're closer than I initially thought! I interpreted your conclusion to be something along the lines of "We shouldn't do any divestment as other approaches are less risky and more effective" but your final paragraph above is basically the view I hold too:
I do think there should be people trying divestment in the animal advocacy context and seeing how it goes, but unless the results proved us wrong, based on the arguments in this report, I wouldn't recommend a big shift of resources towards it.
Basically I totally agree, in that we should have a couple of campaigns/organisations try divestment in a somewhat rigorous way to get some good learnings out of it, before deciding whether to stop it completely or scale up. I just think when I read your sentence:
We think that, given the existing evidence, many existing animal advocacy campaigns will be more effective and less risky than divestment.
I interpreted this as saying we shouldn't do or invest in it at all! Not sure if it's just me, but I think adding what you said above ("some people should try it with some limited resources to test it properly") to the conclusion would really help with understanding your final recommendation. Thanks for all your work on this again - super interesting!
Thinking about your (b), encouraging pre-orders from young people who might switch to high-impact careers, I have a few preliminary thoughts:
A lot of young (and older) people are interested in climate change, with some (a lot?) of that being driven by concern for the future, and the lives of future generations.
Due to that, I think that climate-interested folks are a particularly good audience for this book, as they're already a) thinking altruistically, b) somewhat concerned about the future and c) generally young and able to pivot their career.
If we think these assumptions are true, the question is what forms of media young/climate-interested people engage with, and generally how we reach them? Some ideas:
I would say a lot of people read the Guardian (and Open Phil sponsors some content there), so it could be worth trying the Guardian environment section with sponsored advertisements. This could also be true for the Guardian more generally, as I assume left-leaning people will be more interested in this relative to the average member of the public.
You could also advertise via their podcasts, e.g. Science Weekly
Our World in Data social media channels and website (although I'm sure you've already got this covered as Max Roser seems very supportive)
Loads of Instagram advertising (I think this is more the young climate-concerned demographic relative to Facebook)
Climate podcasts: TIL Climate, How to Save a Planet, For What It's Earth, Outrage + Optimism, The Climate Question
Extinction Rebellion UK has a mailing list of 200,000-300,000 reasonably engaged and passionate climate folks, and there's a small (5-10%) chance I could get Will's book featured on it (message me if you're interested in this)
Very speculative: Go to some major festivals in the UK this summer (Glastonbury, Shambala, Green Gathering, etc.)
I'm not sure if it's exactly a feature suggestion so much as a concern highlighted here that I agree with, which is basically: the number of Forum users seems to be growing quite a lot (congrats!), with many more posts, so some posts that might be high-effort slip under the radar or disappear quite quickly (see Ian David Moss' comment). Is there anything the Forum team is doing to mitigate this (someone suggested a higher density of posts on the front page), or otherwise any thoughts on this topic?
Other possible solutions (some already mentioned, and I'm not sold on any of them) could be:
Sub-communities like Reddit
Greater emphasis on people using the Shortform feature for short or link posts rather than the main page
EA Librarian or Q&A things could go into a different section (somewhat like a Shortform? I'm quite unsure about this though)
Thanks Ben - this is much appreciated! Agreed there's still lots more work to be done to find these plausibly high-impact options, so fingers crossed we can make some decent headway. Likewise to you on Effective Self-Help, great to see your research on that front!
Thanks Emily, much appreciated! I also really enjoyed your recent work on interventions that influence animal product consumption so thanks for doing that.
For methodology, that's a good point and definitely something we should include more information on so will do that for an updated version in the near future. Not sure if you saw it but we do have a database of resources we compiled whilst doing this if you want to see the inputs.
On how we actually found the included pieces, this was a mix of methods, and we didn't do it in a systematic way akin to your work, although we might consider doing this in the future (suggestions welcome if you think this is a good idea!). As we were mainly doing this for our own understanding and getting the lay of the land, we didn't think it was too crucial to do a systematic analysis (and our advisors also suggested this). But a few of the ways we did find papers:
Tools such as Google Scholar, ResearchRabbit and Elicit that help find studies adjacent to your question or other studies you're interested in. We would use keyword searches such as "protest outcomes", "protest effectiveness", "impacts of protest", etc. for the outcomes, and similar variations of keywords for the success factors work. This is how we found the majority of the useful studies.
We looked at the research groups and prior publications of basically all the academics we found using the above method, which was especially useful to find newer papers and other academics who were newer in the field doing this work (e.g. just joined a relevant research group)
We interviewed 5 academics who had some influential papers in the various fields and asked them to recommend the most important/key papers in the field, which was useful to make sure we didn't miss anything crucial (we probably found 3-4 additional papers this way).
Someone else had conducted a systematic analysis (sadly not public) on an adjacent sub-field within social movements so we found some useful papers this way too.
Quite roughly, I'll outline some of the criteria we used:
We only included studies that utilised protest for things other than regime change (e.g. we didn't include Erica Chenoweth's famous work on toppling dictators as this isn't really relevant to the types of protest we're interested in)
We didn't include studies from protests prior to the 1960s. Even though this boundary is slightly fuzzy, we think the political context from prior to this time was too different to current times to be useful.
We focused primarily on empirical papers rather than theory-based ones, although we did include a small number of theoretical papers to explain the mechanisms behind some of the findings we observed
We included study designs using observational and experimental methods
As there's only one meta-analysis on this topic (from the 1980s), we mainly included primary research papers and didn't have the option to rely on meta-analyses or systematic reviews.
In reality, there weren't that many papers that fit all our criteria as this is a reasonably small and under-studied field, so we think we covered the vast majority of papers that fit our criteria above
Hi Dan - thanks for this! Definitely agree in that protest movements can be hits-based and most don't do much but the best ones can be hugely influential. That's definitely one of the hardest questions to resolve e.g. how do we predict which movements will fall into the latter bucket a priori, hence our work on identifying factors of successful movements. We're planning on doing some more work on this in the next few months so will keep you posted and definitely hope it's helpful to Giving Green!
I haven't read this fully (yet! will respond soon) but very quick clarification - Charity Entrepreneurship weren't talking about this as an organisation. Rather, there's a few different orgs with a bunch of individuals who use the CE office and happened to be talking about it (mostly animal people in this case). So I wouldn't expect CE's actual work to reflect that conversation given it only had one CE employee and 3 others who weren't!
Maybe I'm nosy but I would be keen to see some (I'm not sure how many is appropriate) applications for the FTX Future Fund on the forum, either as a main post or in shortforms to not clog up the main feed. Specifically maybe things that could a) be megaprojects down the line or b) had applications for around $500,000-$1m+. We've had one already but I'm sure there's lots more very interesting ones out there.
This is really interesting, thanks for this! In particular, it was really helpful comparing it to previous less-rigorously designed surveys, as I'm sure you expected pushback using those results. I had a few quite preliminary questions:
Do you think the effects of this could be different for different documentaries, and is this something you would consider testing in the future? Whilst in the paper you state that "Good For Us" uses psychological theory to make the documentary as compelling as possible to shift attitudes and behaviour, it feels quite hard to predict the emotional/attitudinal impact of a documentary. Some random thoughts I had was that maybe more sensationalist documentaries (What the Health, Cowspiracy, Dominion, etc.) could be more effective even though they ignore best practice, and it would be interesting to see how this stacks up against Good For Us. As these are touted as being the most effective/popular pro-animal documentaries, it would be interesting to see how these perform under the same controlled conditions.
Obviously whilst difficult to measure, do you think these documentaries might be important in shaping beliefs that later affect eating behaviour? A common analogy we hear is about "planting a seed" whereby one exposure to pro-animal content might not cause any behaviour change, but it primes them for later exposures which might then have more significant impacts on behaviour change. You talk about repeated exposures briefly in the paper but it would be interesting to hear your thoughts on how plausible you think this mechanism is (see point below)
If repeated exposures to pro-animal content might be effective, we still might expect there to be some significant changes in this study as it should be repeated exposure for some people (unless you screened them out) so maybe this point isn't so strong
Do you think there are other long-term mechanisms that might be at play here e.g. the documentary causes more animal-focused conversations with friends and family, which might cause behaviour change past the 12-day mark? Do you think a follow-up after 2-3 months (for example) would introduce too much noise to have strong causal evidence?
More broadly, what implications do you think this has for the farmed animal movement in terms of funding documentaries vs other interventions, and where do you think more work is needed?
This is a great post and (in my opinion) a super important topic - thanks for writing it up! We (at the Charity Entrepreneurship office) were actually talking about this today and funnily enough, made similar points you listed above why it might not be a problem (e.g. it's too infeasible to colonise space with animals). Generally though we agreed that it could be a big problem and it's not obvious how things are going to play out.
A potentially important thing we spoke about that isn't mentioned above is how aligned future artificial general intelligence would be with the moral value of animals. AGI alignment is probably going to be affected by the moral values of the humans working on AI alignment, and there is a potential concern that a superintelligent AGI might have similar attitudes towards animal welfare to most of the human population, which is largely indifference to their suffering. This might mean we design superintelligent AGI that is okay with treating animals as resources within its calculations, rather than as intelligent and emotional beings who have the capacity to suffer. This could, potentially, lead to factory farming scenarios worse than what we have today, as AGI would ruthlessly optimise for production with zero concern for animal welfare, which some farmers would at least consider nowadays. Not only could the moment-to-moment suffering of animals be potentially worse, this could be a stable state that is "locked in" for long periods of time, depending on the dominance of this AGI and the values that created it. In essence, we could lock in centuries (or longer) of intensely bad suffering for animals in some Orwellian scenario where AGI doesn't include animals as morally relevant actors.
There are obviously some other important factors that will drive the calculations of this AGI if/when designing or implementing food production systems, namely: cost of materials, accessibility, ability to scale, etc. This might mean that animal products are naturally a worse option relative to plant-based or cultivated counterparts but in the cases where it is more efficient to use animal-based products (which will also be improved in efficiency by AGI), the optimisation of this by AGI could be extremely concerning for animal suffering.
Obviously I'm not sure how likely this is to happen, but the outcome seems extremely bad so it's probably worth putting some thought into it, as I'm not sure what is happening currently. It was just a very distressing conclusion to come to that this could happen but I'm glad to see other people are thinking about this (and hopefully more will join!)
Thanks for this - has been very interesting to read and glad Animal Ask has been looking into this!
I've got some pushback on your point that other campaigns within animal advocacy similarly serve the stigmatisation argument (which I think is central to your argument):
However, there are many other animal advocacy campaigns that involve a similar stigmatisation process such as veganism or reducetarianism, corporate campaigns, and policy change campaigns. Veganism and vegetarianism even work through the similar principle of a boycott, and the symbolism is largely the same. Moreover, since the arguments for the direct effects of these campaigns are much stronger, divestment appears to be a generally weaker campaign option in the animal advocacy context.
I'm not sure I agree with this. I think the stigmatisation provided by divestment campaigns is quite different to veganism (either as a boycott or outreach), corporate campaigns and policy change campaigns. For one, divestment campaigns are generally extremely targeted at the industry and a) making them look bad and/or b) making other institutions ashamed of working with them. Some reasons I don't think your examples do these elements very well:
Vegan or reducetarian outreach doesn't target the animal agriculture industry in the same way divestment does, in that it's focused on changing the minds of individuals. Whilst vegan outreach might make the industry look bad, I think this effect isn't that big, it's not the main goal and it certainly doesn't make other institutions ashamed of working with them. In the case of vegan outreach, the aim is often some combination of generating concern in one person for the environment, animal welfare and one's health, which (in my opinion) is usually done without any explicit stigmatisation of the animal ag industry. Often, animal welfare concerns aren't even the main reasons given to go vegan, as health and environmental concerns might dominate a lot more, as in recent years.
Even when vegan outreach strongly pushes animal welfare concerns for going vegan over health or environmental reasons, I feel like it sounds more like "do you care about animal welfare" rather than "This industry is responsible for the death of billions of sentient beings and destruction of our planet". I think the latter is much more likely to cause industry stigmatisation, yet is rarely ever implemented.
On people going vegan themselves (your boycott point): I'm not sure how this would significantly influence cultural perceptions of the animal ag industry as it's an invisible act of omission, in that society is not really monitoring individual diet preferences, whereas big visible acts of divestment are often covered widely in the media. In addition, divestment campaigns themselves, even if unsuccessful (in their stated aims), often garner lots of media coverage in a way that the corporate campaigns etc. fail to (in my opinion). I think this media attention is crucial to highlight the bad practices of the industry and therefore delegitimise it, and a gap that other animal advocacy methods aren't quite filling (also in my opinion).
I would argue the same points above are true for corporate campaigns and policy change issues in that:
They are not directly optimising for making the industry look bad, so will leave a lot of value on the table.
This seems especially true as in both cases, you essentially need industry support for the corporate campaign / policy to be realised, so you can't attack the industry too directly for fear of creating a strong backlash.
Whilst corporate and policy change campaigns probably generate more media coverage than vegan outreach, I think this is less than divestment-esque campaigns. One small data point is that Animal Rebellion's style of campaigning (more similar to the fossil free divestment movement) garnered 800+ media mentions in 2.5 years, and I think no corporate campaigns etc. have been on a similar level of public attention.
The example you give that I definitely agree with is undercover investigations, which do make the industry look bad and occasionally cause some institutions to withdraw their support. I just think this alone isn't enough, and we need more efforts to delegitimise the industry as a whole, but we don't currently have much of this happening.
P.S. I wrote this fairly quickly so might have missed some points, and sorry if it comes across as blunt - that's definitely not intended!
I agree with this! I guess my reasoning behind this post was that if EA is a movement that claims to do (impartial) good, and some other group does something great by our own metrics, how come we missed this? It seems like EA has a big mission of trying to do the most good, so surely we should always be looking for opportunities to do so?
Around 2018, I think there was comparatively much less activity in the EA climate world so I took this as a sign that people must have updated in some way to thinking this was a more important problem to work on. A point that I didn't mention which might be true for Open Phil / Rethink is that growing concern for how climate change will affect global health and development could be a big factor, rather than the extreme tail risk scenarios.
I'm happy to answer your questions, we're working on our introduction post now so it'll be up by the end of next week hopefully. For the record, I didn't strong downvote your comment or "assert" anything but I'm not sure this conversation will be a productive dialogue anymore so I'll send you the document once we've finished it.
International mass movement lobbying against x-risks
Biorisk and Recovery from Catastrophe, Great Power Relations, Values and Reflective Processes
In recent years, there has been a dramatic growth in grassroots movements concerned about climate change, such as Fridays for Future and Extinction Rebellion. Some evidence suggests that these movements might be instrumental in shifting public opinion around a topic, changing dominant narratives, influencing voting behaviour and affecting policymaker beliefs. Yet, there are many more pressing existential risks that receive comparatively little attention, such as nuclear security, unaligned AI, great power conflict, and more. We think an international movement focused on promoting key values, such as concern for future generations and the importance of reducing existential risk, could have significant spillover effects on public opinion, policy, and the broader development of positive societal values. This could be a massively scalable project, with the potential to develop hubs in over 1,000 cities across 100+ countries (approximately the same as Extinction Rebellion Global).
NB: I'm aware this might not be a good idea for biorisk due to infohazards.
Values and Reflective Processes, Research That Can Help Us Improve
If we want to motivate a broad spectrum of people about the importance of doing good and ensuring the long-term goes well, it's imperative we find out which messages are "sticky" and which ones are forgotten quickly. Testing various communication frames, particularly for key target audiences like highly talented students, will support EA outreach projects in better tailoring their messaging. Better communications could hugely increase the number of people that consume EA content, relate to the values of the EA movement, and ultimately commit their life to doing good. We'd be excited to see people testing various frames and messaging, across a range of target audiences, using methodologies such as surveys, focus groups, digital media, and more.
The amount of funding committed to Effective Altruism has grown dramatically in the past few years, with an estimated $46 billion currently earmarked for EA. With this significant increase in available funding, there is now a greatly increased need for talented and thoughtful grantmakers who can effectively deploy this money. It's plausible that yearly EA grantmaking could increase by a factor of 5-10x over the coming decade, and this requires finding and training new grantmakers on best practices, as well as developing sound judgement. We'd love to see projects that build the grantmaker pipeline, whether that's grantmaking fellowships, grantmaker mentoring, more frequent donor lotteries, more EA Funds-style organisations with rotating fund managers, and more.
NB: This might be a refinement of fellowships, but I think it's particularly important.
I'm curious about your "1-10% as effective as the LTFF" figure. Would you say that's because you think AI safety is roughly 10-100x more pressing (important, neglected, tractable, etc.) than nuclear security, marginal reasons around NTI vs LTFF giving opportunities, or a fairly even mix of both?
Hi Saulius, thanks for your kind words! I do agree the longer-term ideas would be good to incorporate, and I actually thought I put something about AI timelines in the alternative protein section but it seems like I didn't. I definitely agree something like AI within the next 50 years (which is plausible, as the links you reference say) could massively speed up the development of low-cost alternative proteins, so that should be a factor pushing it towards being more likely. On other ways that it would change the world to affect farmed animals, as you say, that definitely does seem more complicated, so it would be interesting to get the take of someone who works on AI.
On other considerations around human extinction, global catastrophes and other events that could change the future of humanity in huge ways, I agree it definitely does make it harder to plan and it's not obvious what we should do in these cases. I think those cases probably a) warrant a lot more thought and b) seem much harder to design interventions for that will be robustly good. As Martin and you talk about below, it seems extremely challenging to predict good solutions for potentially very different futures whereas making the next 50 years go well for animals seems comparatively easier, and I generally believe making the next 50 years go well will be good for the next 500-5,000 years too (although this might not always be true).
I guess to clarify some of your points, is it that medium-term strategy may be unimportant as things could change very significantly, so we should try to find ways to steer these future scenarios in ways that are conducive to good animal welfare (e.g. make sure ALLFED isn't proposing insects etc.)?
At Social Change Lab, we're conducting some research trying to understand the impacts of protest movements, to inform whether various EA cause areas (e.g. animal advocacy, biosecurity, climate, etc.) should utilise protest as an effective strategy for change. We're doing an informal survey to understand the current uncertainties EAs have around protest and what forms of evidence people find the most compelling, as this will inform our research priorities.
So I would be very grateful if people would be up for completing this 2-3 minute survey on current attitudes and understanding of protest movements. Thank you!
I've seen surprisingly little talk about the Open Philanthropy Regranting Challenge here or on other EA discussion forums. In short, they want to give away $150 million to other foundations working on human health, economic development and climate change, to roughly double the grantmaking of other effective foundations. This seems interesting for several reasons:
It could be quite high leverage to find/recommend foundations that meet their criteria (e.g. they give over $10 million/year)
It's the first case of an EA foundation doing this and generally this seems quite rare within the grant-making space. Seems like Open Phil is really embodying their principle of hits-based giving (as well as their commitment to learning/improving).
This seems to be the biggest / first major foray that Open Phil is making into climate change to my knowledge and I'm wondering what spurred this. Seems to be coming more from a global development standpoint based on the other focus areas, as opposed to an existential risk angle. Could have been influenced by the other major donor (see below).
It's the first time Open Phil has mentioned major donors besides Cari Tuna and Dustin Moskovitz, by saying that Lucinda Southworth contributed to this too. It makes me wonder how many major donors of this size Open Phil is working with, and if it's now part of their strategy to find more billionaire-sized donors.
What do other people think of this? Any particular foundations that people would want Open Phil to consider strongly for this?
There was quite an interesting survey commissioned by YouGov in the UK on reasons for veganism/vegetarianism, as well as some questions around alternative proteins and eating insects.
Concern for animals seems to be the dominating reason for people going vegan and veggie, although environmental concerns are also high.
These reasons become broader after going vegan e.g. people develop a wider range of reasons for staying vegan compared to the original reason they went vegan (concern for the environment seems to rise the most).
Surprisingly, 23% of vegans purchase new fur products. I'm not really sure what to make of this as this is literally against the standard definition of veganism.
35% of vegans and 42% of vegetarians think it's unacceptable for vegans and veggies to eat lab-grown meat. This seems really high and I'm not sure why people feel this way.
5% of vegans think it's okay for vegans to eat insects, which seems much lower than the fur question but still a bit odd imo.
Formatting point - your link for 'the long reflection' seems to be broken here:
Again, I wish to recognise that many community leaders strongly support steering – e.g., by promoting ideas like ‘moral uncertainty’ and ‘the long reflection’ or via specific community-building activities.
We, Effective Environmentalism, are organising more upcoming talks from those tackling climate change using an EA or EA-adjacent approach. We've got three quite exciting talks (one rescheduled from the last round) lined up over the next three months so if anyone is interested in learning more, do sign up below. You can also see previous talks on our YouTube Channel and sign up to our newsletter (+ see other ways to get involved) here.
Sunday, January 23rd, 6-7pm GMT - Good news on climate change + what is a worst case scenario? By Dr John Halstead from Forethought Foundation. Sign up here
In this talk, John will firstly discuss some good news on climate change: on current policy, emissions look set to be lower than once feared, as is the risk of very high climate sensitivity. Secondly, John will discuss a worst-case scenario in which we burn all of the fossil fuels: how many fossil fuels are there, how likely we are to burn them, how we might do so if we did, the warming that would produce, and what that might mean for life on Earth.
Saturday, February 5th, 6:30-7:30pm GMT - The role of carbon removal in achieving climate goals - by Noah Deich, President and co-founder of Carbon180. Sign up here.
During the presentation, Noah Deich, President and co-founder of Carbon180, will talk about the role for carbon removal in achieving our climate goals, what solutions hold the most promise, and how civil society can influence the necessary policy changes for bringing carbon removal to scale in a beneficial way.
Sunday, March 13th, 7-8pm GMT - Electricity production & use in decarbonisation scenarios - by Matthew Dahlhausen from the National Renewable Energy Laboratory. Sign up here.
This presentation will go over basic and intermediate energy literacy, covering the electric grid, building energy services, and challenges in full decarbonisation scenarios. It will address common misconceptions around energy and electricity consumption, as well as barriers to full decarbonisation.
We're always looking for new speakers so if you might be interested or have any suggestions for potentially interesting speakers, please comment below and let me know!
For what it's worth, I wasn't genuinely saying we should hold a citizen's assembly to decide what we do with all of Open Phil's money, I just thought it was an interesting thought experiment. I'm not sure I agree that the pre-setting of the aims of an assembly is undemocratic, however, as surely all citizen's assemblies need an initial question to start from? That seems to have been the case for previous assemblies (climate, abortion, etc.).
To play devil's advocate, I'm not sure your points about the average global citizen being homophobic, religious, socialist, etc., actually matter that much when it comes to people deciding where to allocate funding for existential risk. I can't see any relationship between beliefs about which existential risks are the most severe and attitudes towards queer people, religion, or willingness to pay carbon taxes (assuming the pot of funding they allocate is fixed and doesn't affect their taxes).
Also, I don't think you've given much convincing evidence that citizens' assemblies would lead to funding for key issues falling a fair amount vs decisions by OP program officers, besides your intuition. I can't say I have much evidence myself, except that the studies provided in the report (1, 2, and 3 to a degree) would suggest the exact opposite, in that a diverse group of actors performs better than a higher-ability solo actor. In addition, if we base the success of the citizens' assembly on how well it matches our current decisions (e.g. the same amount of biorisk, nuclear and AI funding), I think we're missing the point a bit. This assumes we've got it all perfectly allocated currently, which I think is a central challenge of the paper above: it's probably allocated perfectly according to a select few people, but this by no means makes it actually true.
I think your Open Phil example could be an interesting experiment. Do you think that if Open Phil commissions a citizen's assembly to allocate their existential risk spending and the input is given by their researchers / program officers, it would be wildly different to what they would do themselves?
In any scenario, I think it would be quite interesting, as surely if our worldviews and reasoning are strong enough to support big unusual claims (e.g. strong longtermism), we should be able to convince a random group of people that they hold? And if not, is that a problem with the people selected, our communication skills, or the thinking itself? I personally don't think it would be a problem with the people (see past successes of citizens' assemblies)*, so shouldn't we be testing our theories to see if they make sense under different worldviews and demographic backgrounds? And if they don't seem robust to other people, we should probably try to integrate the reasons why (within reason, of course).
*There are probably some arguments to be made here that we don't necessarily expect the allocation from this representative group, even when informed perfectly by experts, to be the optimal allocation of resources, so we're not maximising utility / doing the most good. This is probably true, but I guess balancing this against moral uncertainty is the trade-off we have to live with? Quite unsure on this though, seems fuzzy.
You say that decisions about which risks to take should be made democratically. The implication of this seems to be that everyone, and not just EAs, who is aiming to do good with their resources should donate only to their own government. Their govt could then decide how to spend the money democratically.
I'm not fully sure that deciding which risks to take seriously in a democratic fashion logically leads to donating all of your money to the government. Some reasons I think this:
That implies that we all think our governments are well-functioning democracies, but I (amongst many others) don't believe that to be true. I think it's a fairly common sentiment, and common knowledge, that political myopia by politicians, vested interests and other influences mean that governments don't implement policies that are best for their populations.
As I mentioned in another comment, I think the authors are saying that as existential risks affect the entirety of humanity in a unique way, this is one particular area where we should be deciding things more democratically. This isn't necessarily the case for spending on education, healthcare, animal welfare, etc, so there it would make sense you donate to institutions that you believe are more effective and the bar for democratic input is lower. The quote from the paper that makes me think this is:
Tying the study of a topic that fundamentally affects the whole of humanity to a niche belief system championed mainly by an unrepresentative, powerful minority of the world is undemocratic and philosophically tenuous.
Thirdly, I think this point is weaker, but most political parties aren't elected by the majority of the population in the country. One cherry-picked example is that only 45% of UK voters voted for the Conservative party, and we only had a 67% election turnout, meaning that most of the country didn't actually vote for the winning party. It then seems odd that, if you think the outcome would have been different given a higher voter turnout (closer to "true democracy"), you would give all your donations to the winning party.
Note - I don't necessarily agree with the premise we should prioritise risks democratically but I also don't think what you've said re donating all of our money to the government is the logical conclusion from that statement.