Posts

Disruptive climate protests in the UK didn’t lead to a loss of public support for climate policies 2022-06-20T15:07:56.683Z
Megaprojects for animals 2022-06-13T09:20:59.871Z
The dangers of high salaries within EA organisations 2022-06-10T07:54:48.176Z
Initial research on social movements & protest 2022-03-28T14:02:25.351Z
When did EA miss a great opportunity to do good? 2022-03-09T10:50:25.203Z
Some thoughts on recent Effective Altruism funding announcements 2022-03-03T15:53:56.373Z
Potential Theories of Change for the Animal Advocacy movement 2022-02-09T22:13:50.425Z
A case for the effectiveness of protest 2021-11-29T11:50:08.321Z
[Linkpost] GiveWell money moved in 2020 - up by 60%! 2021-11-13T17:25:20.209Z
Upcoming Effective Environmentalism talks 2021-11-06T12:06:48.767Z
Analysis of EA funding within Animal Welfare from 2019-2021 2021-09-27T19:03:09.860Z
JamesOz's Shortform 2021-04-08T16:51:03.759Z
How we averted 130,000 animal deaths (in expectation) with a volunteer campaign. 2021-04-05T07:50:01.147Z

Comments

Comment by James Ozden (JamesOz) on Disruptive climate protests in the UK didn’t lead to a loss of public support for climate policies · 2022-06-20T21:12:08.919Z · EA · GW

Thanks Johannes!

We thought about doing this but ruled it out, as there would be a pretty clear bias: the people most likely to hear about Just Stop Oil are already climate-conscious, and are therefore more susceptible to positive shifts. We did do this informally and found a positive correlation between knowledge of Just Stop Oil and the constructs, but I don't think it's particularly robust.

Thinking out loud, but one way to control for this might be to run the comparison within groups of people who gave the same answer to "How concerned are you about climate change?" in the first survey, although this might make our sample sizes quite small / no longer representative. A rough sketch of what I mean is below.
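(To make this concrete, here's a minimal sketch of the stratified comparison I have in mind, with hypothetical column names and made-up data for the two survey waves; this is illustrative only, not our actual analysis code.)

```python
import pandas as pd

# Hypothetical data: wave-1 climate concern (1-5), awareness of Just Stop Oil,
# and wave-2 policy-support score. All values are made up for illustration.
df = pd.DataFrame({
    "concern_w1": [3, 3, 5, 5, 1, 1],
    "heard_of_jso": [True, False, True, False, True, False],
    "support_w2": [4.2, 3.9, 4.8, 4.7, 2.1, 2.0],
})

# Compare support between aware and unaware respondents *within* each
# level of prior concern, so pre-existing concern can't drive the gap.
stratified = (
    df.groupby(["concern_w1", "heard_of_jso"])["support_w2"]
      .mean()
      .unstack("heard_of_jso")
)
print(stratified)
```

The small-sample worry shows up here directly: each (concern level, awareness) cell needs enough respondents for its mean to be meaningful.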

Comment by James Ozden (JamesOz) on How accurate are Open Phil's predictions? · 2022-06-17T16:58:25.103Z · EA · GW

I'm probably missing something, but doesn't the graph show OP is under-confident in the 0-10 and 10-20 bins? Those data points are above the dotted grey line of perfect calibration, whereas the 90%+ bin is far below it.
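(For reference, a toy sketch of how I'm reading calibration plots, with made-up numbers: observed frequency above the stated probability means under-confidence in that bin; below means over-confidence.)

```python
# Made-up bins for illustration: mean stated probability vs observed frequency.
bins = {
    "0-10%":   {"predicted": 0.05, "observed": 0.12},  # above the diagonal
    "10-20%":  {"predicted": 0.15, "observed": 0.22},  # above the diagonal
    "90-100%": {"predicted": 0.95, "observed": 0.80},  # below the diagonal
}

for name, b in bins.items():
    gap = b["observed"] - b["predicted"]
    verdict = "under-confident" if gap > 0 else "over-confident"
    print(f"{name}: predicted {b['predicted']:.0%}, "
          f"observed {b['observed']:.0%} -> {verdict}")
```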

Comment by James Ozden (JamesOz) on The dangers of high salaries within EA organisations · 2022-06-11T07:51:21.092Z · EA · GW

I totally agree - like I said above, I don't think paying above market rate is necessarily erroneous; I was just responding to Khorton's question of how many EA orgs actually pay above market rate. And as you point out, attracting top talent to tackle important research questions is very important, and I definitely agree that this is the main perk of paying higher salaries.

In the case of research, I also agree! Academic salaries are far too low, and benchmarking to academia isn't necessarily the best reference class anyway (as one could potentially do research in the private sector and get paid much more).

Comment by James Ozden (JamesOz) on The dangers of high salaries within EA organisations · 2022-06-10T21:42:51.050Z · EA · GW

Hey Stefan, thanks again for this response; I'll reply with the attention it deserves!

I think there are non-trivial numbers of highly committed effective altruists - who would make very careful decisions regarding what research questions to prioritise and tackle, and who would be very careful about hiring decisions - who would not be willing to work for a low salary.

I definitely agree, and I talk about this in my piece as well e.g. in the introduction I say "There are clear benefits e.g. attracting high-calibre individuals that would otherwise be pursuing less altruistic jobs, which is obviously great." So I don't think we're in disagreement about this, but rather I'm questioning where the line should be drawn, as there must be some considerations to stop us raising salaries indefinitely. Furthermore, in my diagrams you can see that there are similarly altruistic people that would only be willing to work at higher salaries (the shaded area below).

Conversely, I think there are many people who, e.g. come from the larger non-profit or do-gooding world would be willing to work for a low salary, but who wouldn't be very committed to effective altruist principles.

This is an interesting point and one I hadn't considered. I find it slightly hard to believe, as I imagine EA is quite esoteric (e.g. full of weird moral views), so I struggle to imagine many people clamouring to work for an organisation focused on wild animal welfare or AI safety when they could work on an issue they care about more (e.g. climate change) for a similar salary.

So I don't think we have any particular reason to expect that lower salaries would be the most effective way of ensuring that decisions about, e.g. research prioritisation or hiring are value-aligned. That is particularly so since, as you notice in the introduction, lower salaries have other downsides.

Again, I would agree that it's not the most effective way of ensuring value alignment within organisations, but I would say it's an important factor.

For instance, in research on the general population led by Lucius Caviola, we found a relatively weak correlation between what we call "expansive altruism" (willingness to give resources to others, including distant others) and "effectiveness-focus" (willingness to choose the most effective ways of helping others). Expansive altruism isn't precisely the same thing as willingness to work for a low salary, and things may look a bit differently among potential applicants to effective altruist jobs - but it nevertheless suggests that willingness to work for a low salary need not be as useful a costly signal as it may seem.

This was actually really useful for me and I would definitely say I was generally conflating "willingness to work for a lower salary" with "value-alignment". I've probably updated more towards your view in that "effectiveness-focus" is a crucial component of EA that wouldn't be selected for simply by being willing to take a lower salary, which might more accurately map to "expansive altruism".

For these reasons, I think it's better for EA recruiters to try to gauge, e.g. inclinations towards cause-neutrality, willingness to overcome motivated reasoning, and other important effective altruist traits, directly, rather than to try to infer them via their willingness to accept a low salary - since those inferences will typically not have a high degree of accuracy.

I agree this is probably the best outcome and certainly what I would like to happen, but I also think it's challenging. Posts such as Vultures Are Circling highlight people trying to "game" the system in order to access EA funding, and I think this problem will only grow. Therefore, I think EA recruiters might face difficulty discerning between a 7/10 and an 8/10 EA-aligned candidate, which I think could matter at the community level. Maybe I'm overplaying the problem and it's actually extremely easy to discern values using various recruitment processes, but I think this is unlikely.

Comment by James Ozden (JamesOz) on The dangers of high salaries within EA organisations · 2022-06-10T21:23:06.904Z · EA · GW

Thanks for the correction - I'll edit this in the comment above as I agree my phrasing was too weak. Apologies as I didn't mean to underplay the significance of the pay cut and financial sacrifice yourself and others took - I think it's substantial (and inspiring). 

Comment by James Ozden (JamesOz) on The dangers of high salaries within EA organisations · 2022-06-10T19:06:04.334Z · EA · GW

Yeah, this is a useful way of thinking about the issue of market rate, so thanks for this! I think the ability to earn more in non-EA orgs relative to EA roles is true for some people, and potentially most people, but I also think it's context-dependent.

For example, I've spoken with a reasonable number of early-career EAs (in the UK) for whom working at EA orgs is actually probably the highest-paying option available (or very close), relative to what they could reasonably get hired for. So whilst I think it's true for some EAs that EA jobs offer less* pay relative to their other options, I don't think it's universal. I can imagine you might agree, so the question might be: how much of the community does this represent, and is it uniform? So maybe to clarify, I think EA orgs are paying more than I would expect for certain skillsets, e.g. junior-ish ops people, rather than across the board.

*edited due to comment below 

Comment by James Ozden (JamesOz) on The dangers of high salaries within EA organisations · 2022-06-10T18:54:03.357Z · EA · GW

Ah yes, that's definitely fair - sorry if I was misrepresenting RP! I wasn't referring to intra-organisation comparisons when I made that comment, but rather to comparisons across organisations like The Humane League / ACE vs 80K / CEA.

Comment by James Ozden (JamesOz) on The dangers of high salaries within EA organisations · 2022-06-10T16:33:51.049Z · EA · GW

Yeah, I think this is a good question. I can think of several of the main EA orgs that do this, in particular for roles around operations and research (which aren't generally paid that well in the private sector, unless you're doing them at a FAANG company etc.). In addition, EA community-building pays much more than other non-profit community building (in the absence of much private-sector community building).

Some of these comparisons also feel hard because people often do roles at EA orgs they weren't doing in the private sector, e.g. going from consulting or software development to EA research, where you would probably be earning less than your previous market rate, but not less than the market rate for your new job.

There's one example comparison here, and to clarify, I think this is most true for more meta/longtermist organisations, as salaries within animal welfare (for example) are still quite low IMO. I can think of 3-4 different roles within the past 2 months that pay above market rate (in my opinion), some of which I'll list below:

  • 80K paying £58,400 for an operations specialist with one year of experience doing ops. For context, a friend of mine did project management for 2-3 years at a City law firm and was making £40-50k
  • Rethink Priorities paying $65,000 or £52k for a research assistant. This definitely feels higher than academic research assistants and probably private sector ones too (although not sure what a good reference class is)
  • Open Phil paying $100,000+ for an operations associate.
  • CEA expression of interest for a people operations specialist (sounds like a somewhat junior role, I could be wrong) - salary of £56-68,000. Similar to the 80K private sector comparison, I think market rate for this would be closer to £40k for a junior role.
  • As per Cynwit's comment: Office Manager, New York EA Hub: $85,000-$100,000, versus Office Manager salaries in New York from Glassdoor: ~$55,000

(not implying these are bad calls, but that I think they're above market rate)

Comment by James Ozden (JamesOz) on The dangers of high salaries within EA organisations · 2022-06-10T16:05:13.657Z · EA · GW

Thanks for the thoughtful engagement and kind words, Stefan! I'm going to respond to the rest of your points in full later, but there's one quick clarification I wanted to make, which might mean our viewpoints aren't so dissimilar.

As far as I understand, you are effectively saying that effective altruists should pay low salaries

Just want to be very clear that low salaries are not what I think EA orgs should pay! I quite deliberately used the term 'moderate' rather than low, because I don't think paying low salaries is good (for reasons you and I both mentioned). I could have been more explicit, but I'm talking about concerns with more orgs paying $150,000+ (or 120%+ of market rate, as a semi-random number) salaries on a regular basis, not about paying people $80,000 or so. Obviously exceptions apply, like I mentioned to Khorton below, but salaries should at least be at the point where everyone's (and their families'/dependents') material needs can be met.

Do you have any thoughts on this? Surely at some point salaries become excessive, have bad optics, or offer counterfactually poor marginal returns, and the challenge is identifying where that point is.

(I'll update the main body of my post to be clearer as well.)

Comment by James Ozden (JamesOz) on The dangers of high salaries within EA organisations · 2022-06-10T15:52:26.875Z · EA · GW

Thanks for raising this and I totally agree with your point. I think I could have been clearer in two aspects of this:

  • Exceptions obviously apply. I'm not advocating for everyone getting paid a uniform amount, or for pay being decided independent of personal circumstances. If people have circumstances or dependents which mean they need additional income, they should obviously get it. So even with the 'moderate' salaries at EA orgs I spoke about, I think both of the people in your examples should still get paid what they need.
  • Additionally, I'm not talking about paying everyone "low" salaries, but rather "moderate" ones instead of potentially "high" ones in the future. I could have been more explicit, but I'm talking about concerns with more orgs paying $150,000+ salaries, not about paying people $80,000 or so. Obviously exceptions apply, like I mentioned above, but salaries should at least be at the point where everyone's (and their families'/dependents') material needs can be met.
Comment by James Ozden (JamesOz) on EA is more than longtermism · 2022-05-04T18:12:57.468Z · EA · GW

Some things from EA Global London 2022 that stood out for me (I think someone else might have mentioned one of them):

  • An email to everyone promoting Will's new book (on longtermism)
  • Giving out free bookmarks about Will's book when picking up your pass.

These things might feel small, but considering this is one of the main EA conferences, having the actual conference organisers associate so strongly with the promotion of a longtermist book (albeit one by one of the main founders of EA) made me think "Wow, CEA is really trying to push longtermism to attendees". This seems quite reasonable given the potential significance of the book; I just wonder whether CEA has done this for any other worldview-focused books recently (in the last 1-3 years) or would do so in the future, e.g. for a new book on animal farming.

Curious to get someone else's take on this or if it just felt important in my head.

Other small things:

  • On the sidebar of the EA Forum, there are three recommended articles: Replacing Guilt, the EA Handbook (which, as you mentioned here, is mostly focused on longtermism) and The Most Important Century by Holden. Again, essentially 1.5 longtermist texts to <0.5 from other worldviews.

As the main landing page for EA discussion, this also feels like a reasonably significant nudge in a specific direction.

On a somewhat related point, I do generally think there are far fewer 'thought leaders' for global health or animal-inclusive worldviews relative to the longtermist one. For example, we have people such as Holden, Ben Todd, Will MacAskill etc. who all produce reasonably frequent and great content on why longtermism is compelling, yet very few people (if any?) are doing content creation or thought leadership on that level for neartermist worldviews. This might be another reason why longtermist content is much more frequently signposted too, but I'm not 100% sure on this.

[FWIW I do find longtermism quite compelling, but it also seems amiss to not mention the cultural influence longtermism has in certain EA spaces]

Comment by James Ozden (JamesOz) on JamesOz's Shortform · 2022-04-27T16:04:29.549Z · EA · GW

[From my blog, Understanding Social Change]

I've just written a blog post summarising some of our recent research into the effectiveness of protest movements, plus some additional nuance and commentary that doesn’t fit neatly into external articles we recently published. Main things covered:

Comment by James Ozden (JamesOz) on How many EAs failed in high risk, high reward projects? · 2022-04-26T13:33:47.588Z · EA · GW

I think it would be great to have some directory of attempted but failed projects. Often I've thought "Oh I think X is a cool idea, but I bet someone more qualified has already tried it, and if it doesn't exist publicly then it must have failed" but I don't think this is often true (also see this shortform about the failure of the efficient market hypothesis for EA projects). Having a list of attempted but shut down (for whatever reason) projects might encourage people to start more projects, as we can really see how little of the idea space has been explored in practice.

 

There are a few helpful write-ups (e.g. shutting down the longtermist incubator), but in addition to detailed post-mortems, I would be keen to see a low-effort directory (Airtable or even Google Sheets?) of attempted projects, with who tried them, contact details (with permission), why they stopped, etc. If people are interested in this, I can make a preliminary spreadsheet that we can start populating, but other suggestions are of course welcome.

Comment by James Ozden (JamesOz) on EA Forum's interest in cause-areas over time and other statistics · 2022-04-10T20:58:31.085Z · EA · GW

This is super interesting, thanks for doing this! One question: how did you decide to put the tags in the buckets you did? I'm wondering because some of the groupings seem fairly arbitrary, and by drawing different boundaries you might get quite different results. For example, I was just checking out your tags script and saw that you have things like nuclear security, nuclear winter, etc. in "Catastrophic risks" rather than in "long_term_risks_and_flourishing", although they could also fit in the latter category. I think this is especially true for these two categories, as most things in "catastrophic risks" would fit neatly into "long-term risks", e.g. biosecurity, great power conflict, etc. If this were the case, the number of existential-risk-related Forum posts would be much higher than you indicate (although the trends might still be similar, even if the absolute values are different).

I appreciate this might be an annoying nitpick as the categories will always be subjective, but thought this might change the results somewhat.

(P.S. I was trying to run an amended version of this to check for myself, but had some problems with your code - apparently tags has no attribute tag_types. Agreed with David below, though: it would be nice to have a dynamic version so others could more easily re-run your code with slightly varied tagging. A rough sketch of the kind of sensitivity check I mean is below.)
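(To make the sensitivity check concrete, here's a minimal sketch with hypothetical tag names and made-up post counts, not your script's actual data structures:)

```python
# Made-up post counts per tag, for illustration only.
posts_per_tag = {
    "nuclear security": 40,
    "nuclear winter": 15,
    "biosecurity": 80,
    "ai risk": 200,
}

def bucket_totals(buckets):
    """Sum post counts over the tags assigned to each bucket."""
    return {
        name: sum(posts_per_tag.get(tag, 0) for tag in tags)
        for name, tags in buckets.items()
    }

# Bucketing A: nuclear/bio topics under "catastrophic_risks".
bucketing_a = {
    "catastrophic_risks": ["nuclear security", "nuclear winter", "biosecurity"],
    "long_term_risks_and_flourishing": ["ai risk"],
}
# Bucketing B: fold those topics into the long-term bucket instead.
bucketing_b = {
    "catastrophic_risks": [],
    "long_term_risks_and_flourishing": [
        "nuclear security", "nuclear winter", "biosecurity", "ai risk",
    ],
}

print(bucket_totals(bucketing_a))  # long-term bucket: 200
print(bucket_totals(bucketing_b))  # long-term bucket: 335
```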

Comment by James Ozden (JamesOz) on NunoSempere's Shortform · 2022-04-09T07:13:13.422Z · EA · GW

One thing I can never figure out is where the missing Open Phil donations are! According to their own internal comms (e.g. this job advert) they gave away roughly $450 million in 2021. Yet when you look at their grants database, you only find about $350 million, which is a fair bit short. Any idea why this might be? 

I think it could be something to do with contractor agreements (e.g. they gave $2.8 million to Kurzgesagt and said they don't tend to publish similar contractor agreements). Curious to see the breakdown of the other approx. $100 million though!

Comment by James Ozden (JamesOz) on Case for emergency response teams · 2022-04-05T20:47:26.249Z · EA · GW

I think if you're looking to hire someone for this role, you might want to provide a lot more information about it (expected hours, responsibilities, start date, salary, etc.). Currently there's virtually no information provided, and I wouldn't expect you to find great, qualified candidates - which would be a shame given how useful this project could be!

Comment by James Ozden (JamesOz) on Divestment From Animal Agriculture: What Does It Achieve? · 2022-04-01T10:44:58.623Z · EA · GW

Sorry I never replied, but here's a very quick note on what I thought our main disagreement was; maybe we're closer than I initially thought! I interpreted your conclusion to be something along the lines of "We shouldn't do any divestment as other approaches are less risky and more effective", but your final paragraph above is basically the view I hold too:

I do think there should be people trying divestment in the animal advocacy context and seeing how it goes, but unless the results proved us wrong, based on the arguments in this report, I wouldn't recommend a big shift of resources towards it.

Basically I totally agree, in that we should have a couple of campaigns/organisations trying divestment in a somewhat rigorous way to get some good learnings out of it, before deciding whether to stop it completely or scale it up. It's just that when I read your sentence:

We think that, given the existing evidence, many existing animal advocacy campaigns will be more effective and less risky than divestment.

I interpreted this as saying we shouldn't do or invest in it at all! Not sure if it's just me, but I think adding what you said above ("some people should try it with some limited resources to test it properly") to the conclusion would really help with understanding your final recommendation. Thanks for all your work on this again - super interesting!
 

Comment by James Ozden (JamesOz) on What are great marketing ideas to encourage pre-orders of What We Owe The Future? · 2022-04-01T09:08:03.615Z · EA · GW

Thinking about your (b), encouraging pre-orders from young people who might switch to high-impact careers, I have a few preliminary thoughts:

  • A lot of young (and older) people are interested in climate change, with some (a lot?) of that being driven by concern for the future, and the lives of future generations.
  • Due to that, I think that climate-interested folks are a particularly good audience for this book, as they're already a) thinking altruistically, b) somewhat concerned about the future and c) generally young and able to pivot their career.
  • If we think these assumptions are true, the question is what forms of media young/climate-interested people engage with, and generally how we reach them. Some ideas:
    • I would say a lot of people read the Guardian (and Open Phil sponsors some content there), so it could be worth trying sponsored advertisements in the Guardian environment section. This could also be true for the Guardian more generally, as I assume left-leaning people will be more interested in this relative to the average member of the public.
      • You could also advertise via their podcasts, e.g. Science Weekly
    • Our World in Data social media channels and website (although I'm sure you've already got this covered as Max Roser seems very supportive)
    • Loads of Instagram advertising (I think this is more the young, climate-concerned demographic relative to Facebook)
    • Climate podcasts: TIL Climate, How to Save a Planet, For What It's Earth, Outrage + Optimism, The Climate Question
    • Extinction Rebellion UK has a mailing list of 200,000-300,000 reasonably engaged and passionate climate folks, and there's a small (5-10%) chance I could get Will's book featured on it (message me if you're interested in this)
    • Very speculative: Go to some major festivals in the UK this summer (Glastonbury, Shambala, Green Gathering, etc.)
Comment by James Ozden (JamesOz) on EA Forum feature suggestion thread · 2022-03-30T16:57:55.837Z · EA · GW

I'm not sure if this is exactly a feature suggestion so much as a concern (highlighted here) that I agree with, which is basically: the number of Forum users seems to be growing quite a lot (congrats!), with many more posts, so some posts that might be high-effort slip under the radar or disappear quite quickly (see Ian David Moss' comment). Is there anything the Forum team is doing to mitigate this (someone suggested a higher density of posts on the front page), or otherwise any thoughts on this topic?

Other possible solutions (some already mentioned, and I'm not sold on any of them) could be:

  • Sub-communities like Reddit
  • Greater emphasis on people using the Shortform feature for short or link posts rather than the main page
  • EA Librarian or Q&A things could go into a different section (somewhat like a Shortform? I'm quite unsure about this though)
Comment by James Ozden (JamesOz) on Initial research on social movements & protest · 2022-03-29T11:27:39.428Z · EA · GW

Thanks Ben - this is much appreciated! Agreed there's still lots more work to be done to find these plausibly high-impact options, so fingers crossed we can make some decent headway. Likewise to you on Effective Self-Help, great to see your research on that front!

Comment by James Ozden (JamesOz) on Initial research on social movements & protest · 2022-03-29T11:24:49.367Z · EA · GW

Thanks Emily, much appreciated! I also really enjoyed your recent work on interventions that influence animal product consumption so thanks for doing that.

For methodology, that's a good point and definitely something we should include more information on, so we will do that for an updated version in the near future. Not sure if you saw it, but we do have a database of resources we compiled whilst doing this, if you want to see the inputs.

On how we actually found the included pieces: this was a mix of methods, and we didn't do it in a systematic way akin to your work, although we might consider doing so in the future (suggestions welcome if you think it's a good idea!). As we were mainly doing this for our own understanding and to get the lay of the land, we didn't think it was too crucial to do a systematic analysis (and our advisors also suggested this). But a few of the ways we found papers:

  • Tools such as Google Scholar, ResearchRabbit and Elicit that help find studies adjacent to your question or to other studies you're interested in. We used keyword searches such as "protest outcomes", "protest effectiveness", "impacts of protest", etc. for the outcomes work, and similar variations of keywords for the success-factors work. This is how we found the majority of the useful studies.
  • We looked at the research groups and prior publications of basically all the academics we found using the above method, which was especially useful for finding newer papers and academics newer to the field (e.g. those who had just joined a relevant research group)
  • I read two academic-focused books on the relevant topics (How Social Movements Matter and Prisms of the People), Sam read 1-2 similar books, and we found literature via those
  • We interviewed 5 academics who had influential papers in the various fields and asked them to recommend the most important / key papers in the field, which was useful to make sure we didn't miss anything crucial (we probably found 3-4 additional papers this way).
  • Someone else had conducted a systematic analysis (sadly not public) on an adjacent sub-field within social movements so we found some useful papers this way too.

 

Quite roughly, I'll outline some of the criteria we used:

  • We only included studies of protest used for aims other than regime change (e.g. we didn't include Erica Chenoweth's famous work on toppling dictators, as this isn't really relevant to the types of protest we're interested in)
  • We didn't include studies of protests prior to the 1960s. Even though this boundary is slightly fuzzy, we think the political context before this time was too different from current times to be useful.
  • We focused primarily on empirical papers rather than theory-based ones, although we did include a small number of theoretical papers to explain the mechanisms behind some of the findings we observed
  • We included study designs using observational and experimental methods
  • As there's only one meta-analysis on this topic (from the 1980s), we mainly included primary research papers and didn't have the option to rely on meta-analyses or systematic reviews.
  • In reality, there weren't that many papers that fit all our criteria, as this is a reasonably small and under-studied field, so we think we covered the vast majority of papers that fit the criteria above
Comment by James Ozden (JamesOz) on Initial research on social movements & protest · 2022-03-29T10:55:23.031Z · EA · GW

Hi Dan - thanks for this! Definitely agree that protest movements can be hits-based: most don't do much, but the best ones can be hugely influential. That's definitely one of the hardest questions to resolve, i.e. how do we predict a priori which movements will fall into the latter bucket; hence our work on identifying the factors of successful movements. We're planning on doing some more work on this in the next few months, so I'll keep you posted, and I definitely hope it's helpful to Giving Green!

Comment by James Ozden (JamesOz) on Who is protecting animals in the long-term future? · 2022-03-23T12:29:44.860Z · EA · GW

I haven't read this fully (yet! will respond soon) but a very quick clarification: Charity Entrepreneurship weren't talking about this as an organisation. Rather, there are a few different orgs with individuals who use the CE office, and some of them (mostly animal people in this case) happened to be talking about it. So I wouldn't expect CE's actual work to reflect that conversation, given it only involved one CE employee and three people who weren't!

Comment by James Ozden (JamesOz) on JamesOz's Shortform · 2022-03-22T21:27:04.615Z · EA · GW

Maybe I'm nosy, but I would be keen to see some (I'm not sure how many is appropriate) applications for the FTX Future Fund on the Forum, either as main posts or in shortforms to avoid clogging up the main feed - specifically things that could a) be megaprojects down the line or b) involved applications for around $500,000-$1m+. We've had one already, but I'm sure there are lots more very interesting ones out there.

Comment by James Ozden (JamesOz) on Effectiveness of a theory-informed documentary to reduce consumption of meat and animal products: three randomized controlled experiments · 2022-03-21T18:19:28.662Z · EA · GW

This is really interesting, thanks for this! In particular, the comparison with previous, less rigorously designed surveys was really helpful, as I'm sure you expected pushback based on those results. I had a few quite preliminary questions:

  • Do you think the effects could differ between documentaries, and is this something you would consider testing in the future? Whilst in the paper you state that "Good For Us" uses psychological theory to make the documentary as compelling as possible for shifting attitudes and behaviour, it feels quite hard to predict the emotional/attitudinal impact of a documentary. One random thought: maybe more sensationalist documentaries (What the Health, Cowspiracy, Dominion, etc.) could be more effective even though they ignore best practice. As these are touted as being the most effective/popular pro-animal documentaries, it would be interesting to see how they perform against Good For Us under the same controlled conditions.
  • Whilst obviously difficult to measure, do you think these documentaries might be important in shaping beliefs that later affect eating behaviour? A common analogy we hear is "planting a seed", whereby one exposure to pro-animal content might not cause any behaviour change but primes people for later exposures, which might then have more significant impacts on behaviour. You talk about repeated exposures briefly in the paper, but it would be interesting to hear your thoughts on how plausible you think this mechanism is (see the point below)
    • If repeated exposure to pro-animal content is effective, we might still expect some significant changes in this study, as it should constitute a repeated exposure for some participants (unless you screened them out), so maybe this point isn't so strong
  • Do you think there are other long-term mechanisms that might be at play here, e.g. the documentary causing more animal-focused conversations with friends and family, which might cause behaviour change past the 12-day mark? Do you think a follow-up after 2-3 months (for example) would introduce too much noise to provide strong causal evidence?
  • More broadly, what implications do you think this has for the farmed animal movement in terms of funding documentaries vs other interventions, and where do you think more work is needed?
Comment by James Ozden (JamesOz) on Who is protecting animals in the long-term future? · 2022-03-21T17:48:22.715Z · EA · GW

This is a great post and (in my opinion) a super important topic - thanks for writing it up! We (at the Charity Entrepreneurship office) were actually talking about this today and, funnily enough, made similar points to those you listed above about why it might not be a problem (e.g. it's too infeasible to colonise space with animals). Generally, though, we agreed that it could be a big problem and it's not obvious how things are going to play out.

A potentially important thing we spoke about that isn't mentioned above is how aligned future artificial general intelligence would be with the moral value of animals. AGI alignment is probably going to be affected by the moral values of the humans working on it, and there is a potential concern that a superintelligent AGI might have similar attitudes towards animal welfare as most of the human population, which is largely indifference to their suffering. This might mean we design superintelligent AGI that is okay with treating animals as resources within its calculations, rather than as intelligent, emotional beings with the capacity to suffer. This could, potentially, lead to factory farming scenarios worse than what we have today, as AGI would ruthlessly optimise for production with zero concern for animal welfare, which some farmers would at least consider nowadays. Not only could the moment-to-moment suffering of animals be worse, but this could also be a stable state that is "locked in" for long periods of time, depending on the dominance of this AGI and the values that created it. In essence, we could lock in centuries (or longer) of intensely bad suffering for animals in some Orwellian scenario where AGI doesn't include animals as morally relevant actors.

There are obviously some other important factors that would drive this AGI's calculations if/when designing or implementing food production systems, namely cost of materials, accessibility, ability to scale, etc. This might mean that animal products are naturally a worse option relative to plant-based or cultivated counterparts, but in the cases where it is more efficient to use animal-based products (whose production would also be made more efficient by AGI), the optimisation of this by AGI could be extremely concerning for animal suffering.

Obviously I'm not sure how likely this is to happen, but the outcome seems extremely bad, so it's probably worth putting some thought into it, and I'm not sure what is happening on this currently. It was a very distressing conclusion to come to, but I'm glad to see other people are thinking about this (and hopefully more will join!)

Comment by James Ozden (JamesOz) on Divestment From Animal Agriculture: What Does It Achieve? · 2022-03-16T16:50:59.065Z · EA · GW

Thanks for this - has been very interesting to read and glad Animal Ask has been looking into this!

I've got some pushback on your point that other campaigns within animal advocacy similarly serve the stigmatisation argument (which I think is central to your argument):

However, there are many other animal advocacy campaigns that involve a similar stigmatisation process such as veganism or reducetarianism, corporate campaigns, and policy change campaigns. Veganism and vegetarianism even work through the similar principle of a boycott, and the symbolism is largely the same. Moreover, since the arguments for the direct effects of these campaigns are much stronger, divestment appears to be a generally weaker campaign option in the animal advocacy context.

I'm not sure I agree with this. I think the stigmatisation provided by divestment campaigns is quite different to that from veganism (either as a boycott or outreach), corporate campaigns, and policy change campaigns. For one, divestment campaigns are generally extremely targeted at the industry, at a) making it look bad and/or b) making other institutions ashamed of working with it. Some reasons I don't think your examples do these things very well:

  • Vegan or reducetarian outreach doesn't target the animal agriculture industry in the way divestment does, in that it's focused on changing the minds of individuals. Whilst vegan outreach might make the industry look bad, I think this effect isn't that big; it's not the main goal, and it certainly doesn't make other institutions ashamed of working with the industry. In the case of vegan outreach, the aim is often some combination of generating concern in one person for the environment, animal welfare and one's health, which (in my opinion) is usually done without any explicit stigmatisation of the animal ag industry. Often, animal welfare concerns aren't even the main reasons given to go vegan, as health and environmental concerns have dominated much more in recent years.[1]
    • Even when vegan outreach strongly pushes animal welfare concerns for going vegan over health or environmental reasons, I feel like it sounds more like "do you care about animal welfare" rather than "This industry is responsible for the death of billions of sentient beings and destruction of our planet". I think the latter is much more likely to cause industry stigmatisation, yet is rarely ever implemented.
    • Arguably the most important part of divestment work is making other institutions ashamed of collaborating with the industry in question, or getting them to withdraw their ties to it, which veg*n outreach campaigns don't do at all. This is especially true for big cultural institutions, e.g. museums, galleries, universities, etc.
  • On people going vegan themselves (your boycott point): I'm not sure how this would significantly influence cultural perceptions of the animal ag industry as it's an invisible act of omission, in that society is not really monitoring individual diet preferences, whereas big visible acts of divestment are often covered widely in the media. In addition, divestment campaigns themselves, even if unsuccessful (in their stated aims), often garner lots of media coverage in a way that the corporate campaigns etc. fail to (in my opinion). I think this media attention is crucial to highlight the bad practices of the industry and therefore delegitimise it, and a gap that other animal advocacy methods aren't quite filling (also in my opinion).
  • I would argue the same points above are true for corporate campaigns and policy change campaigns, in that:
    • They are not directly optimising for making the industry look bad, so will leave a lot of value on the table.
    • This seems especially true as, in both cases, you essentially need industry support for the corporate campaign / policy to be realised, so you can't attack the industry too directly for fear of creating a strong backlash.
    • Whilst corporate and policy change campaigns probably generate more media coverage than vegan outreach, I think this is less than divestment-esque campaigns get. One small data point is that Animal Rebellion's style of campaigning (more similar to the fossil fuel divestment movement) garnered 800+ media mentions in 2.5 years, and I think no corporate campaigns have been at a similar level of public attention.

The example you give that I definitely agree with is undercover investigations, which do make the industry look bad and occasionally cause some institutions to withdraw their support. I just think this alone isn't enough, and we need more efforts to delegitimise the industry as a whole, but we don't currently have much of this happening.

P.S. I wrote this fairly quickly so might have missed some points, and sorry if it comes across as blunt - that's definitely not intended!

  1. ^

To check this, I just googled "reasons to go vegan" and the top 3 links were mainly about health reasons, a mix of health and animal welfare concerns, and more industry-focused animal welfare content (the last is more in line with what you're suggesting, but I definitely don't think it's the norm - well done THL).

Comment by James Ozden (JamesOz) on EA Projects I'd Like to See · 2022-03-14T10:56:17.572Z · EA · GW

What about a philanthropic version of this, where someone sponsors stories about EA topics to be published and open-accessed on popular newspapers like The Guardian or The Atlantic.

Not sure if you're aware but Open Phil does sponsor a segment of the Guardian focused on farmed animals and has done so since 2017.

Comment by James Ozden (JamesOz) on Update from Open Philanthropy’s Longtermist EA Movement-Building team · 2022-03-11T01:53:04.552Z · EA · GW

Thanks for writing this up, I found the transparency around your perceived mistakes and future uncertainty incredibly refreshing and inspiring!

Comment by James Ozden (JamesOz) on When did EA miss a great opportunity to do good? · 2022-03-10T17:14:59.174Z · EA · GW

I agree with this! I guess my reasoning behind this post was that if EA is a movement that claims to do (impartial) good, and some other group does something great by our own metrics, how come we missed it? It seems like EA has a big mission of trying to do the most good, so surely we should always be looking for opportunities to do so?

Comment by James Ozden (JamesOz) on When did EA miss a great opportunity to do good? · 2022-03-09T14:22:47.656Z · EA · GW

Good point - my main rationale for saying this was the increased number of organisations/roles within EA working on climate in the past few years, for example:

  • Founders Pledge starting in climate work around 2020 and now with a team of 3-4 (roughly)
  • Giving Green being incubated in 2020, now with a team of 6
  • Forethought doing work on climate risk (via John Halstead mainly, I think)
  • FHI now has someone working on climate
  • FTX Climate is now a thing
  • Rethink Priorities recently hired someone to work on climate within their global health and development team
  • Open Phil has introduced climate into their regranting challenge

Around 2018, I think there was comparatively much less activity in the EA climate world, so I took this as a sign that people must have updated in some way towards thinking this was a more important problem to work on. A point I didn't mention, which might be true for Open Phil / Rethink, is that growing concern about how climate change will affect global health and development could be a big factor, rather than the extreme tail-risk scenarios.

Comment by James Ozden (JamesOz) on Andrew Smith (Liverpool, UK)'s Shortform · 2022-03-09T10:00:02.356Z · EA · GW

Founders Pledge are a pretty good example, with $615 million already given to charity and over $5 billion committed across 1,600+ entrepreneurs

Comment by James Ozden (JamesOz) on Some thoughts on recent Effective Altruism funding announcements · 2022-03-03T22:01:20.400Z · EA · GW

I'm happy to answer your questions; we're working on our introduction post now, so it'll hopefully be up by the end of next week. For the record, I didn't strong-downvote your comment or "assert" anything, but I'm not sure this conversation will be a productive dialogue anymore, so I'll send you the document once we've finished it.

Comment by James Ozden (JamesOz) on Some thoughts on recent Effective Altruism funding announcements · 2022-03-03T21:35:46.429Z · EA · GW

Hi Charles, I'm quite confused by this comment (especially the subtext) and messaged you directly to hopefully sort this out.

Comment by James Ozden (JamesOz) on Some thoughts on recent Effective Altruism funding announcements · 2022-03-03T18:25:26.907Z · EA · GW

Yes, my bad! This is actually what I meant, i.e. the epistemic uncertainty around longtermist interventions makes it challenging to determine funding allocation. Will amend this, thank you!

Comment by James Ozden (JamesOz) on Announcing the Future Fund · 2022-03-02T21:50:37.140Z · EA · GW

Yes, it's not clear how much engagement FTX will have in animal advocacy but they have already made some grants in the space and probably will make more, via FTX Community.

Comment by James Ozden (JamesOz) on The Future Fund’s Project Ideas Competition · 2022-03-01T10:27:09.994Z · EA · GW

International mass movement lobbying against x-risks

Biorisk and Recovery from Catastrophe,  Great Power Relations, Values and Reflective Processes

In recent years, there has been dramatic growth in grassroots movements concerned about climate change, such as Fridays for Future and Extinction Rebellion. Some evidence implies that these movements might be instrumental in shifting public opinion around a topic, changing dominant narratives, influencing voting behaviour and affecting policymaker beliefs. Yet many other pressing existential risks receive comparatively little attention, such as nuclear security, unaligned AI, great power conflict, and more. We think an international movement focused on promoting key values, such as concern for future generations and the importance of reducing existential risk, could have significant spillover effects on public opinion, policy, and the broader development of positive societal values. This could be a massively scalable project, with the potential to develop hubs in over 1,000 cities across 100+ countries (approximately the same as Extinction Rebellion Global).


NB: I'm aware this might not be a good idea for biorisk due to infohazards.

Comment by James Ozden (JamesOz) on The Future Fund’s Project Ideas Competition · 2022-03-01T00:34:03.163Z · EA · GW

Refining EA communications and messaging

Values and Reflective Processes, Research That Can Help Us Improve

If we want to motivate a broad spectrum of people about the importance of doing good and ensuring the long-term goes well, it's imperative we find out which messages are "sticky" and which ones are forgotten quickly. Testing various communication frames, particularly for key target audiences like highly talented students, will support EA outreach projects in better tailoring their messaging. Better communications could hugely increase the number of people that consume EA content, relate to the values of the EA movement, and ultimately commit their life to doing good. We'd be excited to see people testing various frames and messaging, across a range of target audiences, using methodologies such as surveys, focus groups, digital media, and more.

Comment by James Ozden (JamesOz) on The Future Fund’s Project Ideas Competition · 2022-02-28T23:59:49.379Z · EA · GW

Building the grantmaker pipeline

Empowering Exceptional People, Effective Altruism

The amount of funding committed to Effective Altruism has grown dramatically in the past few years, with an estimated $46 billion currently earmarked for EA. With this significant increase in available funding, there is now a greatly increased need for talented and thoughtful grantmakers who can effectively deploy this money. It's plausible that yearly EA grantmaking could increase by a factor of 5-10x over the coming decade, and this requires finding and training new grantmakers in best practices, as well as developing sound judgement. We'd love to see projects that build the grantmaker pipeline, whether that's grantmaking fellowships, grantmaker mentoring, more frequent donor lotteries, more EA Funds-style organisations with rotating fund managers, or something else.

NB: This might be a refinement of fellowships, but I think it's particularly important.

Comment by James Ozden (JamesOz) on Why aren't EA funders funding the NTI? · 2022-02-28T16:53:44.075Z · EA · GW

I'm curious about your "1-10% as effective as the LTFF" figure. Would you say that's because you think AI safety is roughly 10-100x more pressing (important, neglected, tractable, etc.) than nuclear security, because of marginal reasons around NTI vs LTFF giving opportunities, or a fairly even mix of both?

Comment by James Ozden (JamesOz) on We need more nuance regarding funding gaps · 2022-02-15T10:26:55.381Z · EA · GW

Maybe Joey can clear it up but I believe it's the number of funders in that bucket, as an indication of funder diversity.

Comment by James Ozden (JamesOz) on Potential Theories of Change for the Animal Advocacy movement · 2022-02-10T19:30:54.353Z · EA · GW

Hi Saulius, thanks for your kind words! I do agree the longer-term ideas would be good to incorporate - I actually thought I'd put something about AI timelines in the alternative protein section, but it seems I didn't. I definitely agree that something like AI within the next 50 years (which is plausible, as the links you reference say) could massively speed up the development of low-cost alternative proteins, so that should be a factor pushing it towards being more likely. On the other ways AI could change the world in ways that affect farmed animals, as you say, that definitely seems more complicated, so it would be interesting to get the take of someone who works on AI.

On other considerations around human extinction, global catastrophes and other events that could change the future of humanity in huge ways, I agree they make it harder to plan, and it's not obvious what we should do in those cases. I think those cases probably a) warrant a lot more thought and b) seem much harder to design robustly good interventions for. As you and Martin discuss below, it seems extremely challenging to predict good solutions for potentially very different futures, whereas making the next 50 years go well for animals seems comparatively easier, and I generally believe making the next 50 years go well will be good for the next 500-5,000 years too (although this might not always be true).

I guess to clarify some of your points: is it that medium-term strategy may be unimportant as things could change very significantly, so we should instead try to find ways to steer these future scenarios in directions that are conducive to good animal welfare (e.g. make sure ALLFED isn't proposing insects, etc.)?

Comment by James Ozden (JamesOz) on JamesOz's Shortform · 2022-02-10T18:41:42.598Z · EA · GW

At Social Change Lab, we're conducting some research trying to understand the impacts of protest movements, to inform whether various EA cause areas (e.g. animal advocacy, biosecurity, climate, etc.) should utilise protest as an effective strategy for change. We're doing an informal survey to understand the current uncertainties EAs have around protest and what forms of evidence people find the most compelling, as this will inform our research priorities. 

So I would be very grateful if people would be up for completing this 2-3 minute survey on current attitudes and understanding of protest movements. Thank you!

Comment by James Ozden (JamesOz) on JamesOz's Shortform · 2022-02-06T13:12:48.782Z · EA · GW

I've seen surprisingly little talk about the Open Philanthropy Regranting Challenge here or on other EA discussion forums. In short, they want to give away $150 million to other foundations working on human health, economic development and climate change, roughly doubling the grantmaking of other effective foundations. This seems interesting for several reasons:

  • It could be quite high-leverage to find/recommend foundations that meet their criteria (e.g. that they give over $10 million/year)
  • It's the first case of an EA foundation doing this, and generally this seems quite rare within the grantmaking space. It seems like Open Phil is really embodying its principle of hits-based giving (as well as its commitment to learning/improving).
  • This seems to be Open Phil's biggest / first major foray into climate change to my knowledge, and I'm wondering what spurred it. It seems to be coming more from a global development standpoint, based on the other focus areas, as opposed to an existential risk angle. It could have been influenced by the other major donor (see below).
  • It's the first time Open Phil has mentioned major donors besides Cari Tuna and Dustin Moskovitz, by noting that Lucinda Southworth contributed to this too. It makes me wonder how many major donors of this size Open Phil is working with, and whether finding more billionaire-sized donors is now part of their strategy.

What do other people think of this? Any particular foundations that people would want Open Phil to consider strongly for this?

Comment by James Ozden (JamesOz) on JamesOz's Shortform · 2022-01-21T12:01:09.024Z · EA · GW

There was quite an interesting survey commissioned by YouGov in the UK on reasons for veganism/vegetarianism, as well as some questions around alternative proteins and eating insects. 

Key points:

  • Concern for animals seems to be the dominant reason for people going vegan and veggie, although environmental concerns are also high.
  • These reasons become broader after going vegan, e.g. people develop a wider range of reasons for staying vegan compared to the original reason they went vegan (concern for the environment seems to rise the most).
  • Surprisingly, 23% of vegans purchase new fur products. I'm not really sure what to make of this as this is literally against the standard definition of veganism.
  • 35% of vegans and 42% of vegetarians think it's unacceptable for vegans and veggies to eat lab-grown meat. This seems really high, and I'm not sure why people feel this way
  • 5% of vegans think it's okay for vegans to eat insects, which seems much lower than the fur question would imply, but still a bit odd IMO.
Comment by James Ozden (JamesOz) on Rowing and Steering the Effective Altruism Movement · 2022-01-09T20:43:11.797Z · EA · GW

Formatting point - your link for 'the long reflection' seems to be broken here:

Again, I wish to recognise that many community leaders strongly support steering – e.g., by promoting ideas like ‘moral uncertainty’ and ‘the long reflection’ or via specific community-building activities.

Comment by James Ozden (JamesOz) on JamesOz's Shortform · 2022-01-07T15:58:35.782Z · EA · GW

We, Effective Environmentalism, are organising more upcoming talks from people tackling climate change using an EA or EA-adjacent approach. We've got three quite exciting talks (one rescheduled from the last round) lined up over the next three months, so if anyone is interested in learning more, do sign up below. You can also see previous talks on our YouTube channel and sign up to our newsletter (+ see other ways to get involved) here.

Sunday, January 23rd, 6-7pm GMT - Good news on climate change + what is a worst case scenario? By Dr John Halstead from Forethought Foundation. Sign up here

In this talk, John will firstly discuss some good news on climate change: on current policy, emissions look set to be lower than once feared, as is the risk of very high climate sensitivity. Secondly, John will discuss a worst-case scenario in which we burn all of the fossil fuels: how many fossil fuels are there, how likely we are to burn them, how we might do so if we did, the warming that would produce, and what that might mean for life on Earth.

Saturday, February 5th, 6:30-7:30pm GMT - The role of carbon removal in achieving climate goals - by Noah Deich, President and co-founder of Carbon180. Sign up here.

During the presentation, Noah Deich, President and co-founder of Carbon180, will talk about the role for carbon removal in achieving our climate goals, what solutions hold the most promise, and how civil society can influence the necessary policy changes for bringing carbon removal to scale in a beneficial way.

Sunday, March 13th, 7-8pm GMT - Electricity production & use in decarbonisation scenarios - by Matthew Dahlhausen from the National Renewable Energy Laboratory. Sign up here.

This presentation will go over basic and intermediate energy literacy, covering the electric grid, building energy services, and challenges in full decarbonisation scenarios. It will address common misconceptions around energy and electricity consumption, as well as barriers to full decarbonisation.

 

We're always looking for new speakers so if you might be interested or have any suggestions for potentially interesting speakers, please comment below and let me know!

Comment by James Ozden (JamesOz) on Democratising Risk - or how EA deals with critics · 2021-12-29T17:32:34.220Z · EA · GW

For what it's worth, I wasn't genuinely saying we should hold a citizens' assembly to decide what we do with all of Open Phil's money; I just thought it was an interesting thought experiment. I'm not sure I agree that pre-setting the aims of an assembly is undemocratic, however, as surely all citizens' assemblies need an initial question to start from? That seems to have been the case for previous assemblies (climate, abortion, etc.).

To play devil's advocate, I'm not sure your points about the average global citizen being homophobic, religious, socialist, etc. actually matter that much when it comes to people deciding where to allocate funding for existential risk. I can't see any relationship between beliefs about which existential risks are the most severe and attitudes towards queer people, religiosity, or willingness to pay carbon taxes (assuming the pot of funding they allocate is fixed and doesn't affect their taxes).

Also, I don't think you've given much convincing evidence that citizens' assemblies would lead to funding for key issues falling a fair amount versus decisions by OP program officers, besides your intuition. I can't say I have much evidence myself, except that the studies (1, 2, and 3 to a degree) provided in the report would suggest the exact opposite: a diverse group of actors performs better than a higher-ability solo actor. In addition, if we judge the success of a citizens' assembly by how well it matches our current decisions (e.g. the same amount of biorisk, nuclear and AI funding), I think we're missing the point a bit. This assumes our current allocation is perfect, which I think is a central challenge of the paper above: the allocation may look perfect to a select few people, but that by no means makes it actually so.

Comment by James Ozden (JamesOz) on Democratising Risk - or how EA deals with critics · 2021-12-29T00:18:49.560Z · EA · GW

I think your Open Phil example could be an interesting experiment. Do you think that if Open Phil commissioned a citizens' assembly to allocate their existential risk spending, with input given by their researchers / program officers, the result would be wildly different from what they would do themselves?

In any scenario, I think it would be quite interesting, as surely if our worldviews and reasoning are strong enough to support big, unusual claims (e.g. strong longtermism), we should be able to convince a random group of people of them? And if not, is that a problem with the people selected, our communication skills, or the thinking itself? I personally don't think it would be a problem with the people (see past successes of citizens' assemblies)*, so shouldn't we be testing our theories to see if they make sense under different worldviews and demographic backgrounds? And if they don't seem robust to other people, we should probably try to integrate the reasons why (within reason, of course).

*There are probably some arguments to be made here that we shouldn't necessarily expect the allocation from this representative group, even when informed perfectly by experts, to be the optimal allocation of resources, so we wouldn't be maximising utility / doing the most good. This is probably true, but I guess balancing this against moral uncertainty is the trade-off we have to live with? Quite unsure on this though; it seems fuzzy.

Comment by James Ozden (JamesOz) on Democratising Risk - or how EA deals with critics · 2021-12-28T19:58:08.708Z · EA · GW

You say that decisions about which risks to take should be made democratically. The implication of this seems to be that everyone, and not just EAs, who is aiming to do good with their resources should donate only to their own government. Their govt could then decide how to spend the money democratically.

I'm not fully sure that deciding which risks to take seriously in a democratic fashion logically leads to donating all of your money to the government. Some reasons I think this:

  • That implies that we all think our governments are well-functioning democracies, but I (amongst many others) don't believe that to be true. I think it's a fairly common sentiment that political myopia, vested interests and other influences mean governments don't implement the policies that are best for their populations.
  • As I mentioned in another comment, I think the authors are saying that because existential risks affect the entirety of humanity in a unique way, this is one particular area where we should be deciding things more democratically. This isn't necessarily the case for spending on education, healthcare, animal welfare, etc., so there it would make sense to donate to institutions you believe are more effective, and the bar for democratic input is lower. The quote from the paper that makes me think this is:

Tying the study of a topic that fundamentally affects the whole of humanity to a niche belief system championed mainly by an unrepresentative, powerful minority of the world is undemocratic and philosophically tenuous.

  • Thirdly, I think this point is weaker, but most political parties aren't elected by the majority of the population in their country. One cherry-picked example: only 45% of UK voters voted for the Conservative party, and we only had a 67% election turnout, meaning that most of the country didn't actually vote for the winning party. It then seems odd that if you think the outcome would have been different given a higher voter turnout (closer to "true democracy"), you would give all your donations to the winning party.

Note - I don't necessarily agree with the premise that we should prioritise risks democratically, but I also don't think what you've said re donating all of our money to the government is the logical conclusion of that statement.