Insects Raised for Food and Feed — Global Scale, Practices, and Policy 2020-06-29T13:57:31.653Z · score: 61 (18 votes)
Notes on how a recession might impact giving and EA 2020-03-13T18:17:24.865Z · score: 40 (22 votes)
Global Cochineal Production: Scale, Welfare Concerns, and Potential Interventions 2020-02-11T21:33:20.225Z · score: 28 (14 votes)
Should Longtermists Mostly Think About Animals? 2020-02-03T14:40:23.242Z · score: 61 (37 votes)
Uncertainty and Wild Animal Welfare 2019-07-19T13:33:51.533Z · score: 24 (16 votes)
A Research Agenda for Establishing Welfare Biology 2019-03-15T18:24:51.099Z · score: 22 (11 votes)
Announcing Wild Animal Initiative 2019-01-25T17:23:30.758Z · score: 28 (14 votes)


Comment by abrahamrowe on X-risks to all life v. to humans · 2020-06-03T20:41:48.453Z · score: 2 (2 votes) · EA · GW

Nope - fixed. Thanks for pointing that out.

Comment by abrahamrowe on X-risks to all life v. to humans · 2020-06-03T20:01:30.221Z · score: 4 (4 votes) · EA · GW

Thanks for sharing this!

I happen to have made a not-very-good model a month or so ago to try to get a sense of how much the possibility of future species that care about x-risks impacts x-risk today. It's here, and it has a bunch of issues (like assuming that a new species will take the same amount of time to evolve from now as humans took to evolve since the first neuron, and assuming that none of Ord's x-risks reduce the possibility of future moral agents evolving, etc.), and possibly doesn't even get at the important things mentioned in this post.

But based on the relatively bad assumptions in it, it spat out that if we generally expect moral agents who reach Ord's 16% 100-year x-risk to evolve every 500 million years or so (assuming an existential event happens), and that most of the value of the future is beyond the next 0.8 to 1.2B years, then we ought to adjust Ord's figure down to 9.8% to 12%.

I don't think either the figure or the approach should be taken at all seriously though, as I spent only a couple minutes on it and didn't think at all about better ways to do this - just writing this explanation of it has shown me a lot of ways in which it is bad. It just seemed relevant to this post and I wasn't going to do anything else with it :).
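The basic arithmetic of an adjustment like that can be sketched as follows. This is a minimal sketch, not the actual spreadsheet model: it assumes the adjustment takes the form raw risk × (1 − probability that a successor species captures the future's value), and the successor probabilities are simply backed out from the 9.8%-12% range above, purely for illustration.

```python
# Sketch of the adjustment logic: if extinction now still leaves a chance
# p_successor that a later-evolving species captures most of the future's
# value, the "effective" x-risk is lower than the raw figure.
# This is an assumed model form, not the original spreadsheet.

raw_risk = 0.16  # Ord's estimated x-risk over the next 100 years

def adjusted_risk(raw, p_successor):
    """X-risk discounted by the chance a successor species recovers the value."""
    return raw * (1 - p_successor)

# Back out the successor probabilities implied by the 9.8%-12% range:
for adjusted in (0.098, 0.12):
    p = 1 - adjusted / raw_risk
    print(f"adjusted risk {adjusted:.1%} implies p_successor of {p:.1%}")
```

Under this (assumed) model form, the 9.8%-12% range corresponds to roughly a 25%-39% chance that a successor species captures the value.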

Comment by abrahamrowe on Wild Animal Welfare Meetup (Spring 2020) · 2020-04-26T17:31:20.261Z · score: 1 (3 votes) · EA · GW

Yeah, it's interesting to see that across the board. My sense is that wild animal welfare work (and farmed animal work) is very much funding constrained. Relevant to this - Open Philanthropy doesn't currently fund EA wild animal welfare work.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-04-14T13:37:48.885Z · score: 1 (1 votes) · EA · GW

Thanks for this. I think the major lesson for me from the comments / conversations here is that many longtermists have much stronger beliefs in the possibility of future digital minds than I thought, and I definitely see how that belief could lead one to think that future digital minds are of overwhelming importance. However, I do think that for utilitarian longtermists, animal considerations might dominate in possible futures where digital minds don't happen or don't spread massively, so to some extent one's credence in my argument, and one's concern for future animals, ought to be determined by how much one believes in the possibility and importance of future digital minds.

As someone who is not particularly familiar with longtermist literature, outside a pretty light review done for this piece, and a general sense of this topic from having spent time in the EA community, I'd say I did not really have the impression that the longtermist community was concerned with future digital minds (outside EA Foundation, etc). Though that just may have been bad luck.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-04-14T13:26:54.410Z · score: 2 (2 votes) · EA · GW

Ah - you're totally right - that was an oversight. I'm working on a follow-up to this piece focusing more on what animal-focused longtermism looks like, which talks about moral circle expansion, so I don't know how I dropped it here :).

Comment by abrahamrowe on Why I'm Not Vegan · 2020-04-10T19:28:58.265Z · score: 9 (8 votes) · EA · GW

I appreciate your thoughtful response to my post, and think I unintentionally came across harshly. I think you and I likely disagree on how much weight to give the moral worth of animals, and what that entails about what we ought to do. But my discomfort with this post (I hope, though of course I have subconscious biases) is specifically with the unclarified statements about comparative moral worth between humans and other species. I made my comment to clarify that the reason I voted this down is that I think it is a very bad community standard to blanket-accept statements of the sort "I think that these folk X are worth less than these other folk Y" (not a direct quote from you, obviously) without stating precisely why one believes that or justifying that claim. That genuinely feels like a dangerous precedent, and without context, such statements ought to be viewed with a lot of skepticism. Likewise, if I made an argument where I assumed but did not defend the claim that people different from me are worth 1/10th as much as people like me, you likely ought to downvote it, regardless of the value of the model I might be presenting for thinking about an issue.

One small side note - I feel confused about why surveys of how the general public views animals are being cited as evidence in favor of casual estimates of animals' moral worth in these discussions. Most members of the public, myself included, aren't experts in either moral philosophy or animal sentience. And we also know that most members of the public don't view veganism as worthwhile. Using this data as evidence that animals have less moral worth strikes me as analogous to saying "most people who care more about their families than others, when surveyed, seem to believe that people outside their families are worth less morally. On those grounds, I ought to think that people outside my family are worth less morally." This kind of survey provides information on what people think about animals, but is in no way evidence of the moral status of animals. But this might be the moral realist in me, and/or an inclination toward believing that moral value is something individuals have, not something assigned to them by others :).

Comment by abrahamrowe on Why I'm Not Vegan · 2020-04-10T13:17:12.945Z · score: 7 (4 votes) · EA · GW

While you're right that the Cambridge Declaration on Consciousness was signed by few people, they were mostly very prominent and influential researchers, which was the point of the thing. But yeah, it is weak evidence on its own, I agree.

I don't know of specific survey data, but based on both the declaration and its continued influence, and the wide variety of opinions, literature reviews, etc supporting the position, my impression is that there is somewhat of a consensus, though there are occasional outliers. I believe my "to some extent, consensus" accurately captures the state of the field. Though in either case it is beside the point since Jeff assumed them to be sentient for the post. Thanks for sharing! :)

Comment by abrahamrowe on Why I'm Not Vegan · 2020-04-09T18:18:44.190Z · score: 2 (2 votes) · EA · GW

I agree that I was assuming a certain moral framework in my post - I've updated it to refer explicitly to utilitarianism of some kind, since that's a fairly common view in EA.

Thanks for the moral trade idea!

Comment by abrahamrowe on Why I'm Not Vegan · 2020-04-09T16:46:49.880Z · score: 18 (12 votes) · EA · GW

Yeah, that's fair - I was not charitable in my original comment RE whether or not there is a rationale behind those estimates, when perhaps I ought to assume there is one. But I guess part of my point is that because this argument entirely hinges on a rationale, not providing it just makes this seem very sketchy.

While I don't think human experiences and animal experiences are comparable in this direct a way, as an illustration imagine me making a post that said, "I think humans in other countries are worth 1/10 of those in my own country, therefore it seems like more of a priority to help those in my own country", and providing no reasoning or clarification for that discount. You would be justified in being very skeptical of the argument I was making, and to view my argument as low quality, even though there might be a variety of other good reasons to prioritize helping those in my own country. I don't think that kind of statement is high enough quality on its own to be entertained or to support an argument. But at its core, that's the argument in this post. I'd be interested in talking about the reasons behind those discounts, but without them, there just isn't even a way to engage with this argument that I think is productive.

For the record, I generally don't think it is a major wrong to not be vegan, and wouldn't downvote / be this critical of someone voicing something along the lines of "I really like how meat tastes, so am not vegan," etc. I am more critical here because it is an attempt to make a moral justification for not eating a vegan diet, and I think that argument not only fails, but also doesn't attempt to defend or explain core premises and assumptions, especially when aspects of those premises seem contrary to some degree of scientific evidence / consensus - and taking that evidence seriously strikes me as broadly part of the community norms.

That being said, I think it's fully possible there are good justifications for having such large discounts on the moral worth of animals, and those discounts are worth discussing. But that was glossed over here, which is why I am responding more critically.

Comment by abrahamrowe on Why I'm Not Vegan · 2020-04-09T14:40:05.481Z · score: 46 (39 votes) · EA · GW

I downvoted this, and would feel strange not talking about why:

I think there are lots of good reasons, moral or otherwise, to not be vegan - maybe you can't afford vegan food, or otherwise cannot access it. Maybe you've never heard of veganism. Maybe there are good reasons to think that the animal products you're eating aren't causing additional harm. Maybe you just like animal products a lot, and want to eat some, even though you know it is bad.

But I don't think this argument is a particularly good one, and I don't think it engages well with questions of animal ethics:

1. "I think there's a very large chance they don't matter at all, and that there's just no one inside to suffer" - this strikes me (for birds and mammals at least) as a statement in direct conflict with a large body of scientific evidence, and to some extent, consensus views among neuroscientists (e.g. the Cambridge Declaration on Consciousness). Though to be fair, you are assuming they do feel pain in this post.

2. Your weights for animals' lives seem fairly arbitrary. I agree that if those were good weights to use, maybe the moral trade-offs would be justified, but if you're just saying, with little basis, that a pig has 1/100th the moral worth of a human, I don't know how to evaluate it. It isn't an argument. It's just an arbitrary discount to make your actions feel justified from a utilitarian standpoint.

I also think these moral worth statements need more clarification - do you mean that while I (a human) feel things on the scale of -1000 to 1000, a pig only feels things on the scale of -10 to 10? Or do you mean a pig is somehow worth less intrinsically, even though it feels similar amounts of pain as me? The first statement I am skeptical of because of a lack of evidence for it, and the second seems just unjustifiably biased against pigs for no particular reason.

I generally think factory farms are pretty bad, and maybe as bad as torture. Removing cows from the equation, eating animal products requires 6.125 beings to be tortured per year per American (by the numbers you shared). I personally don't think that is a worthwhile thing to cause. Randomly assigning small moral weights to those animals to feel justified seems unscientific and odd.

I think it seems fairly clear that there is a strong case to be made, if you're someone who has the means and access to vegan food and are a utilitarian of various sorts, to eat at least a mostly vegan diet. No one has to be perfectly moral all the time, and I think it's probably okay (on average) to often not be perfectly moral. But presenting arbitrarily assigned discounts on lives until your actions are morally justified is a weak justification.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-03-31T18:18:09.914Z · score: 4 (3 votes) · EA · GW

Thanks for linking!

Yeah, that's interesting. Clearly there is major decline in some populations right now, especially large vertebrates and birds. I guess the relevant questions are: will those declines last a long time (at least a few hundred years), and is there complementary growth in other populations (invertebrates)? Especially if the species that are succeeding are smaller on average than the ones declining, as you might then expect there to be even more animals. Cephalopod populations, for example, have increased since the 1950s.

Comment by abrahamrowe on Estimates of global captive vertebrate numbers · 2020-02-18T18:55:37.426Z · score: 8 (5 votes) · EA · GW

This is really awesome and helpful! Thanks Saulius!

One group that is probably pretty small but isn't listed here - animals in wildlife rehabilitation clinics: this page says 8k to 9k animals (I'm guessing mostly vertebrates?) enter clinics in Minnesota every year. If that scales by land area for the contiguous United States, that would be 270k - 305k animals per year in the US, so maybe a few million globally? But that's just a guess from the first good source I saw.
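The scaling arithmetic behind that guess looks roughly like this. The area figures below are approximations I'm assuming for illustration (Minnesota ~225,000 km², contiguous US ~7.6 million km²), not numbers from the original source:

```python
# Back-of-envelope: scale Minnesota's wildlife-clinic intake by land area.
# Area figures are approximate and assumed for illustration.
MN_AREA_KM2 = 225_000        # Minnesota, roughly
CONUS_AREA_KM2 = 7_600_000   # contiguous United States, roughly

scale = CONUS_AREA_KM2 / MN_AREA_KM2  # roughly a 34x scale-up

low, high = 8_000 * scale, 9_000 * scale
print(f"US estimate: {low:,.0f} - {high:,.0f} animals per year")
```

With these assumed areas, the 8k-9k Minnesota figure scales to roughly 270k-304k animals per year for the contiguous US, matching the range above.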

On pet shelters - I used to work at one, and every month we reported our current animal population (along with a lot of other stats) to this organization. I think their data could probably be used to get a very accurate estimate of animals currently in shelters in the US.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-10T01:04:47.677Z · score: 2 (2 votes) · EA · GW

Yeah I think that is right that it is a conservative scenario - my point was more, the proposed future scenarios don't come close to imagining as much welfare / mind-stuff as might exist right now.

Hmm, I think my point might be something slightly different - more to pose a challenge to explore how taking animal welfare seriously might change the conclusions about the long-term future. Right now, there seems to be almost no consideration of it. I guess I think it is likely that many longtermists think animals matter morally already (given the popularity of such a view in EA). But I take your point that for general longtermist outreach, this might be a less appealing discussion topic.

Thanks for the thoughts Brian!

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-10T00:52:36.926Z · score: 6 (4 votes) · EA · GW

Yeah, the idea of looking into longtermism for nonutilitarians is interesting to me. Thanks for the suggestion!

I think regardless, this helped clarify a lot of things for me about particular beliefs longtermists might hold (to various degrees). Thanks!

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-10T00:50:07.324Z · score: 2 (2 votes) · EA · GW

That makes sense!

Comment by abrahamrowe on EA Animal Welfare Fund is looking for applications until the 6th of February · 2020-02-06T21:05:59.542Z · score: 1 (1 votes) · EA · GW


Comment by abrahamrowe on EA Animal Welfare Fund is looking for applications until the 6th of February · 2020-02-05T22:21:23.035Z · score: 2 (2 votes) · EA · GW

Hey Karolina,

Is the deadline at a specific time on February 6th, or before the 6th (i.e. EOD the 5th)? The wording is just slightly vague.

Thanks for all you do!

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-05T15:45:42.309Z · score: 1 (1 votes) · EA · GW

Thanks for the feedback - that's a good rule of thumb!

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-05T15:43:40.974Z · score: 4 (3 votes) · EA · GW

Thanks for laying out this response! It was really interesting, and I think probably a good reason to not take animals as seriously as I suggest you ought to, if you hold these beliefs.

I think something interesting that this, and the other objections that have been presented to my piece, have brought out is that to avoid focusing exclusively on animals in longtermist projects, you have to have some level of faith in these science-fiction scenarios happening. I don't necessarily think that is a bad thing, but it isn't something that's been made explicit in past discussions of longtermism (at least in the academic literature), and perhaps ought to be explicit?

A few comments on your two arguments:

Claim: Our descendants may wish to optimize for positive moral goods.
I think this is a precondition for EAs and do-gooders in general "winning", so I almost treat the possibility of this as a tautology.

This isn't usually assumed in the longtermist literature. It seems more like the argument is made on the basis of future human lives being net-positive, and it therefore being good that there will be many of them. I think the expected value of your argument (A) hinges on this claim, so accepting it as a tautology, or something similar, is actually really risky. If you think it is basically 100% likely to be true, of course your conclusion might be true. But if you don't, it seems plausible that, like you mention, the priority ought to be on s-risks.

In general, a way to summarize this argument, and others given here could be something like, "there is a non-zero chance that we can make loads and loads of digital welfare in the future (more than exists now), so we should focus on reducing existential risk in order to ensure that future can happen". This raises a question - when will that claim not be true / the argument you're making not be relevant? It seems plausible that this kind of argument is a justification to work on existential risk reduction until basically the end of the universe (unless we somehow solve it with 100% certainty, etc.), because we might always assume future people will be better at producing welfare than us.

I assume people have discussed the above, and I'm not well read in the area, but it strikes me as odd that the primary justification in these sci-fi scenarios for working on the future is just a claim that can always be made, instead of working directly on making lives with good welfare (but maybe this is a consideration with longtermism in general, and not just this argument).

I guess part of the issue here is you could have an incredibly tiny credence in a very specific number of things being true (the present being at the hinge of history, various things about future sci-fi scenarios), and having those credences would always justify deferral of action.

I'm not totally sure what to make of this, but I do think it gives me pause. But, I admit I haven't really thought about any of the above much, and don't read in this area at all.

Thanks again for the response!

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-04T15:16:42.669Z · score: 2 (2 votes) · EA · GW

Yeah, I think it probably depends on your specific credence that artificial minds will dominate in the future. I assume that most people don't place a value of 100% on that (especially if they think x-risks are possible prior to the invention of self-replicating digital minds, because necessarily that decreases your credence that artificial minds will dominate). I think if your credence in this claim is relatively low, which seems reasonable, it is really unclear to me that the expected value of working on human-focused x-risks is higher than that of working on animal-focused ones. There hasn't been any attempt that I know of to compare the two, so I can't say this with confidence though. But it is clear that saying "there might be tons of digital minds" isn't a strong enough claim on its own, without specific credences in specific numbers of digital minds.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-04T14:54:53.417Z · score: 3 (3 votes) · EA · GW

That's a good point!

I think something to note is that while I think animal welfare over the long term is important, I didn't really spend much time thinking about possible implications of this conclusion in this piece, as I was mostly focused on the justification. I think that a lot of value could be added if some research went into these kinds of considerations, or alternative implications of a longtermist view of animal welfare.


Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-04T14:51:39.706Z · score: 2 (2 votes) · EA · GW


Yes, this was noted in the sentence following your quote and in the paragraphs after this one. Note that if humans implemented extremely resilient interventions, working on human-focused x-risks might be less valuable, but I broadly agree humanity's moral personhood is a good reason to think that x-risks impacting humans are valuable to work on. Reading through my conclusions again, I could have been a bit clearer on this.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-04T04:58:47.633Z · score: 4 (4 votes) · EA · GW

Ah - I meant human, emulated or organic, since Rob referred to emulated humans in his comment. For less morally weighty digital minds, the same questions RE emulating animal minds apply, though the terms ought to be changed.

Also it seems worth noting that much of the literature on longtermism, outside the Foundational Research Institute, isn't making claims explicitly about digital minds as the primary holders of future welfare, but just focuses on future organic human populations (Greaves and MacAskill's paper, for example), and populations of a similar size to the present-day human population at that.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-04T03:03:51.731Z · score: 2 (2 votes) · EA · GW


Admittedly, I haven't thought about this extensively. I think there are a variety of x-risks that might cause humans to go extinct but not animals, such as specific bio-risks, etc. And there are x-risks that might threaten both humans and animals (a big enough asteroid?), which would fall into the group I describe. Another might just be continued human development massively decreasing animal populations (an x-risk for animals if animals have net-positive lives), though I think that might be unlikely.

I haven't given enough thought to the second question, but I'd guess that if you thought most of the value of the future was in animal lives, and not human lives, it should change something - especially given how focused the longtermist community has been on preserving only human welfare.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-04T02:58:11.297Z · score: 14 (8 votes) · EA · GW

Hey Rob!

I'm not sure that even under the scenario you describe animal welfare doesn't end up dominating human welfare, except under a very specific set of assumptions. In particular, you describe ways for human-esque minds to explode in number (propagating through space as machines or as emulations). Without appropriate efforts to change the way humans perceive animal welfare (wild animal welfare in particular), it seems very possible that 1) human/machine descendants might manufacture/emulate animal minds (and since wild animal welfare hasn't been addressed, emulate their suffering), 2) animals will continue to exist and suffer on our own planet for millennia, or 3) taking an idea from Luke Hecht, there may be vastly more wild "animals" suffering already off-Earth - if we think there are human-esque alien minds, then there are probably vastly more alien wild animals. The emulated minds that descend from humans may have to address cosmic wild animal suffering.

All three of these situations mean that even when the total expected welfare of the human population is incredibly large, the total expected welfare (or potential welfare) of animals may also be incredibly large, and it isn’t easy to see in advance that one would clearly outweigh the other (unless animal life (biological and synthetic) is eradicated relatively early in the timeline compared to the propagation of human life, which is an additional assumption).

Regardless, if all situations where humans are bound to the solar system and many where they leave result in animal welfare dominating, then your credence that animal welfare will continue to dominate should necessarily be higher than your credence that humans will leave the solar system. So neglecting animal welfare on the grounds that humans will dominate via space exploration seems to require further information about the relative probabilities of the various situations, multiplied by the relative populations in these situations.

I haven’t attempted any particular expected value calculation, but it doesn’t seem to me like you can conclude immediately that simply because human welfare has the potential to be infinite or extravagantly large, the potential value of working on human welfare is definitely higher. The latter claim instead requires the additional assertion that animal welfare will not also be incredibly or infinitely large, which as I describe above requires further evidence. And you would also have to account in that expected value calculation for the fact that wild animal welfare seems vastly more important currently, and will be for the near future (which, given that your objection is focused on the future, you might already believe?).
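The structure of the expected value comparison I have in mind can be made concrete with a toy calculation. Every probability and population figure below is invented purely for illustration; the point is only that the comparison requires P(scenario) × welfare-at-stake summed over scenarios, not just the size of the largest possible population:

```python
# Toy expected-value comparison across hypothetical future scenarios.
# All numbers are made up for illustration - they are not claims about
# the actual probabilities or population sizes of any future.
scenarios = [
    # (name, probability, human-ish welfare units, animal welfare units)
    ("space-faring digital minds", 0.01, 1e17, 1e14),
    ("Earth-bound for millennia",  0.64, 1e12, 1e16),
    ("early extinction",           0.35, 0.0,  1e13),
]

ev_human = sum(p * h for _, p, h, _ in scenarios)
ev_animal = sum(p * a for _, p, _, a in scenarios)
print(f"EV of human-focused welfare:  {ev_human:.3g}")
print(f"EV of animal-focused welfare: {ev_animal:.3g}")
```

With these made-up numbers, expected animal welfare comes out several times larger despite the vastly larger peak human population in the first scenario - a small probability times a huge population does not automatically dominate a large probability times a merely very large one.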

If this is your primary objection, at best it seems like it ought to marginally lower your credence that animal welfare will continue to dominate. It strikes me as an extremely narrow possibility among many, many possible worlds where animals continue to dominate welfare considerations, and therefore, in expectation, we should still think animal welfare will dominate into the future. I'd be interested in your specific credence that the situation you outlined will happen.

Comment by abrahamrowe on Optimal population density: trading off the quality and quantity of welfare · 2020-01-23T16:34:31.239Z · score: 2 (2 votes) · EA · GW

This is really amazing, and it'll be interesting to see it applied to wild animal welfare work in the future. I also imagine that there are a lot of applications for farmed animal welfare improvements, etc. Thanks for sharing!

Comment by abrahamrowe on We're Rethink Priorities. AMA. · 2019-12-16T17:58:54.173Z · score: 5 (5 votes) · EA · GW

Thanks for the response! I guess I personally am interested in it, because I think it would lend credibility to WAW outreach projects to be able to cite it.

Comment by abrahamrowe on We're Rethink Priorities. AMA. · 2019-12-16T17:57:05.260Z · score: 2 (2 votes) · EA · GW

That's great to hear! I guess I think it would be great for norms of caring about invertebrates to be spread in the animal advocacy space, so that seems good.

Comment by abrahamrowe on We're Rethink Priorities. AMA. · 2019-12-16T17:56:11.972Z · score: 1 (1 votes) · EA · GW

I don't actually know if engagement is important (maybe it is an indicator of either your thoroughness, since there are few follow-ups, or just that you all are the experts, so most people on the forum aren't going to weigh in). Sharing with funders makes a lot of sense. Thanks!

Comment by abrahamrowe on We're Rethink Priorities. AMA. · 2019-12-16T17:53:56.401Z · score: 5 (3 votes) · EA · GW

I guess my inclination toward in-house teams would be that an organization would be more likely to respond / change direction on the basis of findings from in-house teams. But I'm unsure that there is much evidence that organizations have changed directions from research done by anyone, except perhaps in small ways. I also imagine being in-house would reduce barriers for data collection, etc., because there wouldn't be NDAs or privacy concerns that might govern inter-org interactions. I think you and I had previously had this issue, where I had done research that might have been relevant to your work, and couldn't share it due to an NDA.

Comment by abrahamrowe on Interaction Effect · 2019-12-16T16:57:20.111Z · score: 11 (7 votes) · EA · GW

I'm not particularly EA, but I think the gist of the argument is: you should work where you can make the most marginal impact, not necessarily in the job that is the highest impact overall. So if you're choosing a career for impact, you might be one of only a few thousand people thinking about things in EA terms. If you want to have a large impact, then you ought to look at things that are large in scope and neglected, etc.

If somehow the EA community coordinated all resources, or was much, much larger in size, the recommended careers would probably be different. In that case, obviously some people would need to be teachers, farmers, etc., and it would be important to encourage people capable of doing those things well to pursue those careers. But, given that there are relatively few people willing to change their careers for this sort of impact right now, the career recommendations that are made are in fields where a few people might have a larger impact.

This isn't a denial of interdependence. It's more of an implicit acknowledgement of the limits of the current size of the community.

Another factor is that many of the careers that EA careers depend upon are likely to be filled regardless. There are people who would like to be, or whose circumstances lead them to be, teachers, construction workers, farmers, truck drivers, etc. So while all those jobs probably have a (positive) impact, it's less urgent for someone who wants to have the greatest impact to pursue them as a career. While education might be important, I know that if I don't apply for a job at my local high school, another (even more) capable teacher probably will. Instead, on the margin, an EA might have a greater impact by pursuing something more neglected, or pursuing a career where they can earn money to donate to a charity that can hire more people for a neglected cause, etc.

The core idea is that because there is only a small community of people interested in having the greatest impact they can, then they should pursue careers that on the margin would be most likely to have the greatest impact. It doesn't necessarily mean that these careers are intrinsically or functionally "better" or higher ranked than others. They are prioritized by EA because few people are in EA, and fewer people are thinking about pursuing recommended careers.

Comment by abrahamrowe on We're Rethink Priorities. AMA. · 2019-12-12T20:07:24.909Z · score: 13 (7 votes) · EA · GW

I'd be interested in what organizations you're comparing against? I wonder if it is more that animal advocacy research is funding constrained compared to global poverty or x-risks, and that ends up negatively impacting groups that do research on animal advocacy and other topics.

Comment by abrahamrowe on We're Rethink Priorities. AMA. · 2019-12-12T18:59:01.766Z · score: 14 (11 votes) · EA · GW

How do you engage with the animal welfare advocacy groups who might act on your research? Or alternatively, how do you counteract any negatives from not being an advocacy organization, and not getting feedback directly (e.g. advocacy that responds to research because they are done in conjunction)?

When I worked in animal advocacy, my sense was that the research that EA research groups like ACE were doing was either irrelevant or badly ill-informed / inaccurate, primarily because the researchers didn't actually have much experience in the space. Or, it came only after the advocacy groups had already basically realized the same things and shifted priorities. I don't think this has really been relevant for the work you've done so far, since it hasn't been particularly prescriptive about particular strategies, but it seems like a greater risk as you do more farmed animal research. I've always been disappointed that the in-house research teams at animal groups are small, since they seem better positioned to do some of this work (though there are probably downsides to that too).

Edit for clarification: As an example, a lot of studies were done on pro-vegan leaflets. Many of those studies seemed badly designed, which was unfortunate. But organizations did leafleting for a while, realized there were more effective uses of resources, and then stopped leafleting (generally - obviously some still happens, especially to cultivate volunteers). It was only after this that evidence that leafleting was not very effective emerged in the research literature. While I'm glad that a post-mortem happened, it really didn't make a difference in charity behavior, since charities had mostly changed direction already.

The question is really just motivated by a thought experiment - if I could, instead of having all the money that's been spent on EA animal advocacy research historically, have that money go to direct advocacy (maybe corporate campaigns, for example), would I? And for me the answer is almost certainly yes, with maybe one or two exceptions.

Relatedly, on wild animal welfare, I feel very confident that if we could eliminate basically all research that happened before ~3 months ago in exchange for the information we have now about how to approach academic field building, it would be worthwhile (recognizing that a big chunk of that research is stuff I spent time on).

So both of these suggest to me that I should generally have a prior favoring direct advocacy (or at least, really promising direct advocacy) over EA research moving forward, as much as that goes against my own inclinations or desires (I like research more). Or at least, a positive case has to be made for research. And, given that almost all of this research has been done by groups not doing advocacy (with exceptions), it suggests to me that research should primarily be done by groups doing advocacy. Though as a note, obviously a lot of academic field building advocacy on wild animal welfare issues can be done by publishing research within the conservation space, etc.

Comment by abrahamrowe on We're Rethink Priorities. AMA. · 2019-12-12T18:51:52.553Z · score: 17 (13 votes) · EA · GW

Given that some of your staff have academic backgrounds, do you all have plans to refine and pursue peer-reviewed publication for your invertebrate welfare related work (though I don't know if it would be well received)? It seems like there could be a lot of value in the pieces being seen by academic audiences, at least from a wild animal welfare academic field building perspective. If not, why not?

Comment by abrahamrowe on We're Rethink Priorities. AMA. · 2019-12-12T18:51:34.652Z · score: 13 (8 votes) · EA · GW

Do you have any sense of whether or not the invertebrate welfare pieces have had an impact on organizations, decision makers, etc? It seems like it would be reasonable to expect them to translate to more donations for groups working on invertebrate issues, since I'd guess the evidence for valenced experience was stronger than many would have expected. Though in the EA space that would basically mean donating to you all or Wild Animal Initiative. Those pieces were great, and I hope they lead to more interest in invertebrates in the animal welfare community.

Comment by abrahamrowe on We're Rethink Priorities. AMA. · 2019-12-12T18:51:15.070Z · score: 14 (10 votes) · EA · GW

What's your theory of change / plan for getting eyes on your work in general? I've been really enjoying reading your pieces, but it seems like most get voted on a lot, but don't generate much discussion unfortunately (at least on the animal ones). I’d be really interested in seeing discussion about your work / hearing and reading feedback on it so I have more context on it.

Comment by abrahamrowe on Opinion: Estimating Invertebrate Sentience · 2019-11-09T02:45:59.254Z · score: 1 (1 votes) · EA · GW

Thanks Jason!

That makes sense - I understood that you all were expressing credences. I think my comment wasn't written very clearly. I'm interested in what process you all took to reach these credences, and what you think the appropriate use of them would be. Would these numbers be the numbers you'd use in a cost-effectiveness analysis, etc.? Or a starting point to decide how to weigh further evidence, etc? I know credences are a bit fuzzy as a general concept, but I guess I'd love thoughts on the appropriate use of these numbers (outside your response that we shouldn't use them or should only use them very carefully).

Comment by abrahamrowe on Would a reduction in the number of owned cats outdoors in Canada and the US increase animal welfare? · 2019-11-08T21:02:14.615Z · score: 2 (2 votes) · EA · GW

Thanks for the detailed response. I think I disagree in a sort of principled way with particular kinds of approaches to downstream effects, in part because I think it could just turn into an endless game of trying to figure out how things could turn out poorly, as opposed to a model where we address both rodenticides and cat predation (though I recognize I am stubbornly resisting you all trying to do prioritization, which might not be a good idea given the name of your organization).

Regardless, I'm drafting a new intro section for my cost-effectiveness updates linking to your updated numbers. Thanks again for doing the analysis!

Comment by abrahamrowe on Opinion: Estimating Invertebrate Sentience · 2019-11-08T20:55:41.927Z · score: 8 (4 votes) · EA · GW

"However, perhaps my largest surprise wasn’t an update toward or against a particular type of animal, rather it was based on the extent of conditioned learning behavior that is more or less exhibited by all taxa we considered, including single-celled organisms and animal bodies detached from brain communication, including the lower body of a mouse with a severed spine. While one could take this as weak evidence of widespread sentience, this updated me toward thinking many of these behaviors aren’t very impressive and they were thus largely disregarded in contemplating the positive case for sentience. "

Marcus, is there any chance you could elaborate on why you leaned one way on this vs the other? I don't have a clear sense of what I should take away from that, so I'd be curious what your reasoning was.


I'd also be interested in all of your thoughts on what exactly a percentage probability of valenced experience (or whatever the morally relevant mind-stuff should be called) is. Obviously, such probabilities aren't direct claims about the fact of whether or not these organisms have valenced experience (which, unless the world is very strange, should be 1 or 0 for all things).

It seems more like they are statements about how you'd make a bet, or something like "confidence in the approach * results from the approach", or something else about the approach and prioritization. I'm curious how you were defining these probabilities to yourselves, and how definitions would impact their usefulness in cost-effectiveness analyses? i.e. if we were doing a cost-effectiveness estimate, and treating these as confidence * results, I might weight my confidence in this method higher than using my intuitions, but still include other approaches like intuition in my estimate because it theoretically gives me a more accurate model of my current knowledge. But, with a different definition I might just use these numbers.

Comment by abrahamrowe on Would a reduction in the number of owned cats outdoors in Canada and the US increase animal welfare? · 2019-10-26T17:48:49.828Z · score: 13 (9 votes) · EA · GW

I'll ask whoever runs the site to update my piece cited in this with a note that the cost-effectiveness estimates might be based on bad estimates of cat impact.

Additionally, my cost-effectiveness estimates were only for the US - it is probably most cost-effective to work on cat predation in countries like the UK where a much higher percentage of outdoor cats are owned.

I find the comments about rodents/birds interesting, but mostly irrelevant to the discussion of cat predation, and I find framing the reduction of cat predation and the improvement of rodent welfare as competing aims very strange. I'm going to refer to rodents below, but this could apply to any animals killed by cats. It doesn't seem obvious that the relevant causal chain is that stopping cat predation causes painful rodent deaths; instead, we should consider both rodenticides causing painful rodent deaths and cat predation causing painful rodent deaths to be important issues.

For there to be a coherent argument not to address cat predation, you would need to demonstrate not only that rodenticides are more painful than death via cats, but also have a picture of the average rodent's life after the moment it might have been killed by a cat. Since any rodent killed by a cat would, by definition, have lived longer had it been killed by a rodenticide instead, the rodent would accumulate further positive and negative experiences during that extra life before being killed. Even if rodenticide deaths are twice as painful, it seems reasonable to expect a prolonged life to often be good, and to outweigh that.

Regardless, this doesn't seem like an argument against addressing cat predation - it's an argument that the most effective way to address rodent suffering might be to address both cat predation and painful rodenticides.

A human analogy might be: we shouldn't address malaria because someone dying from malaria, if saved, might die from another more painful disease later. I think what follows from that is that we should address malaria and the other painful disease. Not that we should let malaria kill the person / say that it is unclear if malaria is good or bad. Addressing malaria is clearly good, as is reducing cat predation. There might be unintended effects of both that also need to be addressed. But it doesn't mean that addressing those things has an unclear sign.

My point is, cat predation is clearly bad for rodents. Rodenticides are also clearly bad. We should probably address both things, and be aware of the effect of only addressing one, but not conclude that we shouldn't address either. Broadly, this seems to apply to wild animal welfare issues in general - the downstream effects are really, really complicated, but by monitoring interventions for unanticipated effects, and addressing those as they come up, we probably make more progress than by just pointing out the complexity.

I guess the question this raises is: how far down the chain should we care about downstream effects of interventions, as opposed to just monitoring interventions and addressing effects as they arise?

Comment by abrahamrowe on List of ways in which cost-effectiveness estimates can be misleading · 2019-08-21T17:51:57.258Z · score: 7 (6 votes) · EA · GW

Another issue is if multiple charities are working on the same issue, and cooperating, there might be times when a particular charity actively chooses to take less cost-effective actions in order to improve movement wide cost-effectiveness. This happens frequently with the animal welfare corporate campaigns. For example:

Charity A has 100 good volunteers in City A, where Company A is headquartered. To run a campaign against them would cost Charity A $1000, and Company A uses 10M chickens a year. Or, they could run a campaign against Company B in a different city where they have fewer volunteers for $1500.

Charity B has 5 good volunteers in City A, but thinks they could secure a commitment from Company B in City B, where they have more volunteers, for $1000. Company B uses 1M chickens per year. Or, by spending more money, they could secure a commitment from Company A for $1500.

Charities A and B are coordinating, and agree that Companies A and B committing will put pressure on a major target (Company C), and want to figure out how to effectively campaign.

They consider three strategies (note - this isn't how the cost-effectiveness would work for commitments since they impact chickens for longer than a year, etc, but for simplicity's sake):

Strategy 1: They both campaign against both targets, at half the cost it would be for them to campaign on their own, and a charity evaluator views the victories as split evenly between them.

Charity A cost-effectiveness: (5M + 0.5M chickens) / ($500 + $750) = 4,400 chickens / dollar

Charity B is also 4,400 chickens / dollar.

$2500 total spent across all charities

Strategy 2: Charity A targets Company A, and Charity B targets Company B

Charity A: 10,000 chickens / dollar

Charity B: 1,000 chickens / dollar

$2000 total spent across all charities

Strategy 3: Charity A targets Company B, Charity B targets Company A

Charity A: 667 chickens / dollar

Charity B: 6,667 chickens / dollar

$3,000 total spent across all charities
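The arithmetic above can be sketched in Python. The campaign costs and chicken counts below are the hypothetical figures from the example, not real data:

```python
# Hypothetical figures from the example above (not real campaign data).
CAMPAIGN_COST = {
    # (charity, company targeted): cost in dollars
    ("A", "A"): 1000,  # Charity A campaigning in its home city
    ("A", "B"): 1500,
    ("B", "A"): 1500,
    ("B", "B"): 1000,  # Charity B campaigning where it has volunteers
}
CHICKENS_PER_YEAR = {"A": 10_000_000, "B": 1_000_000}

def cost_effectiveness(assignments):
    """assignments: list of (charity, company, share) tuples, where
    share is the fraction of the cost and credit that charity takes."""
    spent, credited = {}, {}
    for charity, company, share in assignments:
        spent[charity] = spent.get(charity, 0) + CAMPAIGN_COST[(charity, company)] * share
        credited[charity] = credited.get(charity, 0) + CHICKENS_PER_YEAR[company] * share
    per_dollar = {c: credited[c] / spent[c] for c in spent}
    return per_dollar, sum(spent.values())

# Strategy 1: both charities split both campaigns evenly.
s1, total1 = cost_effectiveness([
    ("A", "A", 0.5), ("A", "B", 0.5),
    ("B", "A", 0.5), ("B", "B", 0.5),
])
# Strategy 2: each charity plays to its strength.
s2, total2 = cost_effectiveness([("A", "A", 1.0), ("B", "B", 1.0)])
# Strategy 3: the reverse assignment.
s3, total3 = cost_effectiveness([("A", "B", 1.0), ("B", "A", 1.0)])
```

Running this reproduces the figures above: Strategy 2 minimizes total spending ($2,000) even though it makes Charity B's measured cost-effectiveness (1,000 chickens/dollar) look far worse than under Strategy 3 (~6,667 chickens/dollar).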

These charities know that a charity evaluator is going to be looking at them, and trying to make a recommendation between the two based on cost-effectiveness. Clearly, the charities should choose Strategy 2, because the least money will be spent overall (and both charities will spend less for the same outcome). But if the charity evaluator is fairly influential, Charity B might push hard for less ideal Strategies 1 or 3, because those make its cost-effectiveness look much better. Strategy 2 is clearly the right choice for Charity B to make, but if they do, an evaluation of their cost-effectiveness will look much worse.

I guess a simple way of putting this is - if multiple charities are working on the same issue, and have different strengths relevant at different times, it seems likely that often they will make decisions that might look bad for their own cost-effectiveness ratings, but were the best thing to do / right decision to make.

Also, on the matching funds note - I personally think it would be better to assume matching funds are true matches rather than not. I've fundraised for maybe 5 nonprofits, and out of probably 20+ matching campaigns in that period, maybe 2 were not true matches. Additionally, nonprofits will often ask major donors to match funds as a way to encourage the major donor to give more (e.g. "you could give $20k like you planned, or you could help us run our $60k year-end fundraiser by matching $30k"). So I'd guess that for most matching campaigns, the fact that it is a matching campaign means there will be some multiplier on your donation, even if it is small. Maybe it is still misleading then? But overall it's a practice that makes sense for nonprofits to use.

Comment by abrahamrowe on 35-150 billion fish are raised in captivity to be released into the wild every year · 2019-04-03T14:10:57.247Z · score: 1 (1 votes) · EA · GW

Thanks! That makes sense.

Comment by abrahamrowe on 35-150 billion fish are raised in captivity to be released into the wild every year · 2019-04-02T19:05:11.144Z · score: 4 (3 votes) · EA · GW

That makes sense - thanks for sharing these. I'm honestly surprised the icefish count is so low, but that's just because it seems popular as a dish and requires a lot of fish. One other theory - is there much information on the fishmeal market? It seems possible that the statistics (I didn't look too far into methods so this might be wrong) are representing fish sold (or leaving facilities) and that hatcheries are processing fish into fishmeal on site and using it to feed fry and fingerlings? Just a thought about other ways lots of fish might be produced but not represented in counts - especially if the methods for counting are different.

Comment by abrahamrowe on 35-150 billion fish are raised in captivity to be released into the wild every year · 2019-04-02T15:20:26.946Z · score: 14 (5 votes) · EA · GW

Hey Saulius!

This is awesome! I have a few questions:

-Can we make any inferences about what percent of wild-caught fish were originally stocked in specific areas? Or has any research been done (via tagging, genetic markers, species, etc.) to try to estimate that? I guess a question would be: does reducing the number of stocked fish in commercial fisheries have an impact on the commercial fishing industry that we'd expect to help animals in other ways (e.g., if fewer stocked fish made commercial fishing less commercially viable, that might in turn reduce the fishing of truly wild fish)? While the impact of that on wild animals is unclear, it seems like a consideration.

-On the large numbers of juvenile fish that mysteriously don't seem to be making it to adulthood - is it possible this is a species-specific thing? I know in China, there is a dish called 银鱼 (silverfish) that is just dozens or hundreds of fry in a bowl (also called whitebait?). It looks like they are called icefish in English. I wonder if the stats are somehow not accounting for fish being eaten at a younger age, or for stocking specifically for whitebait dishes? Also, it looks like whitebait is eaten in a lot of places.

Another possibility is that tons of young fish are being used for fishmeal or stocked to feed other fish. They might then not make it into stats about fish produced.

Regardless, thanks for doing all these pieces - they've all been really informative and needed for way too long!

Comment by abrahamrowe on A Research Agenda for Establishing Welfare Biology · 2019-03-18T03:16:17.209Z · score: 2 (2 votes) · EA · GW

I totally agree - they also often help identify where more research is needed (like seeing which numbers are the hardest to lock down).

Comment by abrahamrowe on A Research Agenda for Establishing Welfare Biology · 2019-03-17T19:16:05.602Z · score: 1 (1 votes) · EA · GW


Comment by abrahamrowe on A Research Agenda for Establishing Welfare Biology · 2019-03-17T19:15:52.416Z · score: 3 (3 votes) · EA · GW

My personal opinion is that it is pretty much impossible to make claims at this point about the sign of many animals’ lives without significantly more research. I think the arguments regarding welfare and life history strategy are compelling prima facie, but that might not be enough evidence for action immediately, and instead indicates it is a high priority area for study (which is why we have so much life history work planned this year). Models like the ones you linked here are interesting and provide some insight, but also have huge assumptions built in that significantly alter the results depending on the author's views on some critical issue (scoring relative utility of subjective experiences, weighting based on the square root of neurons, and a sentience multiplier), and also don't account for variations in season, climate etc., that would probably alter those numbers massively as well.

My personal guess is that we are quite a ways off from being able to do this comprehensively (at least a few years) for any particular arthropod population, not including discounts that might be made based on number of neurons or whatever features we think might be important. And we are probably much further out from being able to state with certainty which of those features are important, and how much we should discount on the basis of them (if at all).

Either way, academic buy-in is going to be crucial, which is why we are so focused on academic outreach, and doing research that will help us understand what early academic work we should prioritize.

Thanks for your research! It was interesting to see!

Comment by abrahamrowe on Why we look at the limiting factor instead of the problem scale · 2019-02-02T19:59:48.503Z · score: 6 (3 votes) · EA · GW

That makes a lot of sense. Maybe one way of framing scale + cost-effectiveness could be "how long will a particular cost-effectiveness be applicable in the real world?", and then two ways of describing that cost-effectiveness are either incorporating costs to raise these limits or not.

In either case, I definitely agree that these should be considered. One other thought - it seems like, in certain ways, a donation to a charity will to some extent account for its efforts to raise limits. I don't know enough about how ACE does cost-effectiveness analysis (and obviously the degree to which this information is incorporated would depend on that), but I could imagine that if you make a statement like "a donation of $100 to The Humane League will help reduce the suffering of X animals", then in a complete assessment of that donation, some of that funding would be going to their development department (raising the amount of funding available), and some might be going to volunteer cultivation (if volunteer capacity is another limiting factor).

So the issue is more that while the outcome per dollar we are looking at is based on historical performance, the true outcome per dollar is actually worse, because some of that funding was going towards raising limits and would really need to be attributed to animals not yet helped, if that makes sense.

Either way, I'm really interested in this - since reading it, I've been thinking of how I can incorporate this kind of thinking about cost-effectiveness into my organization - it seems tricky, but definitely worth doing a lot more of. Thanks for posting it!

Comment by abrahamrowe on Why we look at the limiting factor instead of the problem scale · 2019-01-28T22:31:49.742Z · score: 13 (10 votes) · EA · GW

This is cool! There are definitely limiting factors on working on an issue, but that doesn't mean you shouldn't focus on that cause - rather, part of the cost-effectiveness calculation will be how much it costs to raise those limits. In the 1970s and '80s, the talent pool for working on farmed animal advocacy, for example, was much smaller. But if we hadn't worked on it, and built up a better talent pool, brought in more donors, etc., we'd still be in that position today, and wouldn't have the capacity we have now. The scale of a problem is important because it is true independent of the state of the movement. Limiting factors are not: Wild Animal Initiative (where I work), for example, is pursuing academic outreach on wild animal welfare because it will help us address these limits in the long run, by growing the talent pool, etc. AI alignment research probably had a talent pool of basically zero only a few years ago. Does that mean no one should have started working on it at that point?

Regardless, you can just update your cost-effectiveness estimates by factoring in the costs to raise these limits.

E.g. it currently costs X dollars to help Y wild animals, up to 1000*Y, at which point some limiting factor stops us from helping more wild animals.

We can increase that limit at a cost of Z per Y more animals (perhaps through advocacy to bring in new talent or donors, or to improve the logistical limit).

The real cost-effectiveness is not simply X/Y dollars per animal. Letting N be the total number of animals helped, it is:

X/Y dollars per animal, for N ≤ 1000·Y

(X+Z)/Y dollars per animal, for N > 1000·Y
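A minimal sketch of this piecewise model in Python; all the inputs are hypothetical placeholders, not real estimates:

```python
def cost_per_animal(n_helped, x, y, z, limit):
    """Piecewise marginal cost per animal, as described above.

    x, y  : it currently costs x dollars to help y animals
    z     : extra cost per y more animals once the limit binds
    limit : number of animals we can help before the limiting factor kicks in
    All inputs here are hypothetical placeholders.
    """
    if n_helped <= limit:
        return x / y        # below the limit: the current rate applies
    return (x + z) / y      # above it: we also pay to raise the limit
```

So an intervention that looks cheap below the limit simply gets more expensive, not impossible, above it - which is the point about limits being mutable rather than hard barriers.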

Given this, it is still possible that working on wild animals is really cost-effective. Look at the sheer number of invertebrates negatively impacted by insecticides, for example. If we can develop a tractable intervention in ~2 years to help them, it is possible that in that time, we can spend a little more to improve some of these limits as well, and over the whole period, have a really cost-effective intervention overall.

Similarly, in your surgery example, when you hit a limit on surgeries you can provide due to the number of surgeons, you can pay more to train more surgeons (or address the limit however you're able to). Obviously this lowers the cost-effectiveness, but for many interventions, it still might be a good option at the higher cost.

I guess my thought is: the problem scale definitely is super important. Limiting factors matter because they change the cost-effectiveness. But since they are mutable, they shouldn't be viewed as hard barriers to working on an issue. Regardless of what the limits are, or what the scale is, what we actually should be looking at is the cost-effectiveness of improving the issue; limiting factors are a consideration within that, and so is scale, since scale bounds how long a given cost-effectiveness applies. For example, including the costs to increase limits, say in the future we can help farmed animals at $Z per animal and wild animals at $Z/3 per animal - we might then want to help wild animals until we run out of wild animals to help, and then focus on the next best thing, which might be farmed animals (obviously an oversimplified example).

Comment by abrahamrowe on Announcing Wild Animal Initiative · 2019-01-26T14:59:17.276Z · score: 4 (4 votes) · EA · GW

To clarify one thing - when we refer to academic outreach, we mean outreach to academics in the hard sciences, specifically working on building welfare biology as an academic field. UF and WASR both had at least one staff member dedicated to this throughout the last year. UF has a writeup on their efforts, and WASR's approach included running a request for grant proposals for academics.

I don't think there will be significant overlap - we are trying a new approach targeting early career academics, and offering them funding to work on the outreach themselves instead of us. From what I understand of AE's program this is pretty different. We are also primarily operating in the US, while AE has less of a presence here, and seems to me to have generally worked with European academics. Regardless, we plan on working to coordinate with them to the greatest extent possible to limit overlap.