Posts

The scale of direct human impact on invertebrates 2020-09-02T13:22:47.643Z
Insects raised for food and feed — global scale, practices, and policy 2020-06-29T13:57:31.653Z
Notes on how a recession might impact giving and EA 2020-03-13T18:17:24.865Z
Global cochineal production: scale, welfare concerns, and potential interventions 2020-02-11T21:33:20.225Z
Should Longtermists Mostly Think About Animals? 2020-02-03T14:40:23.242Z
Uncertainty and Wild Animal Welfare 2019-07-19T13:33:51.533Z
A Research Agenda for Establishing Welfare Biology 2019-03-15T18:24:51.099Z
Announcing Wild Animal Initiative 2019-01-25T17:23:30.758Z

Comments

Comment by abrahamrowe on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T22:45:46.282Z · EA · GW

One thing that is easy to forget is that we are already dramatically intervening in natural ecosystems without paying attention to the impact on animals. E.g. any city, road, mine, etc. is a pretty massive intervention. Or just using any conventionally grown foods probably impacts tons of insects via pesticides. Or contributing to climate change. At a minimum, ensuring those things are done in a kinder way for animals seems like a goal that anyone could be on board with (assuming it is an effective use of charitable money, etc.).

I do also think that most things like the ones you describe are already broadly done without animal welfare in mind. For example, we could probably come up with less harmful deer population management strategies than hunting, and we've already attempted to wipe out species (e.g. screwworms), and will probably attempt mosquitos at some point in the future.

Comment by abrahamrowe on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T22:39:21.837Z · EA · GW

I think there were a few other philosophy papers that were sort of EA-aligned, but yeah, basically just those 2. So maybe it was the default by default.

Comment by abrahamrowe on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T22:37:48.547Z · EA · GW

Is there an accessible summary anywhere of the research underlying this shift in viewpoint?

I don't think there has been a summary, but that sounds like a good thing to write. But to quickly summarize things that are probably most informing this:

  1. I'm less confident in negative utilitarianism. I was never that confident in it, and feel much less so now. I don't think this is due to novel research, but just my views changing upon reflecting on my own life. I still broadly probably have an asymmetric view of welfare, but am more sympathetic to weighing positive experiences to some degree (or maybe have just become a moral particularist). I also think that if I am less confident in my ethics (which the fact that they change over time indicates I ought to be), then taking reversible actions that seem plausibly good and are robust under a variety of moral frameworks seems like a better approach.
  2. I feel a lot less confident that I know how long most animals' subjective experiences last, in part due to research like Jason Schukraft's on the subjective experience of time. I think the best argument that most animal lives are net-negative is something like "most animals die really young before they accumulate any positive welfare, so the painfulness of their death outweighs basically everything else." This seems less true if their experiences are subjectively longer than they appear. I also have realized that I have a possibly bad intuition that 30 years of good experiences + 10 years of suffering is better than 3 minutes of good experiences and 1 minute of suffering, which partially informs this.
  3. I think learning more about specific animals has made me a lot less confident that we can broadly say things like "r-selectors mostly have bad lives." 

Would you say this is a general shift in opinion in the WAW field as a whole?

When I started working in wild animal welfare, basically no one with a bio/ecology background worked in the space. Now many do. Probably many of those people accurately believe that most things we wrote / argued historically were dramatic oversimplifications (because they definitely were). I'm not sure if opinion is shifting, but there is a lot more actual expertise now, and I'd guess that many of those new experts have more accurate views of animals' lives, which I believe ought to incline one to be at least a bit skeptical of some claims made early in the space.

Comment by abrahamrowe on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T21:51:00.529Z · EA · GW

Animal Charity Evaluators is the 6th, which did some surveying and research work in the space. I guess that counts. My phrasing was ambiguous: there have been 6; I co-founded 2 (UF and WAI) and worked at another (Rethink Priorities).

Comment by abrahamrowe on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T12:41:07.723Z · EA · GW

I think that Toward Welfare Biology was, until maybe 2016 or so, the default thing people pointed to (along with Brian Tomasik's website), as the introductory text to wild animal welfare. I saw it referenced a lot, especially when I started working in the space.

Comment by abrahamrowe on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T12:37:11.850Z · EA · GW

I co-founded 2 of the 6 organizations that have worked on wild animal welfare with an EA lens, and have worked at a third. I've been writing or thinking about these things since around 2014. Here are a handful of thoughts related to this:

  • I think almost none of the people working in the space professionally are full on negative utilitarians. Probably many are very focused on reducing suffering (myself included), but pretty much everyone really likes animals - that's why they work on making their lives better!
  • In 2018, I helped organize the first wild animal welfare summit for the space. We unanimously agreed that this perspective was an unproductive one, and I don't think any group working in the space today (Wild Animal Initiative, Animal Ethics, Rethink Priorities) holds a view that is this strong. So I think in general, the space has been moving away from anything like what you're discussing.
  • Speaking from personal experience, I was much more sympathetic to this sort of view when I first got involved. Wild animal suffering is really overwhelming, especially if you care about animals. For me, it was extremely sad to learn how horrible lives are for many animals (especially those who die young). But, the research I've done and read has both made me a lot less sympathetic to a totalizing view of wild animals of this sort (e.g. I think many more wild animals than I previously thought live good lives), and less sympathetic to taking such a radical action. I think that this problem seems really hard at first, so it's easy to point to an intervention that seems to provide conclusive results. But, research has generally made me think that we are wrong both about how bad many (though definitely not most) animal lives are, and about how tractable these problems are. I think there are much more promising avenues for reducing wild animal suffering available.
  • People on the internet talk about reducing populations as being the project of wild animal welfare. My impression is that most or all of those folks don't actually work on wild animal welfare. And the groups working in the space aren't really engaged in the online conversation, probably in part because of disagreement with this view.
  • I hope that there are no negative utilitarians who hold 0 doubts about their ethics. I guess if I were a full negative utilitarian, or something, I probably wouldn't be 100% confident in that belief. And given the irreversibility of the intervention you describe, if I wasn't 100% confident, I'd be really hesitant to do anything like that. Instead, improving welfare is acceptable under a variety of frameworks, including negative utilitarianism, so it seems like we'd probably be inclined to just improve animals' lives.

Overall, I think this concern is pretty unwarranted, though understandable given the online discussion. Everyone I know who works on wild animal welfare cares about animals a lot, and the space has been burdened by these concerns despite them not really referring to views held by folks who lead the space.

Also, one note:

[they will] conclude that the majority of animals on Earth would be better off dead

I think it's pretty important to differentiate between people thinking animals would be better off dead (a view held by no one I know), and thinking that some animals who will live will have better lives if we reduce juvenile mortality via reduced fertility, and through the latter, that we would prevent a lot of very bad, extremely short lives. We already try to non-lethally reduce populations of many wild animals via fertility control (e.g. mosquitos, screwworms, horses, cats). These projects are mainstream (outside of EA), widely accepted as good, and for some of them, done for the explicit benefit of the animals who are impacted. 

Comment by abrahamrowe on EA's abstract moral epistemology · 2020-10-22T13:24:28.458Z · EA · GW

I think it's plausible that some major funders stopped funding some groups (like farm sanctuaries) in favor of ACE top charities, for example, but I doubt that it has happened with large numbers of smaller donors. But, it's hard to know how much EA is responsible for this. For example, when GFI was founded, I think a lot of people found it to be really compelling, independent of it being promising from an EA lens. While it's a fairly EA-aligned organization, in a world without EA, something like it probably would have been founded anyway, and because it was compelling, lots of donors might have switched from whatever they were donating to before to donating to GFI. My impression is also that a lot of funding that has left charities is going into investing in clean / plant-based meat companies. I also expect that would have happened had EA not existed.

Comment by abrahamrowe on EA's abstract moral epistemology · 2020-10-21T13:13:40.702Z · EA · GW

I volunteered but didn't work in the animal advocacy space prior to EA (starting in maybe 2012 or so), but have worked at EA-aligned animal organizations, and been on the board of non-EA aligned (but I think very effective) animal organizations in recent years. Probably someone who worked more in the space prior to ~2014 or 2015 could speak more to what changed in animal advocacy from EA showing up.

The relevant quote:

The animal policy summit I attended in February permitted time for casual conversation among a variety of activists. These included sanctuary managers, directors of non-profits dedicated to ending factory farming, vegan educators, directors of veganism-oriented, anti-racist public health and food access programs, etc. It also included some academics. As some of the activists were talking, they got on to the topic of how charitable giving on EA’s principles had either deprived them of significant funding, or, through the threat of the loss of funding, pushed them to pursue programs at variance with their missions. There was general agreement that EA was having a damaging influence on animal advocacy.

I think that EA has definitely had some negative impact on animal advocacy, but overall has been very good for the space.

The Good

There is definitely way more funding in the space due to EA, and not less - OpenPhil makes up a massive percentage of overall animal welfare donations, and gives a large amount to groups who aren't purely dedicated to corporate welfare campaigns (though the OpenPhil gift itself might be restricted to welfare campaigns). Mercy For Animals, Animal Equality, etc., receive large gifts from OpenPhil and do vegan education / work to end factory farming, and not just reform it. ACE has probably brought in other EAs who would not have otherwise donated to animal welfare work (I'd guess at least a few million dollars a year). 

I think it is plausible that over the last few years, EA-aligned donors have stopped donating to some non-EA aligned organizations. Animal advocacy charities are generally very top-heavy — a huge percentage of donations are coming from a few people. If a couple of those people change where they are donating, it might significantly impact a charity, especially a smaller one. But, overall I'd guess that this isn't for purely EA reasons — lots of large donors in the space are investing in plant-based meat companies, for example, and might have chosen to do that independently of EA.

Also, EA has really opened up what I believe are the most promising avenues for future animal advocacy - addressing wild animal welfare (in a species-neutral way) and addressing invertebrate welfare. I think both areas would basically be impossible to fund in the short-term if EA funding wasn't available.

The Bad

I think the compelling critique of how EA has negatively impacted animal advocacy is something similar to the institutional critique the author presents. For example, at least early on, the focus on corporate campaigns meant that activities like community building were relatively neglected. I feel uncertain about the long-term impact of this, but I'd wager that most EAA organizations in the US, for example, have a lot more trouble getting volunteers to events than they did maybe 7-10 years ago or so. I think it's plausible that there are similar programmatic shifts away from activities that didn't have obvious impact that will harm the effectiveness of organizations down the line. Also, as the author says, this sort of critique could be viewed as an internal critique of activities, as opposed to a critique of EA as a whole.

There are probably some highly effective animal advocacy organizations totally neglected by EA (at least compared to ACE top charities). I also think that a GiveWell-style apples-to-apples comparison of different charities doing similar and related activities doesn't necessarily make sense for, say, organizations doing corporate campaigns, since the organizations are highly coordinated. But again, this seems like an internal critique.

I see ending factory farming / vegan advocacy as likely deeply aligned with EA. I think that the animal advocacy space really struggled to make progress on these issues over the past few decades, but has made more progress in the last 5 years. I don't know if this is due to plant-based meats becoming more popular, EA showing up, or something else, but broadly, we're doing better now than we were before, I think, at helping animals.

The "remark on institutional culture" is a pretty good critique of EA, though I don't know what to conclude from it. But, if the essay is focused on EAA specifically, I think that comment is a lot less relevant, as I'd guess as a whole, EAA is much more open to social justice / non-EA ethics, etc. than some other communities in EA.

Overall, most of this critique just seems to be that the author disagrees with many people in EA about ethics and metaethics.

Comment by abrahamrowe on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-16T14:32:20.355Z · EA · GW

I really appreciated this post! Thanks for writing it. I also really appreciated the original post and am a bit bummed it got buried. I also want to note that I find it odd that that post got downvoted (possibly for being explicitly partisan?) vs posts like this, which don't explicitly claim to be partisan / engaging in politics but I think are actually extremely political.

One thought, slightly unrelated to the question of whether or not there are good EA grounds for supporting / opposing political candidates (and I think it's highly likely that there are):


Effective altruism has long had a culture of shying away from explicit engagement in partisan politics

I think one really useful and accurate idea from the social justice community is the idea that you can't be neutral on many political issues. This seems like it ought to be even more compelling from a consequentialist perspective, as inaction on certain political opportunities (not exclusively, but definitely including removing Trump from office / Joe Biden winning the 2020 election in the US) might contribute directly to the worse outcome. The status quo is already a manifestation of political positions, so if you're not engaging in changing the status quo, you are taking whatever political positions built it.

For example, I live in Pennsylvania, and theoretically my vote might matter in the US presidential election this year. I can vote for Joe Biden, not vote (or vote for a third party), or vote for Donald Trump. I think it seems clear that the downside risk from Trump winning is very high compared to Joe Biden, and given that Trump will win if Joe Biden doesn't, there is almost as much risk in not voting. I think that, on (some kind of rough near-termist) consequentialist grounds, I pretty clearly should vote for Joe Biden, and probably should try to get as many people as possible to do the same.

I think there are probably lots of good reasons to think that dollars directed by the EA community shouldn't go to political candidates as a general rule of thumb (though there are probably really good giving opportunities at times), but broadly, as a community interested in ethics, it seems like we are inherently taking fairly strong political positions, but then not really willing to discuss them or make them explicit.

This was a bit of a ramble because my thoughts aren't well-formed, but I think it is pretty likely that attempting to be "neutral" on political issues is close to being as bad as taking the political position that will lead to the worse outcome, or something along those lines.

Comment by abrahamrowe on EricHerboso's Shortform · 2020-09-09T02:46:59.049Z · EA · GW

Thanks for sharing your thoughts! I guess part of the reason I feel more strongly that this kind of comment ought not to be upvoted is that EricHerboso seemed to bring up the Facebook thread not to open a debate on its content, but to point out that the behavior of some of the Facebook commenters harmed EAs or EA-adjacent organizations by putting an emotional toll on people, and that this kind of behavior is explicitly costing EA. That seems like a really important thing to discuss - regardless of what you think of the content of the thread, the content EricHerboso refers to in it negatively impacted the movement.

Dale's comment feels unnecessarily trollish, but also tries to turn the thread into a conversation about what I see as an unrelated topic (the rules of conduct in a random animal rights Facebook group). It vaguely tries to tie back to the post, but mostly this seems like a weak disguise for trolling EricHerboso.

Comment by abrahamrowe on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-09T02:31:56.248Z · EA · GW

Thanks for elaborating!

It seems like accusations of EA associations with white supremacy of various sorts come up often enough to be pretty concerning.

I also think the claims would be equally concerning if JoshYou had said "white supremacists" or "really racist people" instead of "white nationalists" in the original post, so I feel uncertain that Buck walking back the original post actually lessens the degree to which we ought to be concerned?

I also have other issues with the rest of the comment (namely being constantly worried about communists or nazis hiding everywhere, and generally bringing up nazi comparisons in these discussions, tends to reliably derail things and make it harder to discuss these things well, since there are few conversational moves as mindkilling as accusing the other side to be nazis or communists. It's not that there are never nazis or communists, but if you want to have a good conversation, it's better to avoid nazi or communist comparisons until you really have no other choice, or you can really really commit to handling the topic in an open-minded way.)

I didn't really see the Nazi comparisons (I guess saying white nationalist is sort of one, but I personally associate white nationalism, as a phrase, much more with individuals in the US than with Nazis, though that may be a bias from being American).

I guess broadly a trend I feel like I've seen lately is occasionally people writing about witnessing racism in the EA community, and having what seem like really genuine concerns, and then those basically not being discussed (at least on the EA Forum) or being framed as attempts to shut down conversation.

Comment by abrahamrowe on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-09T00:13:32.954Z · EA · GW

I just upvoted this comment as I strongly agree with it, but also, it had -1 karma with 2 votes on it when I did so. I think it would be extremely helpful for folks who disagree with this, or otherwise want to downvote it, to talk about why they disagree or downvoted it.

Comment by abrahamrowe on The scale of direct human impact on invertebrates · 2020-09-07T18:52:48.582Z · EA · GW

Could you say more about the epistemic status of agricultural pesticides as the largest item in this category, e.g. what chance that in 3 years you would say another item (maybe missing from this list) is larger?

(Probabilities are ballpark guesses, not rigorous)

Just in terms of insects impacted, because trying to estimate nematodes or other microscopic animals gets really tricky:

Today: >99% likely agricultural pesticides are the largest direct cause of insect mortality

3 years: >98% likely agricultural pesticides are the largest direct cause of insect mortality

20 years: >95% likely agricultural pesticides are the largest direct cause of insect mortality

The one possible category I could imagine overtaking agricultural pesticides is insects raised for animal feed. I think it is fairly unlikely farming insects for human food will grow substantially, but much more likely that insects raised for poultry feed will grow in number a lot, and even more likely that insects raised for fish feed will grow a lot. There is a lot of venture capital going into raising insects for animal feed right now, so it seems at least somewhat likely some of those projects will take off (though there are cost hurdles they haven't cleared yet compared to other animal feeds). Replacing fishmeal with insects seems even more likely because fishmeal is already a lot more expensive than grain feed.

Replacing ~40% of fishmeal with black soldier flies would put insect deaths from farming at the lower end of my current estimate for the scale of impact from agricultural pesticides. So I guess if estimates of agricultural pesticide impact are too high for an unknown reason (maybe insect populations collapse in the near future or something), there is a definite possibility, but not a big one, that insect farming could overtake pesticides in terms of deaths caused.

And what ratio do you see between agricultural pesticides and other issues you excluded from the category (like climate change and partially naturogenic outcomes)?

I am very uncertain about this. Brian Tomasik estimates the global terrestrial arthropod population to be 10^17 to 10^19 individuals, which would be 10 to 100,000 times the animals impacted by pesticides. Plausibly basically all of them could be impacted by climate change, but it's hard to know whether or not the sign of those impacts will be negative. I imagine that most of the impact from climate change, for example, would come from populations shifting - e.g. there suddenly are far fewer animals with survival strategy X, and a lot more insects with survival strategy Y, and that change leads to a lot more positive or negative welfare. That being said, I think we possibly should expect ecosystems changing rapidly to be on average bad for the animals who live through that change or are born after it, at least in the short term.
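As a rough consistency check, the scale of pesticide impact implied by those two bounds can be backed out directly (a quick sketch; the 10^14 to 10^16 result is just inferred from the ratios above, not an independent estimate):

```python
# Back out the implied number of pesticide-impacted insects from the bounds
# above: 1e17 to 1e19 terrestrial arthropods, stated to be 10x to 100,000x
# the number of animals impacted by agricultural pesticides.
arthropods_low, arthropods_high = 1e17, 1e19
ratio_low, ratio_high = 10, 100_000

# ratio = arthropods / pesticide-impacted, so the smallest ratio pairs the
# low arthropod count with the high impact figure, and vice versa:
pesticide_high = arthropods_low / ratio_low    # 1e17 / 10      = 1e16
pesticide_low = arthropods_high / ratio_high   # 1e19 / 100,000 = 1e14

print(f"implied pesticide impact: {pesticide_low:.0e} to {pesticide_high:.0e}")
```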

One other area I excluded that could be huge is nematodes and other microscopic invertebrates. There are obviously questions that ought to be raised about their likelihood of having valenced experiences, but as of writing I can purchase 250 million nematodes for biological control on Amazon for $135 USD. Nematodes are widely used in agriculture, and some agricultural pesticides possibly impact nematodes, implying that they'd kill wild nematodes too. So it seems like there is some possibility that nematodes impacted by agricultural pesticides outweigh insects impacted by them.

Comment by abrahamrowe on EricHerboso's Shortform · 2020-09-04T14:59:17.888Z · EA · GW

I downvoted this because it seems pretty clear that the author was referencing other aspects of the Facebook thread, and this felt belittling instead of engaging with the author's overall post.

Comment by abrahamrowe on Insects raised for food and feed — global scale, practices, and policy · 2020-07-22T18:18:40.150Z · EA · GW

Thanks for the comment,

I think I agree with everything you're saying here, and that makes sense regarding how conversion efficiency would work for insectmeal vs animal feed.

A few points:

  • It is definitely unclear if insectmeal will be cost-competitive with either fishmeal or grain feed. I think insectmeal as an alternative to fishmeal has a lot more potential for a variety of reasons - I saw a pitch deck to an investor where a company said it was targeting 1 to 1.5 Euro / kg dry weight for black soldier fly larvae fed on animal waste once they scaled up (though it was a pitch deck, so probably optimistic). If producers can actually hit that target, then it seems plausible some fishmeal could be replaced.
  • I think there is some reason to believe that fisheries, etc., would actually be less willing to pay for insectmeal than fishmeal, since it is new, etc., so the price could need to be even lower than that of fishmeal for insectmeal to take off.
  • There is a large amount of venture capital going into large scale insect farms right now. It's possible that could end up subsidizing the cost of insectmeal in the short-term, and drive it down significantly, only for it later to increase if this source of funding goes away.

Comment by abrahamrowe on Concern, and hope · 2020-07-16T19:00:27.409Z · EA · GW

Thanks for making this post Will -

I'll admit that since the SSC stuff happened, I've been feeling a lot further from EA (not necessarily the core EA ideas, but associating with the community or labeling myself as an EA), and I felt genuinely a bit scared learning through the SSC stuff about ways in which the EA community overlaps with alt-right communities and ideas, etc. I don't know what to make of all of it, as all the people I work with regularly in EA are wonderful people who care deeply about making the world better. But I feel wary and nervous about all this, and I've also been considering leaving the forum / FB groups just to have some space to process what my relationship with EA ought to be external to my work.

I see a ton of overlap between EA in concept and social justice. A lot of the dialogue in the social justice community focuses on people reflecting on their biases, and working to shift out of a lens on the world that introduces some kinds of biases. And, broadly folks working on social justice issues are trying to make the world better. This all feels very aligned with EA approaches, even if the social justice community is working on different issues, and are focused on different kinds of biases.

I've heard (though don't know much about it) that to some extent EA outreach organizations stopped focusing on growth and focused more on quality, in some sense, a few years ago. I wonder if doing that has locked in whatever norms were present in the community prior to that, and that's ended up unintentionally resulting in a fair amount of animosity toward ideas or approaches to argument that are outside the community's standards of acceptability? I generally think that one of the best ways to improve this issue is to invest heavily in broadening the community, and part of that might require work to make the community more welcoming (and not actively threatening) to people who might not feel welcome here right now.

Comment by abrahamrowe on X-risks to all life v. to humans · 2020-06-03T20:41:48.453Z · EA · GW

Nope - fixed. Thanks for pointing that out.

Comment by abrahamrowe on X-risks to all life v. to humans · 2020-06-03T20:01:30.221Z · EA · GW

Thanks for sharing this!

I happen to have made a not-very-good model a month or so ago to try to get a sense of how much the possibility of future species that care about x-risks impacts x-risk today. It's here, and it has a bunch of issues (like assuming that a new species will take the same amount of time to evolve from now as humans took to evolve from the first neuron, or assuming that none of Ord's x-risks reduce the possibility of future moral agents evolving, etc.), and possibly doesn't even get at the important things mentioned in this post.

But based on the relatively bad assumptions in it, it spat out that if we generally expect moral agents to evolve who reach Ord's 16% 100-year x-risk every 500 million years or so (assuming an existential event happens), and that most of the value of the future is beyond the next 0.8 to 1.2B years, then we ought to adjust Ord's figure down to 9.8% to 12%.
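Just to illustrate the general shape of this kind of adjustment, here is a toy, all-or-nothing version (a sketch with assumptions of my own, not the model linked above, so its outputs won't match the 9.8% to 12% figure):

```python
# Toy sketch (assumptions are mine, much cruder than the linked model):
# if humanity goes extinct, a successor species capable of caring about
# x-risk evolves ~500M years later and faces the same 16% filter. Value
# is assumed to lie beyond a horizon, so only successor chances that fit
# before the horizon count, and the future is lost only if humanity and
# every successor that fits all fail.
ORD_RISK = 0.16            # Ord's 100-year existential risk estimate
REEVOLUTION_YEARS = 500e6  # assumed time for a new moral-agent species to evolve

def adjusted_risk(horizon_years: float) -> float:
    successor_chances = int(horizon_years // REEVOLUTION_YEARS)
    return ORD_RISK ** (successor_chances + 1)

for horizon in (0.8e9, 1.2e9):
    print(f"horizon {horizon:.1e} yr -> adjusted risk {adjusted_risk(horizon):.2%}")
```

The all-or-nothing assumption is why this toy version discounts far more aggressively than the figures above; it's only meant to show the mechanism, not to reproduce the model.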

I don't think either the figure or the approach in that model should be taken at all seriously though, as I spent only a couple minutes on it and didn't think at all about better ways to try to do this - just writing this explanation of it has shown me a lot of ways in which it is bad. It just seemed relevant to this post and I wasn't going to do anything else with it :).

Comment by abrahamrowe on Wild Animal Welfare Meetup (Spring 2020) · 2020-04-26T17:31:20.261Z · EA · GW

Yeah, it's interesting to see that across the board. My sense is that wild animal welfare work (and farmed animal work) is very much funding constrained. Relevant to this - Open Philanthropy doesn't currently fund EA wild animal welfare work.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-04-14T13:37:48.885Z · EA · GW

Thanks for this. I think for me the major lesson from comments / conversations here is that many longtermists have much stronger beliefs in the possibility of future digital minds than I thought, and I definitely see how that belief could lead one to think that future digital minds are of overwhelming importance. However, I do think that for utilitarian longtermists, animal considerations might dominate in possible futures where digital minds don't happen or spread massively, so to some extent one's credence in my argument / concern for future animals ought to be determined by how much one believes in the possibility and importance of future digital minds.

As someone who is not particularly familiar with longtermist literature, outside a pretty light review done for this piece, and a general sense of this topic from having spent time in the EA community, I'd say I did not really have the impression that the longtermist community was concerned with future digital minds (outside EA Foundation, etc). Though that just may have been bad luck.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-04-14T13:26:54.410Z · EA · GW

Ah - you're totally right - that was an oversight. I'm working on a followup to this piece focusing more on what animal focused longtermism looks like, and talk about moral circle expansion, so I don't know how I dropped it here :).

Comment by abrahamrowe on Why I'm Not Vegan · 2020-04-10T19:28:58.265Z · EA · GW

I appreciate your thoughtful response to my post, and think I unintentionally came across harshly. I think you and I likely disagree on how much to weight the moral worth of animals, and on what that entails about what we ought to do. But my discomfort with this post (I hope, though of course I have subconscious biases) is specifically with the non-clarified statements about comparative moral worth between humans and other species. I made my comment to clarify that the reason I voted this down is that I think it is a very bad community standard to blanket accept statements of the sort "I think that these folk X are worth less than these other folk Y" (not a direct quote from you obviously) without stating precisely why one believes that or justifying that claim. That genuinely feels like a dangerous precedent to have, and without context, ought to be viewed with a lot of skepticism. Likewise, if I made an argument where I assumed but did not defend the claim that people different than me are worth 1/10th as much as people like me, you likely ought to downvote it, regardless of the value of the model I might be presenting for thinking about an issue.

One small side note - I feel confused about why the surveys of how the general public view animals are being cited as evidence in favor of casual estimations of animals' moral worth in these discussions. Most members of the public, myself included, aren't experts in either moral philosophy or animal sentience. And, we also know that most members of the public don't view veganism as worthwhile to do. Using this data as evidence that animals have less moral worth strikes me as doing something analogous to saying "most people who care more about their families than others, when surveyed, seem to believe that people outside their families are worth less morally. On those grounds, I ought to think that people outside my family are worth less morally". This kind of survey provides information on what people think about animals, but in no way is evidence of the moral status of animals. But, this might be the moral realist in me, and/or an inclination toward believing that moral value is something individuals have, and not something assigned to them by others :).

Comment by abrahamrowe on Why I'm Not Vegan · 2020-04-10T13:17:12.945Z · EA · GW

While you're right that the Cambridge Declaration on Consciousness was signed by few people, they were mostly very prominent and influential researchers, which was the point of the thing. But yeah, it is weak evidence on its own, I agree.

I don't know of specific survey data, but based on both the declaration and its continued influence, and the wide variety of opinions, literature reviews, etc supporting the position, my impression is that there is somewhat of a consensus, though there are occasional outliers. I believe my "to some extent, consensus" accurately captures the state of the field. Though in either case it is beside the point since Jeff assumed them to be sentient for the post. Thanks for sharing! :)

Comment by abrahamrowe on Why I'm Not Vegan · 2020-04-09T18:18:44.190Z · EA · GW

I agree that I was assuming a certain moral framework in my post - I've updated it to refer explicitly to utilitarianism of some kind, since that's a fairly common view in EA.

Thanks for the moral trade idea!

Comment by abrahamrowe on Why I'm Not Vegan · 2020-04-09T16:46:49.880Z · EA · GW

Yeah, that's fair - I was not charitable in my original comment RE whether or not there is a rationale behind those estimates, when perhaps I ought to have assumed there was one. But I guess part of my point is that because this argument entirely hinges on a rationale, not providing it just makes this seem very sketchy.

While I don't think human experiences and animal experiences are comparable in this direct a way, as an illustration imagine me making a post that said, "I think humans in other countries are worth 1/10 of those in my own country, therefore it seems like more of a priority to help those in my own country", and providing no reasoning or clarification for that discount. You would be justified in being very skeptical of the argument I was making, and to view my argument as low quality, even though there might be a variety of other good reasons to prioritize helping those in my own country. I don't think that kind of statement is high enough quality on its own to be entertained or to support an argument. But at its core, that's the argument in this post. I'd be interested in talking about the reasons behind those discounts, but without them, there just isn't even a way to engage with this argument that I think is productive.

For the record, I generally don't think it is a major wrong to not be vegan, and wouldn't downvote / be this critical of someone voicing something along the lines of "I really like how meat tastes, so am not vegan," etc. I am more critical here because it is an attempt to make a moral justification of not eating a vegan diet, and I think that argument not only fails, but also doesn't attempt to defend or explain core premises and assumptions, especially when aspects of those premises seem contrary to some degree of scientific evidence / consensus, which community norms suggest ought to be taken seriously.

That being said, I think it's fully possible there are good justifications for having such large discounts on the moral worth of animals, and those discounts are worth discussing. But that was glossed over here, which is why I am responding more critically.

Comment by abrahamrowe on Why I'm Not Vegan · 2020-04-09T14:40:05.481Z · EA · GW

I downvoted this, and would feel strange not talking about why:

I think there are lots of good reasons, moral or otherwise, to not be vegan - maybe you can't afford vegan food, or otherwise cannot access it. Maybe you've never heard of veganism. Maybe there are good reasons to think that the animal products you're eating aren't causing additional harm. Maybe you just like animal products a lot, and want to eat some, even though you know it is bad.

But I don't think this argument is a particularly good one, and I don't think it engages well with questions of animal ethics:

1. "I think there's a very large chance they don't matter at all, and that there's just no one inside to suffer" - this strikes me (for birds and mammals at least) as a statement in direct conflict with a large body of scientific evidence, and to some extent, consensus views among neuroscientists (e.g. the Cambridge Declaration on Consciousness https://en.wikipedia.org/wiki/Animal_consciousness#Cambridge_Declaration_on_Consciousness). Though to be fair, you are assuming they do feel pain in this post.

2. Your weights for animals lives seem fairly arbitrary. I agree that if those were good weights to use, maybe the moral trade-offs would be justified, but if you're just saying, with little basis, that a pig has 1/100 human moral worth, I don't know how to evaluate it. It isn't an argument. It's just an arbitrary discount to make your actions feel justified from a utilitarian standpoint.

I also think these moral worth statements need more clarification - do you mean that while I (a human) feel things on the scale of -1000 to 1000, a pig only feels things on the scale of -10 to 10? Or do you mean a pig is somehow worth less intrinsically, even though it feels similar amounts of pain as me? The first statement I am skeptical of because of a lack of evidence for it, and the second seems just unjustifiably biased against pigs for no particular reason.


I generally think factory farms are pretty bad, and maybe as bad as torture. Removing cows from the equation, eating animal products requires 6.125 beings to be tortured per year per American (by the numbers you shared). I personally don't think that is a worthwhile thing to cause. Randomly assigning small moral weights to those animals to feel justified seems unscientific and odd.

I think it seems fairly clear that there is a strong case to be made, if you're someone who has the means and access to vegan food and are a utilitarian of various sorts, to eat at least a mostly vegan diet. No one has to be perfectly moral all the time, and I think it's probably okay (on average) to often not be perfectly moral. But presenting arbitrarily assigned discounts on lives until your actions are morally justified is a weak justification.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-03-31T18:18:09.914Z · EA · GW

Thanks for linking!


Yeah, that's interesting. Clearly there is major decline in some populations right now, especially large vertebrates and birds. I guess the relevant questions are: will those declines last a long time (at least a few hundred years), and is there complementary growth in other populations (invertebrates)? Especially if the species that are succeeding are smaller on average than the ones declining, as you might expect there to be even more animals then. Cephalopod populations, for example, have increased since the 50s: https://www.cell.com/current-biology/fulltext/S0960-9822(16)30319-0

Comment by abrahamrowe on Estimates of global captive vertebrate numbers · 2020-02-18T18:55:37.426Z · EA · GW

This is really awesome and helpful! Thanks Saulius!

One group that is probably pretty small but isn't listed here - animals in wildlife rehabilitation clinics: this page says 8k to 9k animals (I'm guessing mostly vertebrates?) enter clinics in Minnesota every year. If that scales by land area for the contiguous United States, that would be 270k - 305k animals per year in the US, so maybe a few million globally? But that's just a guess from the first good source I saw.
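For reference, that scaling is just Minnesota's intake multiplied by the ratio of land areas; a back-of-envelope sketch with approximate area figures I'm supplying (not from the linked page):

```python
# Back-of-envelope scaling of Minnesota rehab-clinic intake by land area.
# Area figures are approximate, and linear scaling by area is a strong assumption.
mn_intake_low, mn_intake_high = 8_000, 9_000  # animals/year entering MN clinics
mn_area_sqmi = 87_000                         # Minnesota, approx.
us_area_sqmi = 2_959_000                      # contiguous US land area, approx.

scale = us_area_sqmi / mn_area_sqmi  # roughly 34x
low = round(mn_intake_low * scale, -3)
high = round(mn_intake_high * scale, -3)
print(f"~{low:,.0f} to {high:,.0f} animals per year in the contiguous US")
```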


On pet shelters - I used to work at one, and every month, we reported our current animal population (along with a lot of other stats), to this organization - https://shelteranimalscount.org/ - I think their data could probably be used to get a very accurate estimate of animals currently in shelters in the US.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-10T01:04:47.677Z · EA · GW

Yeah, I think that is right that it is a conservative scenario - my point was more that the proposed future scenarios don't come close to imagining as much welfare / mind-stuff as might exist right now.

Hmm, I think my point might be something slightly different - more to pose a challenge to explore how taking animal welfare seriously might change conclusions about the long-term future. Right now, there seems to be almost no consideration. I guess I think it is likely that many longtermists think animals matter morally already (given the popularity of such a view in EA). But I take your point that for general longtermist outreach, this might be a less appealing discussion topic.

Thanks for the thoughts Brian!

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-10T00:52:36.926Z · EA · GW

Yeah, the idea of looking into longtermism for nonutilitarians is interesting to me. Thanks for the suggestion!

I think regardless, this helped clarify a lot of things for me about particular beliefs longtermists might hold (to various degrees). Thanks!

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-10T00:50:07.324Z · EA · GW

That makes sense!

Comment by abrahamrowe on EA Animal Welfare Fund is looking for applications until the 6th of February · 2020-02-06T21:05:59.542Z · EA · GW

Thanks!

Comment by abrahamrowe on EA Animal Welfare Fund is looking for applications until the 6th of February · 2020-02-05T22:21:23.035Z · EA · GW

Hey Karolina,

Is the deadline at a specific time on February 6th, or before the 6th (i.e. EOD the 5th)? The wording is just slightly vague.

Thanks for all you do!

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-05T15:45:42.309Z · EA · GW

Thanks for the feedback - that's a good rule of thumb!

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-05T15:43:40.974Z · EA · GW

Thanks for laying out this response! It was really interesting, and I think probably a good reason to not take animals as seriously as I suggest you ought to, if you hold these beliefs.

I think something interesting that this and the other objections presented to my piece have brought out is that to avoid focusing exclusively on animals in longtermist projects, you have to have some level of faith in these science-fiction scenarios happening. I don't necessarily think that is a bad thing, but it isn't something that's been made explicit in past discussions of longtermism (at least in the academic literature), and perhaps it ought to be?


A few comments on your two arguments:


Claim: Our descendants may wish to optimize for positive moral goods.
I think this is a precondition for EAs and do-gooders in general "winning", so I almost treat the possibility of this as a tautology.

This isn't usually assumed in the longtermist literature. It seems more like the argument is made on the basis of future human lives being net-positive, and it therefore being good that there will be many of them. I think the expected value of your argument (A) hinges on this claim, so it seems like accepting it as a tautology, or something similar, is actually really risky. If you think this is basically 100% likely to be true, of course your conclusion might be true. But if you don't, it seems plausible that, like you mention, priority ought to be on s-risks.



In general, a way to summarize this argument, and others given here, could be something like, "there is a non-zero chance that we can make loads and loads of digital welfare in the future (more than exists now), so we should focus on reducing existential risk in order to ensure that future can happen". This raises a question - when will that claim not be true / the argument you're making not be relevant? It seems plausible that this kind of argument is a justification to work on existential risk reduction until basically the end of the universe (unless we somehow solve it with 100% certainty, etc.), because we might always assume future people will be better at producing welfare than us.

I assume people have discussed the above, and I'm not well read in the area, but it strikes me as odd that the primary justification in these sci-fi scenarios for working on the future is just a claim that can always be made, instead of working directly on making lives with good welfare (but maybe this is a consideration with longtermism in general, and not just this argument).

I guess part of the issue here is you could have an incredibly tiny credence in a very specific number of things being true (the present being at the hinge of history, various things about future sci-fi scenarios), and having those credences would always justify deferral of action.

I'm not totally sure what to make of this, but I do think it gives me pause. But, I admit I haven't really thought about any of the above much, and don't read in this area at all.

Thanks again for the response!

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-04T15:16:42.669Z · EA · GW

Yeah, I think it probably depends on your specific credence that artificial minds will dominate in the future. I assume that most people don't place a value of 100% on that (especially if they think x-risks are possible prior to the invention of self-replicating digital minds, because necessarily that decreases your credence that artificial minds will dominate). I think if your credence in this claim is relatively low, which seems reasonable, it is really unclear to me that the expected value of working on human-focused x-risks is higher than that of working on animal-focused ones. There hasn't been any attempt that I know of to compare the two, so I can't say this with confidence though. But it is clear that saying "there might be tons of digital minds" isn't a strong enough claim on its own, without specific credences in specific numbers of digital minds.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-04T14:54:53.417Z · EA · GW

That's a good point!

I think something to note is that while I think animal welfare over the long term is important, I didn't really spend much time thinking about possible implications of this conclusion in this piece, as I was mostly focused on the justification. I think that a lot of value could be added if some research went into these kinds of considerations, or alternative implications of a longtermist view of animal welfare.


Thanks!

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-04T14:51:39.706Z · EA · GW

Hey,

Yes, this was noted in the sentence following your quote and the paragraphs after this one. Note that if humans implemented extremely resilient interventions, human-focused x-risks might be of less value, but I broadly agree humanity's moral personhood is a good reason to think that x-risks impacting humans are valuable to work on. Reading through my conclusions again, I could have been a bit more clear on this.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-04T04:58:47.633Z · EA · GW

Ah - I meant human, emulated or organic, since Rob referred to emulated humans in his comment. For less morally weighty digital minds, the same questions RE emulating animal minds apply, though the terms ought to be changed.

Also it seems worth noting that much of the literature on longtermism, outside the Foundational Research Institute, isn't making claims explicitly about digital minds as the primary holders of future welfare, but just focuses on future organic human populations (Greaves and MacAskill's paper, for example), and on populations similar in size to the present-day human population at that.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-04T03:03:51.731Z · EA · GW

Hey!

Admittedly, I haven't thought about this extensively. I think that there are a variety of x-risks that might cause humans to go extinct but not animals, such as specific bio-risks, etc. And, there are x-risks that might threaten both humans and animals (a big enough asteroid?), which would fall into the group I describe. One might be just continued human development massively decreasing animal populations, if animals have net positive lives, though I think those might be unlikely.

I haven't given enough thought to the second question, but I'd guess if you thought most of the value of the future was in animal lives, and not human lives, it should change something? Especially given how focused the longtermist community has been on preserving only human welfare.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-04T02:58:11.297Z · EA · GW

Hey Rob!

I'm not sure that even under the scenario you describe animal welfare doesn't end up dominating human welfare, except under a very specific set of assumptions. In particular, you describe ways for human-esque minds to explode in number (propagating through space as machines or as emulations). Without appropriate efforts to change the way humans perceive animal welfare (wild animal welfare in particular), it seems very possible that 1) humans/machine descendants might manufacture/emulate animal-minds (and since wild animal welfare hasn't been addressed, emulate their suffering), 2) animals will continue to exist and suffer on our own planet for millennia, or 3) taking an idea from Luke Hecht, there may be vastly more wild "animals" suffering already off-Earth - if we think there are human-esque alien minds, then there are probably vastly more alien wild animals. The emulated minds that descend from humans may have to address cosmic wild animal suffering.

All three of these situations mean that even when the total expected welfare of the human population is incredibly large, the total expected welfare (or potential welfare) of animals may also be incredibly large, and it isn’t easy to see in advance that one would clearly outweigh the other (unless animal life (biological and synthetic) is eradicated relatively early in the timeline compared to the propagation of human life, which is an additional assumption).

Regardless, if all situations where humans are bound to the solar system and many where they leave result in animal welfare dominating, then your credence that animal welfare will continue to dominate should necessarily be higher than your credence that humans will leave the solar system. So neglecting animal welfare on the grounds that humans will dominate via space exploration seems to require further information about the relative probabilities of the various situations, multiplied by the relative populations in these situations.
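To make the shape of that comparison concrete, here's a minimal sketch (purely illustrative: the scenario names, probabilities, and welfare magnitudes are placeholder numbers of mine, not estimates):

```python
# Expected welfare at stake = sum over scenarios of
#   P(scenario) * (welfare at stake for that population in that scenario).
# All numbers below are placeholders for illustration only.
scenarios = {
    # name: (probability, human-ish welfare at stake, animal welfare at stake)
    "bound to solar system":      (0.5, 1e10, 1e19),
    "expansion, animals persist": (0.3, 1e20, 1e21),
    "expansion, animals gone":    (0.2, 1e20, 0.0),
}

ev_human = sum(p * h for p, h, _ in scenarios.values())
ev_animal = sum(p * a for p, _, a in scenarios.values())
print(f"EV(human-ish welfare at stake) = {ev_human:.2e}")
print(f"EV(animal welfare at stake)    = {ev_animal:.2e}")
```

The point is just that the comparison depends on those probabilities and magnitudes together, not on the possibility of a large human-focused future alone.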

I haven’t attempted any particular expected value calculation, but it doesn’t seem to me like you can conclude immediately that simply because human welfare has the potential to be infinite or extravagantly large, the potential value of working on human welfare is definitely higher. The latter claim instead requires the additional assertion that animal welfare will not also be incredibly or infinitely large, which as I describe above requires further evidence. And, you would also have to account for the fact that wild animal welfare seems vastly more important currently and will be for the near future in that expected value calculation (which I take from your objection being focused on the future, you might already believe?).

If this is your primary objection, at best it seems like it ought to marginally lower your credence that animal welfare will continue to dominate. It strikes me as an extremely narrow possibility among many, many possible worlds where animals continue to dominate welfare considerations, and therefore in expectation, we still should think animal welfare will dominate into the future. I'd be interested in what your specific credence is that the situation you outlined will happen.

Comment by abrahamrowe on Optimal population density: trading off the quality and quantity of welfare · 2020-01-23T16:34:31.239Z · EA · GW

This is really amazing, and it'll be interesting to see it applied to wild animal welfare work in the future. I also imagine that there are a lot of applications for farmed animal welfare improvements, etc. Thanks for sharing!

Comment by abrahamrowe on We're Rethink Priorities. AMA. · 2019-12-16T17:58:54.173Z · EA · GW

Thanks for the response! I guess I personally am interested in it, because I think it would lend credibility to WAW outreach projects to be able to cite it.

Comment by abrahamrowe on We're Rethink Priorities. AMA. · 2019-12-16T17:57:05.260Z · EA · GW

That's great to hear! I guess I think it would be great for norms of caring about invertebrates to be spread in the animal advocacy space, so that seems good.

Comment by abrahamrowe on We're Rethink Priorities. AMA. · 2019-12-16T17:56:11.972Z · EA · GW

I don't actually know if engagement is important (maybe it is an indicator of either your thoroughness, as there are few followups, or just that you all are the experts, so most people on the forum aren't going to weigh in). Sharing with funders makes a lot of sense. Thanks!

Comment by abrahamrowe on We're Rethink Priorities. AMA. · 2019-12-16T17:53:56.401Z · EA · GW

I guess my inclination toward in-house teams would be that an organization would be more likely to respond / change direction on the basis of findings from in-house teams. But I'm unsure that there is much evidence that organizations have changed directions from research done by anyone, except perhaps in small ways. I also imagine being in-house would reduce barriers for data collection, etc., because there wouldn't be NDAs or privacy concerns that might govern inter-org interactions. I think you and I had previously had this issue, where I had done research that might have been relevant to your work, and couldn't share it due to an NDA.

Comment by abrahamrowe on Interaction Effect · 2019-12-16T16:57:20.111Z · EA · GW

I'm not particularly EA, but I think the gist of the argument is - you should work where you can make the most marginal impact, not necessarily in a job that is the highest impact overall. So if you're choosing a career for impact, you might be one of only a few thousand people thinking about things in EA terms. If you want to have a large impact, then you ought to look at things that are large in scope and neglected, etc.

If somehow the EA community coordinated all resources, or was much much larger in size, the recommended careers would probably be different. In that case, obviously some people would need to be teachers, farmers, etc., and it would be important to encourage people capable of doing those things well to pursue those careers. But, given that there are relatively few people willing to change their careers for this sort of impact right now, the career recommendations that are made are in fields where a few people might have a larger impact.

This isn't a denial of interdependence. It's more of an implicit acknowledgement of the limits of the current size of the community.

Another factor is that many careers that EA careers depend on are likely to be filled regardless. There are people who would like to be, or whose circumstances cause them to be, teachers, construction workers, farmers, truck drivers, etc. So while all those jobs probably have a (positive) impact, it's less urgent for someone who wants to have the greatest impact to pursue them as a career. While education might be important, I know that if I don't apply for a job at my local high school, another (even more) capable teacher probably will. Instead, on the margin, an EA might have a greater impact by pursuing something more neglected, or pursuing a career where they can earn money to donate to a charity that can hire more people for a neglected cause, etc.

The core idea is that because there is only a small community of people interested in having the greatest impact they can, then they should pursue careers that on the margin would be most likely to have the greatest impact. It doesn't necessarily mean that these careers are intrinsically or functionally "better" or higher ranked than others. They are prioritized by EA because few people are in EA, and fewer people are thinking about pursuing recommended careers.

Comment by abrahamrowe on We're Rethink Priorities. AMA. · 2019-12-12T20:07:24.909Z · EA · GW

I'd be interested in what organizations you're comparing against? I wonder if it is more that animal advocacy research is funding constrained compared to global poverty or x-risks, and that ends up negatively impacting groups that do research on animal advocacy and other topics.

Comment by abrahamrowe on We're Rethink Priorities. AMA. · 2019-12-12T18:59:01.766Z · EA · GW

How do you engage with the animal welfare advocacy groups who might act on your research? Or alternatively, how do you counteract any negatives from not being an advocacy organization, and not getting feedback directly (e.g. advocacy that responds to research because they are done in conjunction)?

When I worked in animal advocacy, my sense was that the research that EA research groups like ACE were doing was either irrelevant or badly ill-informed / inaccurate, primarily because the researchers didn't actually have much experience in the space. Or, it came only after the advocacy groups had already basically realized the same things, and shifted priorities. I don't think this has really been relevant for the work you've done so far, since it hasn't been particularly prescriptive about particular strategies, but it seems like a greater risk as you do more farmed animal research. I've always been disappointed that the in-house research teams at animal groups are small, since they seem better positioned to do some of this work (though there are probably downsides to that too).

Edit for clarification: As an example, a lot of studies were done on pro-vegan leaflets. Many studies seemed to be badly designed, etc, so that was too bad. But organizations did leafleting for a while, realized there were more effective uses of resources, and then stopped leafleting (generally - obviously some still happens, especially to cultivate volunteers). It was only after this that evidence that leafleting was not very effective emerged in the research literature. While I'm glad that a post-mortem happened, it really didn't make a difference in charity behavior, since charities had changed direction already for the most part.

The question is really just motivated by a thought experiment - if I could, instead of having all the money that's been spent on EA animal advocacy research historically, have that money go to direct advocacy (maybe corporate campaigns, for example), would I? And for me the answer is almost certainly yes, with maybe one or two exceptions.

Relatedly, on wild animal welfare, I feel very confident that if we could eliminate basically all research that happened before ~3 months ago in exchange for the information we have now about how to approach academic field building, it would be worthwhile (recognizing that a big chunk of that research is stuff I spent time on).

So both of these suggest to me that I should generally have a prior favoring direct advocacy (or at least, really promising direct advocacy) over EA research moving forward, as much as that goes against my own inclinations or desires (I like research more). Or at least, a positive case has to be made for research. And, it suggests to me, given that almost all this research has been done by groups not doing advocacy (with exceptions), that research should primarily be done by groups doing advocacy. Though as a note, obviously a lot of academic field-building advocacy on wild animal welfare issues can be done by publishing research within the conservation space, etc.

Comment by abrahamrowe on We're Rethink Priorities. AMA. · 2019-12-12T18:51:52.553Z · EA · GW

Given that some of your staff have academic backgrounds, do you all have plans to refine and pursue peer-reviewed publication for your invertebrate welfare-related work (though I don't know if it would be well received)? It seems like there could be a lot of value in the pieces being seen by academic audiences, at least from a wild animal welfare academic field-building perspective. If not, why not?