[Linkpost] Rethink Priorities is hiring a Research Project and Hiring Manager to accelerate its research 2021-05-03T20:26:03.825Z
[Linkpost] We’re starting an insect charity, and looking for an executive director to run it 2021-05-03T20:23:02.976Z
Silk production: global scale and animal welfare issues 2021-04-20T01:38:53.772Z
abrahamrowe's Shortform 2021-02-24T15:18:45.787Z
The scale of direct human impact on invertebrates 2020-09-02T13:22:47.643Z
Insects raised for food and feed — global scale, practices, and policy 2020-06-29T13:57:31.653Z
Notes on how a recession might impact giving and EA 2020-03-13T18:17:24.865Z
Global cochineal production: scale, welfare concerns, and potential interventions 2020-02-11T21:33:20.225Z
Should Longtermists Mostly Think About Animals? 2020-02-03T14:40:23.242Z
Uncertainty and Wild Animal Welfare 2019-07-19T13:33:51.533Z
A Research Agenda for Establishing Welfare Biology 2019-03-15T18:24:51.099Z
Announcing Wild Animal Initiative 2019-01-25T17:23:30.758Z


Comment by abrahamrowe on Best places to donate? · 2021-05-08T16:54:13.937Z · EA · GW

In these cases, you're likely getting better returns on credit card fees than if you gave directly to 22 charities, but marginally worse efficiency on processing costs, since each of the 22 charities probably incurs about the same processing cost either way, and The Life You Can Save incurs a processing cost as well.

Based on this, from a pure cost-to-programs view, I'd guess that if a gift is split among at least 3 or 4 charities, the credit card fee savings will outweigh the lower processing efficiency, so it is probably usually worth giving to something like the GiveWell Maximum Impact Fund, TLYCS, or EA Funds.

Also, I think the other benefits you get from giving via those funds, like the ones esentorella describes (e.g. their research into how to optimally redistribute the funding), make it especially worthwhile to continue giving that way.

Comment by abrahamrowe on Best places to donate? · 2021-05-07T13:00:43.285Z · EA · GW

So for us currently, the processing time doesn't change much depending on the frequency of the gift. But I will say that we don't invest a lot in improving this process, because overall we don't put much time into it, so maybe an organization with a much higher volume of small donations would have automated some of what I described. That said, I've worked at a handful of fairly large nonprofits that are still doing this manually, so it will depend on the organization, and 5 min / transaction seems like a safe bet. There are other costs besides processing - e.g. a bit of time spent on end-of-year donation receipts - that you might counterfactually cause the nonprofit to incur.

If you're doing an ACH transfer monthly, the costs will vary by provider, but they are a lot cheaper than credit cards. I think Plaid, which is a really common ACH service, costs around 0.80%, capped at $5, or something like that. There is more variation here across providers though, unlike credit cards, where there seem to just be industry standard rates.
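
For intuition, here's a tiny sketch comparing the two fee structures. The 2.2% + $0.30 card rate and the roughly 0.8%-capped-at-$5 ACH rate are the approximate figures mentioned in these comments, not any provider's published pricing:

```python
def card_fee(amount, rate=0.022, fixed=0.30):
    """Percentage-plus-fixed card fee (assumed nonprofit Visa/Mastercard rate)."""
    return amount * rate + fixed

def ach_fee(amount, rate=0.008, cap=5.00):
    """Percentage ACH fee with a hard cap (assumed, roughly Plaid-like)."""
    return min(amount * rate, cap)

# For a $100 monthly gift, the card fee is $2.50 vs. $0.80 for ACH,
# and the ACH cap means large gifts never cost more than $5 to process.
card_fee(100)   # 2.50
ach_fee(100)    # 0.80
ach_fee(10000)  # 5.00 (cap reached)
```

The cap is what makes ACH especially attractive for large lump-sum gifts: the card fee keeps scaling with the amount, while ACH flattens out.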

I do think there are some genuine advantages to giving monthly, or on a recurring basis, to some organizations. A lot of EA organizations keep pretty large reserves in the bank (at least compared to similarly sized nonprofits I've seen). But if you're giving to a small organization that doesn't have large reserves, the cash-flow certainty that comes with monthly donations might offset the extra processing costs to a large degree.

If I had to guess the best way to give to reduce costs, stress, etc. on an organization, I'd guess a lump sum given in maybe March-May, when the fundraising season is in a lull. But I really think that's probably only very marginally better than giving in whatever way works best for you, so if monthly helps someone give with less stress, it's probably still worth doing. And if it is a small organization, monthly can really help.

Comment by abrahamrowe on Best places to donate? · 2021-05-07T00:31:35.481Z · EA · GW

I believe the typical nonprofit credit card rate (for Visa and Mastercard) is 2.2% + $0.30 USD. So for 5 x $50 donations, it costs around $1.40 x 5 = $7 to process your credit card payments across all organizations. For my organization, entering a donation in whatever systems we enter it in probably takes around 5 minutes. I'd guess that with taxes, etc., the average EA nonprofit ops person costs around $40 / hour, so that's another $3.33 per donation = $16.67 across all donations. So of your $250, around $23.67 is going to overhead costs.

If you gave it in one gift, the fees would be $5.80, and you'd only have $3.33 in other ops costs, so $9.13 total.

So basically giving to one organization would increase the amount of your donation going to non-overhead things by probably around $14 / mo, or $168 / year, assuming the organization has other things for the operations staff to do in the 20 minutes they are saving by not processing the extra donations.

Comment by abrahamrowe on Silk production: global scale and animal welfare issues · 2021-04-21T21:11:58.592Z · EA · GW

Hey Michael,

Yeah, it's a good point that if you're heavily discounting silkworms, the fact that their population isn't several orders of magnitude larger than chickens' might be a reason not to prioritize them. Though if discounting ought to happen, I'm pretty uncertain how we ought to approach it (I'm definitely not confident we should just look at neuron counts, but I'm not very bullish on any other approach either).

I do think it might be significantly easier to reduce silkworm farming than chicken farming, though. For example, I don't know of any major examples for chickens of something similar to the ASOS silk ban, like a major retailer going fully vegetarian or simply not selling chicken for animal welfare reasons. It seems plausible that silk bans would be much easier to achieve than these, so it might still be more cost-effective to work on silk. I'm just speculating here, though; I don't really have any empirical information, but I would definitely support a group trying targeted silk campaigns to compare their ease to that of current animal welfare campaigns.

Comment by abrahamrowe on Silk production: global scale and animal welfare issues · 2021-04-21T21:08:11.091Z · EA · GW


Comment by abrahamrowe on Silk production: global scale and animal welfare issues · 2021-04-20T15:08:54.239Z · EA · GW

I didn't research it in detail; I mention in the introduction that I basically assume they are for the purposes of looking into this, and that the assumption shouldn't be taken for granted if one were prioritizing work based on it. I don't really think this question is answerable with currently available information, and I don't know how to discount on the basis of that uncertainty. I personally am fairly sympathetic to treating many kinds of insects as capable of having valenced experiences, on the basis of Rethink Priorities' work in that space (though they didn't look at silkworm larvae specifically), but when I do research in this space, part of the purpose is purely fact-finding. There are a bunch of industries that use billions or trillions of animals, and very little work has been done to study them from an animal welfare lens. At a minimum, it seems worth someone spending a few hours considering each of these industries from that lens, so I've been doing that.

However, I will note that silk bans seem like they may also be net-good for humans: silk production seems to involve a fair amount of human rights abuses, including slavery, child labor, etc., and has been campaigned against extensively by human rights groups (see the historical advocacy section for a bit more detail). It seems that several human rights groups have explicitly worked on securing silk bans. I'm not certain of the scale of these harms, so I am not recommending those campaigns from an EA perspective (or animal welfare silk campaigns, for that matter), but I do think it's fairly plausible that the benefits of industrial silk do not outweigh the harms to humans.

I think that promoting silk alternatives for the industrial / commercial uses you mention, as the Material Innovation Initiative does, is a pretty promising route to reducing both human and potential animal suffering.

Comment by abrahamrowe on Silk production: global scale and animal welfare issues · 2021-04-20T13:07:38.614Z · EA · GW

Thanks for the question - I didn't look into it in too much detail, but my impression is that India is actually the largest importer of silk (mostly from China), and not a very large exporter, suggesting that India and China are the largest markets. I believe the EU, Japan, and South Korea are fairly large as well.

I didn't look into interventions directly beyond the speculation listed here, but I'd be interested in the reasoning / evidence for bans being more tractable in areas with low use. I assume this means there is some ideal level of use / cultural relevance that balances maximal impact against maximal tractability.

Comment by abrahamrowe on How we averted 130,000 animal deaths (in expectation) with a volunteer campaign. · 2021-04-05T17:17:44.502Z · EA · GW

This is great! Thanks for the write up!

One other assumption jumps out to me (represented in your model under "School meals affected per year (190 days)"):

If I recall correctly, HSUS in the US originally sought Meatless Monday commitments, but found that many school districts, etc., that committed didn't actually reduce their purchasing that much (or at all): they ended up making their normal meat orders and adding some veg options on top. This likely meant those districts ended up serving more meat on non-Mondays. So HSUS changed its ask to a "20% overall reduction in meat purchases". If this is generally the case, the effectiveness might unfortunately be a bit lower (though HSUS's experience was with US school districts, and purchasing might work differently in the UK).


I wonder if there are a lot of low-hanging fruit for these campaigns around the world. I imagine there are a fair number of local animal advocacy groups who are really well positioned to do this advocacy, and my impression is that some of these might be really easy to get to change - e.g. a teacher in a district having a few conversations with the right people and bringing them some information.

Comment by abrahamrowe on New Top EA Causes for 2021? · 2021-04-01T15:58:57.313Z · EA · GW

Out of curiosity I stuck an episode into the Wub Machine.  It's genuinely mildly listenable. Also takes no time so the cost-effectiveness here might be high. Original audio: 80,000 Hours.

Comment by abrahamrowe on New Top EA Causes for 2021? · 2021-04-01T13:42:29.982Z · EA · GW

Working title: Reversetermism

Longtermists have pointed out that we've often failed to consider the interests or wellbeing of future beings. But an even more neglected space is the past.

If we think that existential risk is sufficiently high in the near future, there is a good chance that the vast majority of moral value is in the past. Just considering humans, there are at least 300,000 years of experiences, all of which we ought to consider just as important as present day ones. If we consider non-humans' interests, there are billions of years and countless individuals who we ought to expand our moral circle to include.

The scale here is obvious, as is the neglectedness - as far as I am aware, there are no groups focused on ensuring that the past is as good as possible. So, how tractable is it?

Immediately, a handful of interventions come to mind:

  • Cultivating expert backcasting:
    • Written history is just a few thousand years old, and unfortunately, a lot of it is incredibly sad. But prior to around 5,500 years ago, we have little data on what human lives were like. By improving our backcasting ability, we can ensure that documentation of these lives in the prehistoric world states they were as good as possible.
  • Making sure there were no existential catastrophes
    • If an x-risk is bad right now, it stands to reason that it might have been even worse had it occurred in the past. We might be able to verify that existential catastrophes did not happen previously, protecting the flourishing of both present day and future humans.

One immediate advantage of reversetermism is that cost-effectiveness can actually be estimated relatively accurately. Here's a simple test:

"On May 5th (Gregorian calendar), 10,560 BC, at 2:00pm Eastern, everything was chill for an hour for everybody."

This expert backcasting took around 12 seconds to produce. Assuming a human population of 2 million, and that you pay expert backcasters $30 USD / hour, this cost $0.10 and created around 228 years of good experiences. With an average lifespan of, say, 30 years, it costs around $0.013 to save a life. And even more expert backcasters might achieve more efficient results through further work in the field, driving the cost down further.

Comment by abrahamrowe on Insects raised for food and feed — global scale, practices, and policy · 2021-03-31T22:56:38.439Z · EA · GW

An update to this - a study just came out that found black soldier fly larvae replacing up to 30% of fishmeal / fish oil in Siberian sturgeon is now theoretically more profitable than pure FMFO. Also, there is a new Rabobank report that estimates current prices at $4-6.5 / kg, dropping to $3 / kg by 2030, so it seems like on the fishmeal side, there is a decent chance that 10-15% of the diets of at least some fish will be replaced by BSFL or mealworms (though note that Rabobank is a large investor in the space, so it's hard to know the motivations behind the projections).

Comment by abrahamrowe on abrahamrowe's Shortform · 2021-02-24T15:18:46.302Z · EA · GW

Following up with some thoughts I originally had in response to saulius' List of ways in which cost-effectiveness estimates can be misleading. I'm not sure if there have been other write-ups of this effect.

If we incentivize charities to act as cost-effectively as possible, and if they operate in coordination with other groups working on the same issue, it seems like in many cases what's best for an individual charity's cost-effectiveness will be bad for the overall cost-effectiveness of the space. This issue is compounded if multiple EA / highly cost-effective charities are operating in the same space.

The issue is something like: charities have relative strengths and weaknesses, and by coordinating to take advantage of those, individual charities might lose out on cost-effectiveness but overall make their collective work more effective.

I think this occasionally actually happens with animal welfare campaigns, where single donors are giving to several charities doing the same thing.

An example using chicken welfare campaigns in the animal welfare space:

Charity A has 100 good volunteers in City 1, where Company X is headquartered. Running a successful campaign against Company X would cost Charity A $1,000, and Company X uses 10M chickens. Alternatively, Charity A could run a campaign against Company Y in a different city, where it has fewer volunteers, for $1,500 (more expensive because of the smaller volunteer base).

Charity B has 5 good volunteers in City 1, but thinks it could secure a commitment from Company Y in City 2, where it has more volunteers, for $1,000. Company Y uses 1M chickens. Or, by spending more money, Charity B could secure a commitment from Company X for $1,500.

Charities A and B are coordinating, and agree that Companies X and Y committing will put pressure on a major target (Company Z), and want to figure out how to effectively campaign.

They consider three strategies:

Strategy 1: They both campaign against both targets, at half the cost of campaigning alone, and a charity evaluator views the campaign as split evenly between them, since they put in equal effort. The cost-effectiveness of each charity is (5M + 0.5M chickens) / ($500 + $750) = 4,400 chickens / dollar, and $2,500 total has been spent.

Strategy 2: Charity A targets Company X, and Charity B targets Company Y. Charity A's cost-effectiveness is 10,000 chickens / dollar, and Charity B's is 1,000 chickens / dollar, with $2,000 total spent.

Strategy 3: Charity A targets Company Y, Charity B targets Company X. Charity A: 667 chickens / dollar; Charity B: 6,667 chickens / dollar. $3,000 total spent across both charities.

These charities want to be as effective as possible — clearly, the charities should choose Strategy 2, because the least money will be spent overall (and both charities will spend less for the same outcome).

But if a charity evaluator is fairly influential and looking at each charity individually, Charity B might push hard for the less ideal Strategy 1 or 3, because those make its cost-effectiveness look much better. Strategy 2 is clearly the right choice for Charity B to make, but if it does, an evaluation of its cost-effectiveness will look much worse.
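
The example above can be checked numerically. A minimal sketch, where the chicken counts and campaign costs are the hypothetical figures from the example:

```python
# Numerical check of the three coordination strategies in the example.
chickens = {"X": 10_000_000, "Y": 1_000_000}  # birds used by each company

def effectiveness(birds, dollars):
    """Chickens affected per dollar spent."""
    return birds / dollars

# Strategy 1: both charities split both campaigns; the evaluator
# credits each charity with half the chickens for half the cost.
s1_each = effectiveness((chickens["X"] + chickens["Y"]) / 2, 500 + 750)  # 4,400/dollar
s1_total = 2 * (500 + 750)                                               # $2,500

# Strategy 2: each charity campaigns where it is strong.
s2_a = effectiveness(chickens["X"], 1000)  # 10,000/dollar
s2_b = effectiveness(chickens["Y"], 1000)  # 1,000/dollar
s2_total = 2000

# Strategy 3: each charity campaigns where it is weak.
s3_a = effectiveness(chickens["Y"], 1500)  # ~667/dollar
s3_b = effectiveness(chickens["X"], 1500)  # ~6,667/dollar
s3_total = 3000
```

Note that Strategy 2 has the lowest total spending, yet it is the strategy under which Charity B's individual rating (1,000 chickens / dollar) looks worst, which is the incentive problem described above.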

I guess a simple way of putting this is - if multiple charities are working on the same issue, and have different strengths relevant at different times, it seems likely that often they ought to make decisions that might look bad for their own cost-effectiveness ratings, but were the best thing to do / right decision to make.

I can think of a few examples where charities made less effective decisions explicitly due to reasoning about their own cost-effectiveness, and not thinking about coordination, but I'm not sure how prevalent this actually is as an issue. It mainly makes me a little worried about apples-to-apples comparisons of the cost-effectiveness of charities who do the same thing, and are known to coordinate with each other.

Comment by abrahamrowe on AMA: We Work in Operations at EA-aligned organizations. Ask Us Anything. · 2021-02-05T16:28:57.592Z · EA · GW

We do work tests for all roles, operations or not. I think they are far and away the most valuable part of our hiring process.

The ones I have found the most useful for operations hiring:

  • Having people work through hypothetical complicated financial problems, like sending a candidate a list of rules for how transactions should be entered into a spreadsheet, then giving them a list of sample transactions with tons of mistakes and asking them to correct the mistakes, and then giving a list of new entries and seeing if they enter them correctly.
  • Asking someone how they would resolve a hypothetical complicated situation involving HR compliance, financial compliance, etc., all bundled together.

The least helpful:

  • Asking people to write something (especially if the role involves communications): folks interviewing often just don't know your organization well enough to communicate about it well prior to working there, and it's really hard to compare these against each other except on the basis of grammar, etc.
Comment by abrahamrowe on AMA: We Work in Operations at EA-aligned organizations. Ask Us Anything. · 2021-02-05T14:31:07.154Z · EA · GW

I think yes up to a certain point.

  • I'm fairly surprised that organizations had trouble filling operations roles in 2018 or so, as in recent hiring rounds, we've had large numbers of non-EA but otherwise very qualified candidates.
  • I'm a little uncertain of how important it is for operations staff to be highly value-aligned, but I think my view is that it is not that important, especially in lower-level roles. This makes me think the pool of quality candidates is quite a bit larger than it might otherwise seem - there are lots of people with for-profit or non-profit operations experience that is directly relevant to 90% of what I do day-to-day.
  • I think ultimately it is better to have a value-aligned staff member than not, but we had a lot of really great candidates for recent ops roles, and that's probably partially due to us offering competitive compensation and advertising widely, and not just in EA.
Comment by abrahamrowe on Insects raised for food and feed — global scale, practices, and policy · 2020-12-21T21:46:54.275Z · EA · GW

Hi! Thanks for the questions.

On the chitin, I haven't found anything cited that confirms this. A handful of farmers reported this to me, and industry guides often recommend mixing exoskeletons into foods, etc. I think a possibility is that crickets do this for nutrients besides chitin, but that is just the most well known part of exoskeletons, so people mention it.

On breeding: it's going to vary depending on species and intention. If you're growing your colony, you'll need a larger breeding stock, but if you are keeping it the same size, you can use a smaller one. It's not obvious to me how large breeding stocks are on various farms, and I'm not certain how to approach estimating it. I think some farms likely just pull adults into breeding programs instead of slaughtering them (at least for crickets), while other farms keep separate breeding colonies (e.g. black soldier flies and mealworms are slaughtered as larvae, so some larvae need to be allowed to grow instead of being killed). My guess is that the lives of animals raised to breed are better than those of animals killed, but I wouldn't put much stock in that. There are some good pictures of BSF breeding facilities and descriptions of the process in Bullock et al., but I don't think the source is authoritative.

Comment by abrahamrowe on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T22:45:46.282Z · EA · GW

One thing that is easy to forget is that we are already dramatically intervening in natural ecosystems without paying attention to the impact on animals. E.g. any city, road, mine, etc. is a pretty massive intervention. Or just using any conventionally grown foods probably impacts tons of insects via pesticides. Or contributing to climate change. At a minimum, ensuring those things are done in a kinder way for animals seems like a goal that anyone could be on board with (assuming it is an effective use of charitable money, etc.).

I also think that most things like what you describe are already broadly done without animal welfare in mind. For example, we could probably come up with less harmful deer population management strategies than hunting, and we've already attempted to wipe out species (e.g. screwworms, and probably mosquitos at some point in the future).

Comment by abrahamrowe on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T22:39:21.837Z · EA · GW

I think there were a few other philosophy papers that were sort of EA aligned I think, but yeah, basically just those 2. So maybe it was the default by default.

Comment by abrahamrowe on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T22:37:48.547Z · EA · GW

Is there an accessible summary anywhere of the research underlying this shift in viewpoint?

I don't think there has been a summary, but that sounds like a good thing to write. But to quickly summarize things that are probably most informing this:

  1. I'm less confident in negative utilitarianism. I was never that confident in it, and I feel much less so now. I don't think this is due to novel research, but just to my views changing upon reflection on my own life. I still broadly hold an asymmetric view of welfare, but am more sympathetic to weighing positive experiences to some degree (or maybe I have just become a moral particularist). I also think that if I am less confident in my ethics (which their changing over time indicates I ought to be), then taking reversible actions that are robust under a variety of moral frameworks and seem plausibly good is a better approach.
  2. I feel a lot less confident that I know how long most animals' subjective experiences last, in part due to research like Jason Schukraft's on the subjective experience of time. I think the best argument that most animal lives are net-negative is something like "most animals die really young, before they accumulate any positive welfare, so the painfulness of their death outweighs basically everything else." This seems less true if their experiences are subjectively longer than they appear. I've also realized that I have a possibly bad intuition that 30 years of good experiences + 10 years of suffering is better than 3 minutes of good experiences and 1 minute of suffering, which partially informs this.
  3. I think learning more about specific animals has made me a lot less confident that we can broadly say things like "r-selectors mostly have bad lives." 

Would you say this is a general shift in opinion in the WAW field as a whole?

When I started working in wild animal welfare, basically no one with a bio/ecology background worked in the space. Now many do. Probably many of those people accurately believe that most things we wrote / argued historically were dramatic oversimplifications (because they definitely were). I'm not sure if opinion is shifting, but there is a lot more actual expertise now, and I'd guess that many of those new experts have more accurate views of animals' lives, which I believe ought to incline one to be at least a bit skeptical of some claims made early in the space.

Comment by abrahamrowe on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T21:51:00.529Z · EA · GW

Animal Charity Evaluators is the 6th; it did some surveying and research work in the space, which I guess counts. My phrasing was ambiguous: there have been 6, and I co-founded 2 (UF and WAI) and worked at another (Rethink Priorities).

Comment by abrahamrowe on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T12:41:07.723Z · EA · GW

I think that Toward Welfare Biology was, until maybe 2016 or so, the default thing people pointed to (along with Brian Tomasik's website), as the introductory text to wild animal welfare. I saw it referenced a lot, especially when I started working in the space.

Comment by abrahamrowe on Why Research into Wild Animal Suffering Concerns me · 2020-10-26T12:37:11.850Z · EA · GW

I co-founded 2 of the 6 organizations that have worked on wild animal welfare with an EA lens, and have worked at another. I've been writing or thinking about these things since around 2014. Here are a handful of thoughts related to this:

  • I think almost none of the people working in the space professionally are full-on negative utilitarians. Probably many are very focused on reducing suffering (myself included), but pretty much everyone really likes animals - that's why they work on making their lives better!
  • In 2018, I helped organize the first wild animal welfare summit for the space. We unanimously agreed that this perspective was an unproductive one, and I don't think any group working in the space today (Wild Animal Initiative, Animal Ethics, Rethink Priorities) holds a view that is this strong. So I think in general, the space has been moving away from anything like what you're discussing.
  • Speaking from personal experience, I was much more sympathetic to this sort of view when I first got involved. Wild animal suffering is really overwhelming, especially if you care about animals. For me, it was extremely sad to learn how horrible life is for many animals (especially those who die young). But the research I've done and read has made me both a lot less sympathetic to a totalizing view of wild animals of this sort (e.g. I think many more wild animals than I previously thought live good lives), and less sympathetic to taking such a radical action. This problem seems really hard at first, so it's easy to point to an intervention that promises conclusive results. But research has generally made me think we are wrong both about how bad many (though definitely not most) animal lives are, and about how tractable these problems are. I think there are much more promising avenues for reducing wild animal suffering available.
  • People on the internet talk about reducing populations as being the project of wild animal welfare. My impression is that most or all of those folks don't actually work on wild animal welfare. And the groups working in the space aren't really engaged in the online conversation, probably in part because of disagreement with this view.
  • I hope that there are no negative utilitarians who hold 0 doubts about their ethics. I guess if I were a full negative utilitarian, or something, I probably wouldn't be 100% confident in that belief. And given the irreversibility of the intervention you describe, if I weren't 100% confident, I'd be really hesitant to do anything like it. Instead, improving welfare is acceptable under a variety of frameworks, including negative utilitarianism, so it seems like we'd probably be inclined to just improve animals' lives.

Overall, I think this concern is pretty unwarranted, though understandable given the online discussion. Everyone I know who works on wild animal welfare cares about animals a lot, and the space has been burdened by these concerns despite them not really referring to views held by folks who lead the space.

Also, one note:

[they will] conclude that the majority of animals on Earth would be better off dead

I think it's pretty important to differentiate between people thinking animals would be better off dead (a view held by no one I know), and thinking that some animals who will live will have better lives if we reduce juvenile mortality via reduced fertility, and through the latter, that we would prevent a lot of very bad, extremely short lives. We already try to non-lethally reduce populations of many wild animals via fertility control (e.g. mosquitos, screwworms, horses, cats). These projects are mainstream (outside of EA), widely accepted as good, and for some of them, done for the explicit benefit of the animals who are impacted. 

Comment by abrahamrowe on EA's abstract moral epistemology · 2020-10-22T13:24:28.458Z · EA · GW

I think it's plausible that some major funders stopped funding some groups (like farm sanctuaries) in favor of ACE top charities, for example, but I doubt that has happened with large numbers of smaller donors. And it's hard to know how much of this EA is responsible for. For example, when GFI was founded, I think a lot of people found it really compelling, independent of it being promising from an EA lens. While it's a fairly EA-aligned organization, in a world without EA something like it probably would have been founded anyway, and because it was compelling, lots of donors might have switched to donating to GFI from whatever they were donating to before. My impression is also that a lot of the funding that has left charities is going into investments in clean / plant-based meat companies. I also expect that would have happened had EA not existed.

Comment by abrahamrowe on EA's abstract moral epistemology · 2020-10-21T13:13:40.702Z · EA · GW

I volunteered but didn't work in the animal advocacy space prior to EA (starting in maybe 2012 or so), but have worked at EA-aligned animal organizations, and been on the board of non-EA aligned (but I think very effective) animal organizations in recent years. Probably someone who worked more in the space prior to ~2014 or 2015 could speak more to what changed in animal advocacy from EA showing up.

The relevant quote:

The animal policy summit I attended in February permitted time for casual conversation among a variety of activists. These included sanctuary managers, directors of non-profits dedicated to ending factory farming, vegan educators, directors of veganism-oriented, anti-racist public health and food access programs, etc. It also included some academics. As some of the activists were talking, they got on to the topic of how charitable giving on EA’s principles had either deprived them of significant funding, or, through the threat of the loss of funding, pushed them to pursue programs at variance with their missions. There was general agreement that EA was having a damaging influence on animal advocacy.

I think that EA has definitely had some negative impact on animal advocacy, but overall has been very good for the space.

The Good

There is definitely way more funding in the space due to EA, and not less - OpenPhil makes up a massive percentage of overall animal welfare donations, and gives a large amount to groups who aren't purely dedicated to corporate welfare campaigns (though the OpenPhil gift itself might be restricted to welfare campaigns). Mercy For Animals, Animal Equality, etc., receive large gifts from OpenPhil and do vegan education / work to end factory farming, and not just reform it. ACE has probably brought in other EAs who would not have otherwise donated to animal welfare work (I'd guess at least a few million dollars a year). 

I think it is plausible that over the last few years, EA-aligned donors have stopped donating to some non-EA aligned organizations. Animal advocacy charities are generally very top-heavy — a huge percentage of donations are coming from a few people. If a couple of those people change where they are donating, it might significantly impact a charity, especially a smaller one. But, overall I'd guess that this isn't for purely EA reasons — lots of large donors in the space are investing in plant-based meat companies, for example, and might have chosen to do that independently of EA.

Also, EA has really opened up what I believe are the most promising avenues for future animal advocacy - addressing wild animal welfare (in a species-neutral way) and addressing invertebrate welfare. I think both areas would basically be impossible to fund in the short-term if EA funding wasn't available.

The Bad

I think the compelling critique of how EA has negatively impacted animal advocacy is something similar to the institutional critique the author presents. For example, at least early on, the focus on corporate campaigns meant that activities like community building were relatively neglected. I feel uncertain about the long-term impact of this, but I'd wager that most EAA organizations in the US, for example, have a lot more trouble getting volunteers to events than they did maybe 7-10 years ago or so. I think it's plausible that there are similar programmatic shifts away from activities that didn't have obvious impact that will harm the effectiveness of organizations down the line. Also, as the author says, this sort of critique could be viewed as an internal critique of activities, as opposed to a critique of EA as a whole.

There are probably some highly effective animal advocacy organizations totally neglected by EA (at least compared to ACE top charities). I also think that a GiveWell-style apples-to-apples comparison of different charities doing a similar and related activity doesn't necessarily make sense for, say, organizations doing corporate campaigns, since the organizations are highly coordinated. But again, this seems like an internal critique.

I see ending factory farming / vegan advocacy as likely deeply aligned with EA. I think that the animal advocacy space really struggled to make progress on these issues over the past few decades, but has made more progress in the last 5 years. I don't know if this is due to plant-based meats becoming more popular, EA showing up, or something else, but broadly, we're doing better now than we were before, I think, at helping animals.

The "remark on institutional culture" is a pretty good critique of EA, though I don't know what to conclude from it. But, if the essay is focused on EAA specifically, I think that comment is a lot less relevant, as I'd guess as a whole, EAA is much more open to social justice / non-EA ethics, etc. than some other communities in EA.

Overall, most of this critique just seems to be that the author disagrees with many people in EA about ethics and metaethics.

Comment by abrahamrowe on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-16T14:32:20.355Z · EA · GW

I really appreciated this post! Thanks for writing it. I also really appreciated the original post and am a bit bummed it got buried. I also want to note that I find it odd that that post got downvoted (possibly for being explicitly partisan?) vs posts like this, which don't explicitly claim to be partisan / engaging in politics but I think are actually extremely political.

One thought, slightly unrelated to the question of whether or not there are good EA grounds for supporting / opposing political candidates (and I think it's highly likely that there are):

Effective altruism has long had a culture of shying away from explicit engagement in partisan politics

I think one really useful and accurate idea from the social justice community is the idea that you can't be neutral on many political issues. This seems like it ought to be even more compelling from a consequentialist perspective as well, as inaction on certain political opportunities (not exclusively, but definitely including removing Trump from office / Joe Biden winning the 2020 election in the US) might contribute directly to the worse outcome. The status quo is already a manifestation of political positions, so if you're not engaging in changing the status quo, you are taking whatever political positions built it.

For example, I live in Pennsylvania, and theoretically my vote might matter in the US presidential election this year. I can vote for Joe Biden, not vote (or vote for a third party), or vote for Donald Trump. I think it seems clear that the downside risk from Trump winning is very high compared to Joe Biden, and given that Trump will win if Joe Biden doesn't, there is almost as much risk in not voting. I think that, on (some kind of rough near-termist) consequentialist grounds, I pretty clearly should vote for Joe Biden, and probably should try to get as many people as possible to do the same.

I think there are probably lots of good reasons to think that dollars directed by the EA community shouldn't go to political candidates as a general rule of thumb (though there are probably really good giving opportunities at times). But broadly, as a community interested in ethics, it seems like we are inherently taking fairly strong political positions while not really being willing to discuss them or make them explicit.

This was a bit of a ramble because my thoughts aren't well-formed, but I think it is pretty likely that attempting to be "neutral" on political issues is close to being as bad as taking the political position that will lead to the worse outcome, or something along those lines.

Comment by abrahamrowe on EricHerboso's Shortform · 2020-09-09T02:46:59.049Z · EA · GW

Thanks for sharing your thoughts! I guess part of the reason I feel more strongly that this kind of comment ought not to be upvoted is that EricHerboso seemed to bring up the Facebook thread not to open a debate on its content, but to point out that the behavior of some of the Facebook commenters harmed EAs or EA-adjacent organizations by taking an emotional toll on people, and that this kind of behavior is explicitly costing EA. That seems like a really important thing to discuss - regardless of what you think of the content of the thread, the content EricHerboso refers to in it negatively impacted the movement.

Dale's comment feels unnecessarily trollish, but also tries to turn the thread into a conversation about what I see as an unrelated topic (the rules of conduct in a random animal rights Facebook group). It vaguely tries to tie back to the post, but mostly this seems like a weak disguise for trolling EricHerboso.

Comment by abrahamrowe on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-09T02:31:56.248Z · EA · GW

Thanks for elaborating!

I think that it seems like accusations of EA associations with white supremacy of various sorts come up enough to be pretty concerning. 

I also think the claims would be equally concerning if JoshYou had said "white supremacists" or "really racist people" instead of "white nationalists" in the original post, so I feel uncertain that Buck walking back the original post actually lessens the degree we ought to be concerned?

I also have other issues with the rest of the comment (namely being constantly worried about communists or nazis hiding everywhere, and generally bringing up nazi comparisons in these discussions, tends to reliably derail things and make it harder to discuss these things well, since there are few conversational moves as mindkilling as accusing the other side to be nazis or communists. It's not that there are never nazis or communists, but if you want to have a good conversation, it's better to avoid nazi or communist comparisons until you really have no other choice, or you can really really commit to handling the topic in an open-minded way.)

I didn't really see the Nazi comparisons (I guess saying white nationalist is sort of one, but I personally associate white nationalism as a phrase much more with individuals in the US than Nazis, though that may be biased by being American).

I guess broadly a trend I feel like I've seen lately is occasionally people writing about witnessing racism in the EA community, and having what seem like really genuine concerns, and then those basically not being discussed (at least on the EA Forum) or being framed as shutting down conversation.

Comment by abrahamrowe on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-09T00:13:32.954Z · EA · GW

I just upvoted this comment as I strongly agree with it, but also, it had -1 karma with 2 votes on it when I did so. I think it would be extremely helpful for folks who disagree with this, or otherwise want to downvote it, to talk about why they disagree or downvoted it.

Comment by abrahamrowe on The scale of direct human impact on invertebrates · 2020-09-07T18:52:48.582Z · EA · GW

Could you say more about the epistemic status of agricultural pesticides as the largest item in this category, e.g. what chance that in 3 years you would say another item (maybe missing from this list) is larger?

(Probabilities are ballpark guesses, not rigorous)

Just in terms of insects impacted, because trying to estimate nematodes or other microscopic animals gets really tricky:

Today: >99% likely agricultural pesticides are the largest direct cause of insect mortality

3 years: >98% likely agricultural pesticides are the largest direct cause of insect mortality

20 years: >95% likely agricultural pesticides are the largest direct cause of insect mortality

The one possible category I could imagine overtaking agricultural pesticides is insects raised for animal feed. I think it is fairly unlikely farming insects for human food will grow substantially, but much more likely that insects raised for poultry feed will grow in number a lot, and even more likely that insects raised for fish feed will grow a lot. There is a lot of venture capital going into raising insects for animal feed right now, so it seems at least somewhat likely some of those projects will take off (though there are cost hurdles they haven't cleared yet compared to other animal feeds). Replacing fishmeal with insects seems even more likely because fishmeal is already a lot more expensive than grain feed.

Replacing ~40% of fishmeal with black soldier flies would put insect deaths from farming at the lower end of my current estimate for the scale of impact from agricultural pesticides. So I guess if estimates of agricultural pesticide impact are too high for an unknown reason (maybe insect populations collapse in the near future or something), there is a definite possibility, but not a big one, that insect farming could overtake pesticides in terms of deaths caused.

And what ratio do you see between agricultural pesticides and other issues you excluded from the category (like climate change and partially naturogenic outcomes)?

I am very uncertain about this. Brian Tomasik estimates the global terrestrial arthropod population to be 10^17 to 10^19 individuals, which would be 10 to 100,000 times the animals impacted by pesticides. Plausibly basically all of them could be impacted by climate change, but it's hard to know whether or not the sign of those impacts will be negative. I imagine that most of the impact from climate change, for example, would come from populations shifting - e.g. there suddenly are far fewer animals with survival strategy X, and a lot more insects with survival strategy Y, and that change leads to a lot more positive or negative welfare. That being said, I think we possibly should expect ecosystems changing rapidly to be on average bad for the animals who live through that change or are born after it, at least in the short term.
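As a sanity check on those figures (a hypothetical back-of-envelope sketch using only the ranges quoted above, not anything from Tomasik's original analysis), the implied number of pesticide-impacted animals can be backed out from the population estimate and the stated ratio:

```python
# If terrestrial arthropods number 10^17 to 10^19 individuals, and that
# is 10 to 100,000 times the animals impacted by pesticides, what range
# of pesticide-impacted animals does that imply?

arthropods_low, arthropods_high = 1e17, 1e19  # Tomasik's population estimate
ratio_low, ratio_high = 10, 100_000           # multiples quoted above

# Widest consistent bounds: smallest population over largest ratio,
# and largest population over smallest ratio.
impacted_low = arthropods_low / ratio_high    # ~10^12
impacted_high = arthropods_high / ratio_low   # ~10^18

print(f"Implied pesticide-impacted animals: {impacted_low:.0e} to {impacted_high:.0e}")
```

This is just the arithmetic implied by the quoted ranges; the true uncertainty is of course much wider than any such point calculation suggests.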

I think one other area I excluded that could be huge is nematodes and other microscopic invertebrates. There are obviously questions that ought to be raised about their likelihood of having valenced experiences, but as of writing I can purchase 250 million nematodes for biological control on Amazon for $135 USD. Nematodes are widely used in agriculture, and some agricultural pesticides possibly impact nematodes, implying that they'd possibly kill wild nematodes. So it seems like there is some possibility that nematodes impacted by agricultural pesticides outweigh insects impacted by them.

Comment by abrahamrowe on EricHerboso's Shortform · 2020-09-04T14:59:17.888Z · EA · GW

I downvoted this because it seems pretty clear that the author was referencing other aspects of the Facebook thread, and this felt belittling instead of engaging with the author's overall post.

Comment by abrahamrowe on Insects raised for food and feed — global scale, practices, and policy · 2020-07-22T18:18:40.150Z · EA · GW

Thanks for the comment,

I think I agree with everything you're saying here, and that makes sense on how conversion efficiency would work for insectmeal vs animal feed.

A few points:

  • It is definitely unclear if insectmeal will be cost-competitive with either fishmeal or grain feed. I think insectmeal as an alternative to fishmeal has a lot more potential for a variety of reasons - I saw a pitch deck to an investor where a company said it was targeting 1 to 1.5 Euro / kg dry weight for black soldier fly larvae fed on animal waste once they scaled up (though it was a pitch deck, so probably optimistic). If producers can actually hit that target, then it seems plausible some fishmeal could be replaced.
  • I think there is some reason to believe that fisheries, etc., would be actually less willing to pay for insectmeal than fishmeal, since it is new, etc., so the price could need to be even lower than that of fishmeal for insectmeal to take off.
  • There is a large amount of venture capital going into large scale insect farms right now. It's possible that could end up subsidizing the cost of insectmeal in the short-term, and drive it down significantly, only for it later to increase if this source of funding goes away.
Comment by abrahamrowe on Concern, and hope · 2020-07-16T19:00:27.409Z · EA · GW

Thanks for making this post Will -

I'll admit that since the SSC stuff happened, I've been feeling a lot further from EA (not necessarily the core EA ideas, but associating with the community or labeling myself as an EA), and I felt genuinely a bit scared learning through the SSC stuff about ways in which the EA community overlaps with alt-right communities and ideas, etc. I don't know what to make of all of it, as everyone I regularly work with in EA is a wonderful person who cares deeply about making the world better. But I feel wary and nervous about all this, and I've also been considering leaving the forum / FB groups just to have some space to process what my relationship with EA ought to be external to my work.

I see a ton of overlap between EA in concept and social justice. A lot of the dialogue in the social justice community focuses on people reflecting on their biases, and working to shift out of a lens on the world that introduces some kinds of biases. And, broadly folks working on social justice issues are trying to make the world better. This all feels very aligned with EA approaches, even if the social justice community is working on different issues, and are focused on different kinds of biases.

I've heard (though don't know much about it) that to some extent EA outreach organizations stopped focusing on growth and focused more on quality, in some sense, a few years ago. I wonder if doing that has locked in whatever norms were present in the community prior to that, and that's ended up unintentionally resulting in a fair amount of animosity toward ideas or approaches to argument that are outside the community's standards of acceptability? I generally think that one of the best ways to improve this issue is to invest heavily in broadening the community, and part of that might require work to make the community more welcoming (and not actively threatening) to people who might not feel welcome here right now.

Comment by abrahamrowe on X-risks to all life v. to humans · 2020-06-03T20:41:48.453Z · EA · GW

Nope - fixed. Thanks for pointing that out.

Comment by abrahamrowe on X-risks to all life v. to humans · 2020-06-03T20:01:30.221Z · EA · GW

Thanks for sharing this!

I happen to have made a not-very-good model a month or so ago to try to get a sense of how much the possibility of future species that care about x-risks impacts x-risk today. It's here, and it has a bunch of issues (like assuming that a new species would take as long to evolve from now as humans took to evolve since the first neuron, assuming that all of Ord's x-risks don't reduce the possibility of future moral agents evolving, etc.), and possibly doesn't even get at the important things mentioned in this post.

But based on the relatively bad assumptions in it, it spat out that if we generally expect moral agents to evolve who reach Ord's 16% 100 year x-risk every 500 million years or so (assuming an existential event happens), and that most of the value of the future is beyond the next 0.8 to 1.2B years, then we ought to adjust Ord's figure down to 9.8% to 12%.

I don't think either the figure / approach in that should be taken at all seriously though, as I spent only a couple minutes on it and didn't think at all about better ways to try to do this - just writing this explanation of it has shown me a lot of ways in which it is bad. It just seemed relevant to this post and I wasn't going to do anything else with it :).

Comment by abrahamrowe on Wild Animal Welfare Meetup (Spring 2020) · 2020-04-26T17:31:20.261Z · EA · GW

Yeah, it's interesting to see that across the board. My sense is that wild animal welfare work (and farmed animal work), are very much funding constrained. Relevant to this - Open Philanthropy doesn't currently fund EA wild animal welfare work.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-04-14T13:37:48.885Z · EA · GW

Thanks for this. I think for me the major lessons from comments / conversations here is that many longtermists have much stronger beliefs in the possibility of future digital minds than I thought, and I definitely see how that belief could lead one to think that future digital minds are of overwhelming importance. However, I do think that for utilitarian longtermists, animal considerations might dominate in possible futures where digital minds don't happen or spread massively, so to some extent one's credence in my argument / concern for future animals ought to be defined by how much you believe in or disbelieve in the possibility and importance of future digital minds.

As someone who is not particularly familiar with longtermist literature, outside a pretty light review done for this piece, and a general sense of this topic from having spent time in the EA community, I'd say I did not really have the impression that the longtermist community was concerned with future digital minds (outside EA Foundation, etc). Though that just may have been bad luck.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-04-14T13:26:54.410Z · EA · GW

Ah - you're totally right - that was an oversight. I'm working on a followup to this piece focusing more on what animal focused longtermism looks like, and talk about moral circle expansion, so I don't know how I dropped it here :).

Comment by abrahamrowe on Why I'm Not Vegan · 2020-04-10T19:28:58.265Z · EA · GW

I appreciate your thoughtful response to my post, and think I unintentionally came across harshly. I think you and I likely disagree on how much to weight the moral worth of animals, and what that entails about what we ought to do. But my discomfort with this post is (I hope, though of course I have subconscious biases) specifically with the non-clarified statements about comparative moral worth between humans and other species. I made my comment to clarify that the reason I voted this down is that I think it is a very bad community standard to blanket accept statements of the sort "I think that these folk X are worth less than these other folk Y" (not a direct quote from you obviously) without stating precisely why one believes that or justifying that claim. That genuinely feels like a dangerous precedent to have, and without context, ought to be viewed with a lot of skepticism. Likewise, if I made an argument where I assumed but did not defend the claim that people different than me are worth 1/10th people like me, you likely ought to downvote it, regardless of the value of the model I might be presenting for thinking about an issue.

One small side note - I feel confused about why the surveys of how the general public view animals are being cited as evidence in favor of casual estimations of animals' moral worth in these discussions. Most members of the public, myself included, aren't experts in either moral philosophy or animal sentience. And, we also know that most members of the public don't view veganism as worthwhile to do. Using this data as evidence that animals have less moral worth strikes me as doing something analogous to saying "most people who care more about their families than others, when surveyed, seem to believe that people outside their families are worth less morally. On those grounds, I ought to think that people outside my family are worth less morally". This kind of survey provides information on what people think about animals, but in no way is evidence of the moral status of animals. But, this might be the moral realist in me, and/or an inclination toward believing that moral value is something individuals have, and not something assigned to them by others :).

Comment by abrahamrowe on Why I'm Not Vegan · 2020-04-10T13:17:12.945Z · EA · GW

While you're right that the Cambridge Declaration on Consciousness was signed by few people, they were mostly very prominent and influential researchers, which was the point of the thing. But yeah, it is weak evidence on its own, I agree.

I don't know of specific survey data, but based on both the declaration and its continued influence, and the wide variety of opinions, literature reviews, etc supporting the position, my impression is that there is somewhat of a consensus, though there are occasional outliers. I believe my "to some extent, consensus" accurately captures the state of the field. Though in either case it is beside the point since Jeff assumed them to be sentient for the post. Thanks for sharing! :)

Comment by abrahamrowe on Why I'm Not Vegan · 2020-04-09T18:18:44.190Z · EA · GW

I agree that I was assuming a certain moral framework in my post - I've updated it to refer explicitly to utilitarianism of some kind, since that's a fairly common view in EA.

Thanks for the moral trade idea!

Comment by abrahamrowe on Why I'm Not Vegan · 2020-04-09T16:46:49.880Z · EA · GW

Yeah, that's fair - I was not charitable in my original comment RE whether or not there is a rationale behind those estimates, when perhaps I ought to assume there is one. But I guess part of my point is that because this argument entirely hinges on a rationale, not providing it just makes this seem very sketchy.

While I don't think human experiences and animal experiences are comparable in this direct a way, as an illustration imagine me making a post that said, "I think humans in other countries are worth 1/10 of those in my own country, therefore it seems like more of a priority to help those in my own country", and providing no reasoning or clarification for that discount. You would be justified in being very skeptical of the argument I was making, and to view my argument as low quality, even though there might be a variety of other good reasons to prioritize helping those in my own country. I don't think that kind of statement is high enough quality on its own to be entertained or to support an argument. But at its core, that's the argument in this post. I'd be interested in talking about the reasons behind those discounts, but without them, there just isn't even a way to engage with this argument that I think is productive.

For the record, I generally don't think it is a major wrong to not be vegan, and wouldn't downvote / be this critical of someone voicing something along the lines of "I really like how meat tastes, so am not vegan," etc. I am more critical here because it is an attempt to make a moral justification of not eating a vegan diet, and I think that argument not only fails, but also doesn't attempt to defend or explain core premises and assumptions, especially when aspects of those premises seem contrary to some degree of scientific evidence / consensus, which community norms strike me as broadly expecting to be taken seriously.

That being said, I think it's fully possible there are good justifications for having such large discounts on the moral worth of animals, and those discounts are worth discussing. But that was glossed over here, which is why I am responding more critically.

Comment by abrahamrowe on Why I'm Not Vegan · 2020-04-09T14:40:05.481Z · EA · GW

I downvoted this, and would feel strange not talking about why:

I think there are lots of good reasons, moral or otherwise, to not be vegan - maybe you can't afford vegan food, or otherwise cannot access it. Maybe you've never heard of veganism. Maybe there are good reasons to think that the animal products you're eating aren't causing additional harm. Maybe you just like animal products a lot, and want to eat some, even though you know it is bad.

But I don't think this argument is a particularly good one, and doesn't engage with questions of animal ethics well:

1. "I think there's a very large chance they don't matter at all, and that there's just no one inside to suffer" - this strikes me (for birds and mammals at least) as a statement in direct conflict with a large body of scientific evidence, and to some extent, consensus views among neuroscientists (e.g. the Cambridge Declaration on Consciousness). Though to be fair, you are assuming they do feel pain in this post.

2. Your weights for animals lives seem fairly arbitrary. I agree that if those were good weights to use, maybe the moral trade-offs would be justified, but if you're just saying, with little basis, that a pig has 1/100 human moral worth, I don't know how to evaluate it. It isn't an argument. It's just an arbitrary discount to make your actions feel justified from a utilitarian standpoint.

I also think these moral worth statements need more clarification - do you mean that while I (a human) feel things on the scale of -1000 to 1000, a pig only feels things on the scale of -10 to 10? Or do you mean a pig is somehow worth less intrinsically, even though it feels similar amounts of pain as me? The first statement I am skeptical of because of a lack of evidence for it, and the second seems just unjustifiably biased against pigs for no particular reason.

I generally think factory farms are pretty bad, and maybe as bad as torture. Removing cows from the equation, eating animal products requires 6.125 beings to be tortured per year per American (by the numbers you shared). I personally don't think that is a worthwhile thing to cause. Randomly assigning small moral weights to those animals to feel justified seems unscientific and odd.

I think it seems fairly clear that there is a strong case to be made, if you're someone who has the means and access to vegan food and are a utilitarian of various sorts, to eat at least a mostly vegan diet. No one has to be perfectly moral all the time, and I think it's probably okay (on average) to often not be perfectly moral. But presenting arbitrarily assigned discounts on lives until your actions are morally justified is a weak justification.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-03-31T18:18:09.914Z · EA · GW

Thanks for linking!

Yeah, that's interesting. Clearly there is major decline in some populations right now, especially large vertebrates and birds. I guess the relevant questions are: will those last a long time (at least a few hundred years), and: is there complementary growth in other populations (invertebrates)? Especially if species that are succeeding are smaller on average than the ones declining, as you might expect there to be even more animals then. Cephalopod populations, for example, have increased since the 50s.

Comment by abrahamrowe on Estimates of global captive vertebrate numbers · 2020-02-18T18:55:37.426Z · EA · GW

This is really awesome and helpful! Thanks Saulius!

One group that is probably pretty small but isn't listed here - animals in wildlife rehabilitation clinics: this page says 8k to 9k animals (I'm guessing mostly vertebrates?) enter clinics in Minnesota every year. If that scales by land area for the contiguous United States, that would be 270k - 305k animals per year in the US, so maybe a few million globally? But that's just a guess from the first good source I saw.
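To make that scaling explicit (a rough sketch; the area figures below are my own approximate additions for illustration, not from the original source):

```python
# Scale Minnesota's wildlife-rehab intake to the contiguous US by area.
# Area figures (square miles) are approximate and added for illustration.
MN_AREA = 86_936        # Minnesota, total area
CONUS_AREA = 2_959_064  # contiguous US, land area

mn_intake_low, mn_intake_high = 8_000, 9_000  # animals/year entering MN clinics
ratio = CONUS_AREA / MN_AREA                  # roughly 34x

us_low = mn_intake_low * ratio
us_high = mn_intake_high * ratio
print(f"US estimate: {us_low:,.0f} to {us_high:,.0f} animals per year")
```

That lands in the same ballpark as the 270k - 305k range above, though it assumes intake scales linearly with area, which it almost certainly doesn't (clinic density likely tracks human population more than land area).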

On pet shelters - I used to work at one, and every month, we reported our current animal population (along with a lot of other stats) to this organization - I think their data could probably be used to get a very accurate estimate of animals currently in shelters in the US.

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-10T01:04:47.677Z · EA · GW

Yeah I think that is right that it is a conservative scenario - my point was more, the proposed future scenarios don't come close to imagining as much welfare / mind-stuff as might exist right now.

Hmm, I think my point might be something slightly different - more to pose a challenge to explore how taking animal welfare seriously might change the outcomes of conclusions about the long term future. Right now, there seems to be almost no consideration. I guess I think it is likely that many longtermists think animals matter morally already (given the popularity of such a view in EA). But I take your point that for general longtermist outreach, this might be a less appealing discussion topic.

Thanks for the thoughts Brian!

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-10T00:52:36.926Z · EA · GW

Yeah, the idea of looking into longtermism for nonutilitarians is interesting to me. Thanks for the suggestion!

I think regardless, this helped clarify a lot of things for me about particular beliefs longtermists might hold (to various degrees). Thanks!

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-10T00:50:07.324Z · EA · GW

That makes sense!

Comment by abrahamrowe on EA Animal Welfare Fund is looking for applications until the 6th of February · 2020-02-06T21:05:59.542Z · EA · GW


Comment by abrahamrowe on EA Animal Welfare Fund is looking for applications until the 6th of February · 2020-02-05T22:21:23.035Z · EA · GW

Hey Karolina,

Is the deadline at a specific time on February 6th, or before the 6th (i.e. EOD the 5th)? The wording is just slightly vague.

Thanks for all you do!

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-05T15:45:42.309Z · EA · GW

Thanks for the feedback - that's a good rule of thumb!

Comment by abrahamrowe on Should Longtermists Mostly Think About Animals? · 2020-02-05T15:43:40.974Z · EA · GW

Thanks for laying out this response! It was really interesting, and probably a good reason not to take animals as seriously as I suggest you ought to, if you hold these beliefs.

I think something interesting that this and the other objections presented to my piece have brought out is that, to avoid focusing exclusively on animals in longtermist projects, you have to have some level of faith in these science-fiction scenarios happening. I don't necessarily think that is a bad thing, but it isn't something that's been made explicit in past discussions of longtermism (at least, in the academic literature), and perhaps ought to be?

A few comments on your two arguments:

> Claim: Our descendants may wish to optimize for positive moral goods.
> I think this is a precondition for EAs and do-gooders in general "winning", so I almost treat the possibility of this as a tautology.

This isn't usually assumed in the longtermist literature. It seems more like the argument is made on the basis of future human lives being net-positive, and it therefore being good that there will be many of them. I think the expected value of your argument (A) hinges on this claim, so accepting it as a tautology, or something similar, seems actually really risky. If you think it is basically 100% likely to be true, of course your conclusion might be true. But if you don't, it seems plausible that, as you mention, the priority ought to be on s-risks.

In general, a way to summarize this argument, and others given here, could be something like: "there is a non-zero chance that we can make loads and loads of digital welfare in the future (more than exists now), so we should focus on reducing existential risk in order to ensure that future can happen". This raises a question - when will that claim not be true, and the argument you're making not be relevant? It seems plausible that this kind of argument is a justification to work on existential risk reduction until basically the end of the universe (unless we somehow solve it with 100% certainty, etc.), because we might always assume future people will be better at producing welfare than we are.

I assume people have discussed the above, and I'm not well read in the area, but it strikes me as odd that the primary justification for working on the future in these sci-fi scenarios is just a claim that can always be made, instead of working directly on creating lives with good welfare (but maybe this is a consideration with longtermism in general, and not just this argument).

I guess part of the issue here is that you could have an incredibly tiny credence in a very specific set of things being true (the present being at the hinge of history, various things about future sci-fi scenarios), and having those credences would always justify deferring action.

I'm not totally sure what to make of this, but I do think it gives me pause. That said, I admit I haven't really thought about any of the above much, and don't read much in this area.

Thanks again for the response!