EA's abstract moral epistemology

post by Stijn · 2020-10-20T14:11:08.100Z · EA · GW · 14 comments

This is a question post.

Does anyone understand this criticism? https://www.oxfordpublicphilosophy.com/blog/letter-to-a-young-philosopher-dont-become-an-effective-altruiststrong-strong? To me it sounds far too abstract, although ironically the article itself criticizes the 'abstract' moral epistemology of EA. Curious about your thoughts.

Answers

answer by Matt_Lerner · 2020-10-20T15:00:42.382Z · EA(p) · GW(p)

I also found this (ironically) abstract. There are more than enough philosophers on this board to translate this for us, but I think it might be useful to give it a shot and let somebody smarter correct the misinterpretations.

The author suggests that the "radical" part of EA is the idea that we are just as obligated to help a child drowning in a faraway pond as in a nearby one:

The morally radical suggestion is that our ability to act so as to produce value anywhere places the same moral demands on us as does our ability to produce value in our immediate practical circumstances

She notes that what she sees as the EA moral view excludes "virtue-oriented" or subjective moral positions, and lists several views (e.g. "Kantian constructivist") that are restricted if one takes what she sees as the EA moral view. She maintains that such views, which (apparently) have a long history at Oxford, have a lot to offer in the way of critique of EA.

Institutional critique

In a nutshell, EA focuses too much on what it can measure, and what it can measure are incrementalist approaches that ignore the "structural, political roots of global misery." The author says that the EA responses to this criticism (that even efforts at systemic change can be evaluated and judged effective) are fair. She says that these responses constitute a claim that the institutional critique is a criticism of how closely EA hews to its tenets, rather than of the tenets themselves. She disagrees with this claim.

Philosophical critique

This critique holds that EAs basically misunderstand what morality is-- that the point of view of the universe is not really possible. The author argues that attempting to take this perspective actively "deprives us of the very resources we need to recognise what matters morally"-- in other words, taking the abstract view eliminates moral information from our reasoning.

The author lists some of the features of the worldview underpinning the philosophical critique. Acting rightly includes:

acting in ways that are reflective of virtues such as benevolence, which aims at the well-being of others

 

acting, when appropriate, in ways reflective of the broad virtue of justice, which aims at an end—giving people what they are owed—that can conflict with the end of benevolence

She concludes:

In a case in which it is not right to improve others’ well-being, it makes no sense to say that we produce a worse result. To say this would be to pervert our grasp of the matter by importing into it an alien conception of morality ... There is here simply no room for EA-style talk of “most good.”

So in this view there are situations in which morality is more expansive than the improvement of others' well-being, and taking the abstract view eliminates these possibilities.

The philosophical-institutional critique

The author combines the philosophical and institutional critiques. The crux of this view seems to be that large-scale social problems have an ethical valence, and that it's basically impossible to understand or begin to rectify them if you take the abstract (god's eye) view, which eliminates some of this useful information:

Social phenomena are taken to be irreducibly ethical and such that we require particular modes of affective response to see them clearly ... Against this backdrop, EA’s abstract epistemological stance seems to veer toward removing it entirely from the business of social understanding.

This critique maintains that it's the methodological tools of EA ("economic modes of reasoning") that block understanding, and articulates part of the worldview behind this critique:

Underlying this charge is a very particular diagnosis of our social condition. The thought is that the great social malaise of our time is the circumstance, sometimes taken as the mark of neoliberalism, that economic modes of reasoning have overreached so that things once rightly valued in a manner immune to the logic of exchange have been instrumentalised.

In other words, the overreach of economic thinking into moral philosophy is a kind of contamination that blinds EA to important moral concerns.

Conclusion

Finally, the author contends that EA's framework constrains "available moral and political outlooks," and ties this to the lack of diversity within the movement. By excluding more subjective strains of moral theory, EA excludes the individuals who "find in these traditions the things they most need to say." In order for EA to make room for these individuals, it would need to expand its view of morality.

answer by kbog · 2020-10-22T10:53:20.473Z · EA(p) · GW(p)

The idea that she and some other nonconsequentialist philosophers have is that if you care less about faraway people's preferences and welfare, and care more about stuff like moral intuitions, "critical race theory" and "Marxian social theory" (her words), then it's less abstract. But as you can see here, they're still doing complicated ivory tower philosophy that ordinary people do not pick up. So it's a rather particular definition of the term 'abstract'. 

Let's be clear: you do not have to have abstract moral epistemology to be an EA. You can ignore theoretical utilitarianism, and ignore all the abstract moral epistemology in that letter, and just commit yourself to making the world better through a basic common-sense understanding of effectiveness and the collective good, and that can be EA. If anyone's going to do philosophical gatekeeping for who can or can't count as an EA, it'll be EAs, not a philosopher who doesn't even understand the movement.

answer by FCCC · 2020-10-22T06:18:37.099Z · EA(p) · GW(p)

My idea of EA's essential beliefs is:

  • Some possible timelines are much better than others
  • What "feels" like the best action often won't result in anything close to the best possible timeline
  • In such situations, it's better to disregard our feelings and go with the actions that get us closer to the best timeline.

This doesn't commit you to a particular moral philosophy. You can rank timelines by whatever aspects you want: your moral rule can tell you to consider only your own actions and disregard their effects on other people's behaviour. I could consider such a person to be an effective altruist, even though they'd be a non-consequentialist. While I think it's fair to say that, after the above beliefs, consequentialism is fairly core to EA, I think the whole EA community could switch away from consequentialism without having to rebrand itself.

The critique targets effective altruists’ tendency to focus on single actions and their proximate consequences and, more specifically, to focus on simple interventions that reduce suffering in the short term.

But she also says EA has a "god’s eye moral epistemology". This seems contradictory. Even if we suppose that most EAs focus on proximate consequences, that's not a fundamental failing of the philosophy; it's a failed application of it. If many fail to accurately implement the philosophy, that doesn't imply the philosophy is bad[1]: there's a difference between a "criterion of right" and a "decision procedure". Many EAs are longtermists who essentially use entire timelines as the unit of moral analysis. That is clearly not a focus on "proximate consequences"; such a focus is more the domain of non-consequentialists (e.g. "Are my actions directly harming anyone?").

The article's an incoherent mess, even ignoring the Communist nonsense at the end.


  1. This is in contrast with a policy being bad because no one can implement it with the desired consequences. ↩︎

answer by Stijn · 2020-10-23T08:17:51.385Z · EA(p) · GW(p)

Thanks for the answers, really appreciate it

answer by AlasdairGives · 2020-10-20T14:53:07.688Z · EA(p) · GW(p)

She sums up her critique in three points:

The institutional critique alleges that EA disregards the kinds of systematic actions needed to effect social change.

The philosophical critique alleges that EA’s god’s eye moral epistemology wrongly restricts its view of what values are like.

The core of the composite critique is the idea that social phenomena are ontologically distinctive and that distinctive methods are required to bring them into focus.

I don't think there is any merit to these claims, but unpicking the second and third critiques depends on unpicking a huge amount of nonsense, so I'll leave it for someone else. In any case, I think those summaries in her own words capture the core claims.

14 comments

Comments sorted by top scores.

comment by Larks · 2020-10-20T16:19:42.443Z · EA(p) · GW(p)

Yeah, despite having studied philosophy, I also found this a little impenetrable. It keeps saying things like,

values are simultaneously woven into the fabric of reality and such that we require particular sensitivities to recognise them

and that these views came from some women philosophers at Oxford and Durham, but never really explaining what they mean.

To the extent I felt I understood it, this was only by pattern-matching to the usual criticisms of EA and utilitarianism, like 'too impersonal' and 'not left wing enough'. But this means I wasn't able to get much new from it.

comment by Linch · 2020-10-21T08:50:46.630Z · EA(p) · GW(p)

If I'm reading the intro correctly, it seems like while the title and framing of the object-level arguments are about abstract philosophical/institutional issues with effective altruism writ large, the actual impetus of the critique was that they did not approve of actions EAs do in the animal advocacy space.

Thus, I think a response from someone working in the (effective) animal advocacy space would be the most appropriate, to understand which of these critiques are pertinent to EAA vs. EA writ large, or alternatively useless in both scenarios.

Replies from: abrahamrowe
comment by abrahamrowe · 2020-10-21T13:13:40.702Z · EA(p) · GW(p)

I volunteered but didn't work in the animal advocacy space prior to EA (starting in maybe 2012 or so), but have worked at EA-aligned animal organizations, and been on the board of non-EA aligned (but I think very effective) animal organizations in recent years. Probably someone who worked more in the space prior to ~2014 or 2015 could speak more to what changed in animal advocacy from EA showing up.

The relevant quote:

The animal policy summit I attended in February permitted time for casual conversation among a variety of activists. These included sanctuary managers, directors of non-profits dedicated to ending factory farming, vegan educators, directors of veganism-oriented, anti-racist public health and food access programs, etc. It also included some academics. As some of the activists were talking, they got on to the topic of how charitable giving on EA’s principles had either deprived them of significant funding, or, through the threat of the loss of funding, pushed them to pursue programs at variance with their missions. There was general agreement that EA was having a damaging influence on animal advocacy.

I think that EA has definitely had some negative impact on animal advocacy, but overall has been very good for the space.

The Good

There is definitely way more funding in the space due to EA, and not less - OpenPhil makes up a massive percentage of overall animal welfare donations, and gives a large amount to groups who aren't purely dedicated to corporate welfare campaigns (though the OpenPhil gift itself might be restricted to welfare campaigns). Mercy For Animals, Animal Equality, etc., receive large gifts from OpenPhil and do vegan education / work to end factory farming, and not just reform it. ACE has probably brought in other EAs who would not have otherwise donated to animal welfare work (I'd guess at least a few million dollars a year). 

I think it is plausible that over the last few years, EA-aligned donors have stopped donating to some non-EA aligned organizations. Animal advocacy charities are generally very top-heavy — a huge percentage of donations are coming from a few people. If a couple of those people change where they are donating, it might significantly impact a charity, especially a smaller one. But, overall I'd guess that this isn't for purely EA reasons — lots of large donors in the space are investing in plant-based meat companies, for example, and might have chosen to do that independently of EA.

Also, EA has really opened up what I believe are the most promising avenues for future animal advocacy - addressing wild animal welfare (in a species-neutral way) and addressing invertebrate welfare. I think both areas would basically be impossible to fund in the short-term if EA funding wasn't available.

The Bad

I think the compelling critique of how EA has negatively impacted animal advocacy is something similar to the institutional critique the author presents. For example, at least early on, the focus on corporate campaigns meant that activities like community building were relatively neglected. I feel uncertain about the long-term impact of this, but I'd wager that most EAA organizations in the US, for example, have a lot more trouble getting volunteers to events than they did maybe 7-10 years ago or so. I think it's plausible that there are similar programmatic shifts away from activities that didn't have obvious impact that will harm the effectiveness of organizations down the line. Also, as the author says, this sort of critique could be viewed as an internal critique of activities, as opposed to a critique of EA as a whole.

There are probably some highly effective animal advocacy organizations totally neglected by EA (at least compared to ACE top charities). I also think that a GiveWell-style apples-to-apples comparison of different charities doing a similar and related activity doesn't necessarily make sense for, say, organizations doing corporate campaigns, since the organizations are highly coordinated [EA(p) · GW(p)]. But again, this seems like an internal critique.

I see ending factory farming / vegan advocacy as likely deeply aligned with EA. I think that the animal advocacy space really struggled to make progress on these issues over the past few decades, but has made more progress in the last 5 years. I don't know if this is due to plant-based meats becoming more popular, EA showing up, or something else, but broadly, we're doing better now than we were before, I think, at helping animals.

The "remark on institutional culture" is a pretty good critique of EA, though I don't know what to conclude from it. But, if the essay is focused on EAA specifically, I think that comment is a lot less relevant, as I'd guess as a whole, EAA is much more open to social justice / non-EA ethics, etc. than some other communities in EA.

Overall, most of this critique seems to be that the author simply disagrees with many people in EA about ethics and metaethics.

Replies from: willbradshaw
comment by willbradshaw · 2020-10-22T07:53:21.365Z · EA(p) · GW(p)

As some of the activists were talking, they got on to the topic of how charitable giving on EA’s principles had either deprived them of significant funding, or, through the threat of the loss of funding, pushed them to pursue programs at variance with their missions.

The only way I can see this being true is if EAs convinced existing funding sources to switch their funding priorities along EA principles, or to (for some reason) move out of the field even though the new funding has priorities that differ from theirs. Has that happened? Otherwise, what happened to the funding that was already there?

Replies from: abrahamrowe
comment by abrahamrowe · 2020-10-22T13:24:28.458Z · EA(p) · GW(p)

I think it's plausible that some major funders stopped funding some groups (like farm sanctuaries) in favor of ACE top charities, for example, but I doubt that this has happened with large numbers of smaller donors. And it's hard to know how much EA is responsible for it. For example, when GFI was founded, I think a lot of people found it really compelling, independent of its being promising from an EA lens. While it's a fairly EA-aligned organization, in a world without EA something like it probably would have been founded anyway, and because it was compelling, lots of donors might have switched from whatever they were donating to before to donating to GFI. My impression is also that a lot of funding that has left charities is going into investments in clean / plant-based meat companies. I also expect that would have happened had EA not existed.

Replies from: willbradshaw
comment by willbradshaw · 2020-10-22T13:44:42.891Z · EA(p) · GW(p)

Thanks for this perspective.

I'm arguing with the OP rather than you here, but this seems...straightforwardly good? Like, if a lot of other donors are switching to things more in line with EA priorities, that suggests that EA priorities (in this domain) are broadly convincing, which seems like it makes it much harder to argue that "EA was having a damaging influence on animal advocacy".

Replies from: Linch
comment by Linch · 2020-10-22T22:40:44.517Z · EA(p) · GW(p)

if a lot of other donors are switching to things more in line with EA priorities, that suggests that EA priorities (in this domain) are broadly convincing, which seems like it makes it much harder to argue that "EA was having a damaging influence on animal advocacy".

I may have misunderstood you, but I don't think this follows. There are some additional assumptions needed to make this true, for example (non-exhaustive):

  • You have a moderately strong prior that convincingness correlates with positive effects on the world, or
  • You believe in the procedural justice / distributed decision-making of donors satisfying their preferences.

Presumably the OP does not believe either.

Replies from: willbradshaw
comment by willbradshaw · 2020-10-25T12:01:47.563Z · EA(p) · GW(p)

If something is broadly convincing – that is, convincing to altruistic donors with a range of different values and priorities – that is a pretty good sign that it is, in fact, solid. In the case of animal welfare, if a lot of non-EA donors have shifted their funding towards priorities that were originally pushed mainly by EAs, that seems like good evidence that shifting towards those priorities is good for animal welfare across a wide range of value systems, and hence (under moral uncertainty) more likely to be in fact a good thing.

There are certainly ways this could not be true, but I do think the above is the most likely / default case, and that the ways it could not be true are more complex stories requiring additional evidence. You need some mechanism by which EA funders influenced non-EA funders to change their priorities in a way that went against their values, or alternatively some mechanism by which EA funding "deprived [activists] of significant funding [etc]" despite the pre-existing non-EA funders still being around. And you need to provide evidence for that mechanism operating in this case, as opposed to (IMO the much more likely case of) people just being sad that other people think that their preferred approach is less good for animals.

comment by G Gordon Worley III (gworley3) · 2020-10-20T23:51:36.485Z · EA(p) · GW(p)

My somewhat uncharitable reaction while reading this was something like "people running ineffective charities are upset that EAs don't want to fund them, and their philosopher friend then tries to argue that efficiency is not that important".