[Linkpost] Some Thoughts on Effective Altruism

post by Sebastian Schwiecker (EA-Basti) · 2020-09-18T10:05:38.972Z · score: 25 (18 votes) · EA · GW · 25 comments

A critical look at Effective Altruism from the Guerrilla Foundation: https://guerrillafoundation.org/some-thoughts-on-effective-altruism/.

I would be interested in your thoughts.


comment by MichaelDickens · 2020-09-18T18:20:42.429Z · score: 49 (22 votes) · EA(p) · GW(p)

Agreed with Mathias that the authors have a good grasp of what EA is and what causes EAs prioritize, and I appreciate how respectful the article is. Also like Mathias, I feel like I have some pretty fundamental worldview differences from the authors, so I'm not sure how well I can explain my disagreements. But I'll try my best.

The article's criticism seems to focus on the notion that EA ignores power dynamics and doesn't address the root cause of problems. This is a pretty common criticism. I find it a bit confusing, and I don't really understand what the authors consider to be root causes. For example, efforts to create cheap plant-based or cultured meat seem to address the root cause of factory farming because, if successful, they will eliminate the need to farm and kill sentient animals. AI safety work, if successful, could eliminate the root causes of all suffering and bring about an unimaginably good utopia. But the authors don't seem to agree with me that these qualify as "addressing root causes". I don't understand how they distinguish between the EA work that I perceive as addressing root causes and the things they consider to be root causes. Critics like these authors seem to want EAs to do something that they're not doing, but I don't understand what it is.

[W]ealthy EA donors [do] not [go] through a (potentially painful) personal development process to confront and come to terms with the origins of their wealth and privilege: the racial, class, and gender biases that are at the root of a productive system that has provided them with financial wealth, and their (often inadvertent) role in maintaining such systems of exploitation and oppression.

It seems to me that if rich people come to terms with the origins of their wealth, they might conclude that they don't "deserve" it any more than poor people in Kenya, and decide to distribute the money to them (via GiveDirectly) instead of spending it on themselves. Isn't that ultimately the point? What outcome would the authors like to come out of this self-reflection, if not using their wealth to help disadvantaged people?

EAs spend more time than any other group I know talking about how they are among the richest people in the world and how they should use their wealth to help the less fortunate. But this doesn't seem to count in the authors' eyes.


This article argues that EAs fixate too much on "doing the most good", and then appears to argue that people should focus on addressing root causes/grassroots activism/power dynamics/etc. because doing so will do the most good—or maybe I'm misinterpreting the article because I'm seeing it through an EA lens. Sometimes it seems like the authors disagree with EAs about fundamental principles like maximizing good, and other times it seems like they just disagree about what does the most good. I wasn't clear on that.

If they do agree in principle that we should do as much good as possible, then I would like to see a more rigorous justification for why the authors' favored causes do more good than EA causes. I realize they're not as amenable to cost-effectiveness analysis as GiveWell's top charities, but I would like to see at least some attempt at a justification.

For example, many EAs prioritize existential risk. There's no rigorous cost-effectiveness analysis of x-risk, but you can at least make an argument that it's more cost-effective than other things:

  1. Extinction is way worse than anything else.
  2. Extinction is not that unlikely.
  3. We can probably make significant progress on reducing extinction risk.

Bostrom basically makes this argument in Existential Risk Prevention as Global Priority.
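
To see the arithmetic shape of that argument, here is a back-of-the-envelope version (the numbers are placeholders I've chosen for illustration, not Bostrom's): suppose extinction would forfeit at least $10^{16}$ future lives, and suppose an intervention reduces extinction probability by one in a million. Then its expected value is

\[
10^{-6} \times 10^{16}\ \text{lives} = 10^{10}\ \text{lives},
\]

which dwarfs the expected impact of most near-term interventions of comparable cost. Of course, the conclusion is only as strong as premises 1-3 above.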

My impression is there's a worldview difference between people who think it's possible in principle to make decisions under uncertainty, and people who think it's not. I don't have much to say in defense of the former position except to vaguely gesture in the direction of Phil Tetlock and the proven track record of some people's ability to forecast uncertain outcomes.


More broadly, I would have an easier time understanding articles like these if they gave more concrete examples of what they consider to be the best things to work on, and why—something more specific than "grassroots activism". For example (not saying I think the authors believe this, just that this is the general sort of thing I'd like to see):

We should support community groups that organize meetups where they promote the idea of the fundamental unfairness of global wealth inequality. We believe that once sufficiently many people worldwide are paying attention to this problem, people will develop and move toward a new system of government that will redistribute wealth and provide basic services to everyone. We aren't sure what this government structure will look like, but we're confident that it's possible because [insert argument here]. We also believe this plan has a good chance of getting broad support because [insert argument here], and that once it has broad support, it has a good chance of actually getting implemented, because [insert argument here].

comment by Alex319 · 2020-09-18T20:09:54.162Z · score: 28 (12 votes) · EA(p) · GW(p)

As for the question of "what do the authors consider to be root causes," here's my reading of the article. Consider the case of factory farming. Probably all of us agree that the following are all necessary causes:

(1) There's lots of demand for meat.

(2) Factory farming is currently the technology that can produce meat most efficiently and cost-effectively.

(3) Producers of meat just care about production efficiency and cost-effectiveness, not animal suffering.

I suspect you and other EAs focus on item (2) when you are talking about "root causes." In this case, you are correct that creating cheap plant-based meat alternatives will solve (2). However, I suspect the authors of this article think of (3) as the root cause. They likely think that if meat producers cared more about animal suffering, then they would stop doing factory farming or invest in alternatives on their own, and philanthropists wouldn't need to fund those alternatives. They write:

if all investment was directed in a responsible way towards plant-based alternatives, and towards safe AI, would we need philanthropy at all

Furthermore, they think that since the cause of (3) is a focus on cost-effectiveness (in the sense of minimizing cost per pound of meat produced), focusing on cost-effectiveness (in the sense of minimizing cost per life saved, or whatever) in philanthropy promotes more cost-effectiveness-focused thinking, which makes (3) worse. And they think lots of problems have something like (3) as a root cause. This is what they mean when they talk about "values of the old system" in this quote:

By asking these questions, EA seems to unquestioningly replicate the values of the old system: efficiency and cost-effectiveness, growth/scale, linearity, science and objectivity, individualism, and decision-making by experts/elites.

As for the other quote you pulled out:

[W]ealthy EA donors [do] not [go] through a (potentially painful) personal development process to confront and come to terms with the origins of their wealth and privilege: the racial, class, and gender biases that are at the root of a productive system that has provided them with financial wealth, and their (often inadvertent) role in maintaining such systems of exploitation and oppression.

and the following discussion:

To be more concrete, I suspect what they're talking about is something like the following. Consider a potential philanthropist like Jeff Bezos. The authors likely believe that Amazon has harmed the world through its business practices. Let's say Jeff Bezos wanted to spend $10 billion of his wealth on philanthropy. There might be two ways of doing that:

(1) Donate $10 billion to worthy causes.

(2) Change Amazon's business practices such that he makes $10 billion less money, but Amazon has a more positive (or less negative) impact on the world.

My reading is that the authors believe (2) would be of higher value, but Bezos (and others like him) would be biased toward (1) for self-serving reasons: Bezos would get more direct credit for doing (1) than (2), and Bezos would be biased toward underestimating how bad Amazon's business practices are for the world.

---

Overall, though, I agree with you that if my interpretation accurately describes the authors' viewpoint, the article does not do a good job arguing for that. But I'm not really sure about the relevance of your statement:

My impression is there's a worldview difference between people who think it's possible in principle to make decisions under uncertainty, and people who think it's not. I don't have much to say in defense of the former position except to vaguely gesture in the direction of Phil Tetlock and the proven track record of some people's ability to forecast uncertain outcomes.

Do you think that the article reflects a viewpoint that it's not possible to make decisions under uncertainty? I didn't get that from the article; one of their main points is that it's important to try things even if success is uncertain.

comment by MichaelDickens · 2020-09-18T21:16:52.043Z · score: 10 (6 votes) · EA(p) · GW(p)

Thanks, this comment makes a lot of sense, and it makes it much easier for me to conceptualize why I disagree with the conclusion.

Do you think that the article reflects a viewpoint that it's not possible to make decisions under uncertainty?

I think so, because the article includes some statements like,

"How could anyone forecast the recruitment of thousands of committed new climate activists around the world, the declarations of climate emergency and the boost for NonViolentDirectAction strategies across the climate movement?"

and

"[C]omplex systems change can most often emerge gradually and not be pre-identified ‘scientifically’."

Maybe instead of "make decisions under uncertainty", I should have said "make decisions that are informed by uncertain empirical forecasts".

comment by Matt_Lerner (mattlerner) · 2020-09-18T22:58:56.069Z · score: 6 (4 votes) · EA(p) · GW(p)

I can get behind your initial framing, actually. It's not explicit—I don't think the authors would define themselves as people who don't believe decision under uncertainty is possible—but I think it's a core element of the view of social good professed in this article and others like it.

A huge portion of the variation in worldview between EAs and people who think somewhat differently about doing good seems to be accounted for by a different optimization strategy. EAs, of course, tend to use expected value, and prioritize causes based on probability-weighted value. But it seems like most other organizations optimize based on value conditional on success.

These people and groups select causes based only on perceived scale. They don't necessarily think that malaria and AI risk aren't important; they just make a calculation that allots equal probabilities to their chances of averting, say, 100 malarial infections and their chances of overthrowing the global capitalist system.
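
To make the contrast concrete, here is a toy sketch in Python (every number is invented purely for illustration, not an estimate of anything):

```python
# Toy comparison of the two optimization strategies described above.
# All numbers are invented purely for illustration.

p_malaria, v_malaria = 0.95, 100   # near-certain chance of averting ~100 infections
p_system, v_system = 1e-9, 1e9     # tiny chance of a vastly larger payoff

# Expected-value reasoning: weight each outcome by its probability of success.
ev_malaria = p_malaria * v_malaria   # 95.0
ev_system = p_system * v_system      # 1.0

# "Value conditional on success": ignore the probabilities entirely,
# which is equivalent to treating both plans as equally likely to work.
cond_malaria = v_malaria             # 100
cond_system = v_system               # 1,000,000,000

print(ev_malaria > ev_system)        # True: EV reasoning favors the malaria intervention
print(cond_system > cond_malaria)    # True: scale-only reasoning favors systemic change
```

On the scale-only view, the second comparison is the one that matters, which is exactly the behavior described above.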

To me, this is not necessarily reflective of innumeracy or a lack of comfort with probability. It seems more like a really radical second- and third-order uncertainty about the value of certain kinds of reasoning—a deep-seated mistrust of numbers, science, experts, data, etc. I think the authors of the posted article lay their cards on the table in this regard:

the values of the old system: efficiency and cost-effectiveness, growth/scale, linearity, science and objectivity, individualism, and decision-making by experts/elites

These are people who associate the conventions and methods of science and rationality with their instrumental use in a system that they see as inherently unjust. As a result of that association, they're hugely skeptical about the methods themselves, and aren't able or willing to use them in decision-making.

I don't think this is logical, but I do think it is understandable. Many students, in particular American ones (though I recognize that Guerrilla is a European group), have been told repeatedly, for many years, that the central value of learning science and math lies in getting a good job in industry. I think it can be hard to escape this habituation and see scientific thinking as a tool for civilization instead of as some kind of neoliberal astrology.

comment by Larks · 2020-09-20T21:44:35.428Z · score: 8 (4 votes) · EA(p) · GW(p)
A huge portion of the variation in worldview between EAs and people who think somewhat differently about doing good seems to be accounted for by a different optimization strategy. EAs, of course, tend to use expected value, and prioritize causes based on probability-weighted value. But it seems like most other organizations optimize based on value conditional on success.
These people and groups select causes based only on perceived scale. They don't necessarily think that malaria and AI risk aren't important; they just make a calculation that allots equal probabilities to their chances of averting, say, 100 malarial infections and their chances of overthrowing the global capitalist system.

I agree it would be good to have a diagnosis of the thought process that generates these sorts of articles so we can respond in a targeted manner that addresses their model of their objections, rather than one which simply satisfies us that we have rebutted them. And this diagnosis is a very interesting one! However, I am a little sceptical, for two reasons.

EAs often break cause evaluation down into Scope, Tractability and Neglectedness, which is elegant as they correspond to three factors which can be multiplied together. You're basically saying that these critics ignore (or consider unquantifiable) Neglectedness and Tractability. However, it seems perhaps a little bit of a coincidence that the factors they are missing just happen to correspond to terms in our standard decomposition. After all, there are many other possible decompositions! But maybe this decomposition just really captures something fundamental to all people's thought processes, in which case this is not so much of a surprise.
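
(For reference, one common way of defining the three factors so that the units cancel when multiplied, roughly following the 80,000 Hours version of the framework:

\[
\underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{Importance}}
\times
\underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{Tractability}}
\times
\underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{Neglectedness}}
=
\frac{\text{good done}}{\text{extra dollar}}.
\]

On this reading, a critic who attends only to "good done if the problem were solved" has dropped the last two factors.)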

But more importantly, I think this theory seems to give some incorrect predictions about cause focus. If Importance is all that matters, then I would expect these critics to be very interested in existential risks, but my impression is they are not. Similarly, I would be very surprised if they were dismissive of e.g. residential recycling, or US criminal justice, as too small-scale an issue to warrant much concern.

comment by Matt_Lerner (mattlerner) · 2020-09-20T22:38:02.820Z · score: 2 (2 votes) · EA(p) · GW(p)

I think scale/scope is a pretty intuitive way of thinking about problems, which is I imagine why it's part of the ITN framework. To my eye, the framework is successful because it reflects intuitive concepts like scale, so I don't see too much of a coincidence here.

If Importance is all that matters, then I would expect these critics to be very interested in existential risks, but my impression is they are not. Similarly, I would be very surprised if they were dismissive of e.g. residential recycling, or US criminal justice, as too small-scale an issue to warrant much concern.

This is a good point. I don't see any dissonance with respect to recycling and criminal justice—recycling is (nominally) about climate change, and climate change is a big deal, so recycling is important when you ignore the degree to which it can address the problem; likewise with criminal justice. Still, you're right that my "straw activist" would probably scoff at AI risk, for example.

I guess I'd say that the way of thinking I've described doesn't imply an accurate assessment of problem scale, and since skepticism about the (relatively formal) arguments on which concerns about AI risk are based is core to the worldview, there'd be no reason for someone like this to accept that some of the more "out there" GCRs are GCRs at all.

Quite separately, there is a tendency among all activists (EAs included) to see convergence [EA · GW] where there is none, and I think this goes a long way toward neutralizing legitimate but (to the activist) novel concerns. Anecdotally, I see this a lot—the proposition, for instance, that international development will come "along for the ride" when the U.S. gets its own racial justice house in order, or that the end of capitalism necessarily implies more effective global cooperation.

comment by Larks · 2020-09-21T02:01:42.748Z · score: 8 (4 votes) · EA(p) · GW(p)
I don't see any dissonance with respect to recycling and criminal justice—recycling is (nominally) about climate change, and climate change is a big deal, so recycling is important when you ignore the degree to which it can address the problem; likewise with criminal justice.

It seems a lot depends on how you group together things into causes then. Is my recycling about reducing waste in my town (a small issue), preventing deforestation (a medium issue), fighting climate change (a large issue) or being a good person (the most important issue of all)? Pretty much any action can be attached to a big cause by defining an even larger, and even more inclusive problem for it to be part of.

comment by Alex319 · 2020-09-19T03:08:15.525Z · score: 2 (2 votes) · EA(p) · GW(p)

A more charitable interpretation of the authors' point might be something like the following:

(1) Since EAs look at quantitative factors like the expected number of lives saved by an intervention, they need to be able to quantify their uncertainty.

(2) Interventions that target large, interconnected systems are harder to quantify the results of than interventions that target individuals. For instance, consider health-improving interventions. The intervention "give medication X to people who have condition Y" is easy to test with an RCT. However, the intervention "change the culture to make outdoor exercise seem more attractive" is much harder to test: it's harder to target cultural change to a particular area (and thus it's harder to do a well-controlled study), and the causal pathways are a lot more complex (e.g. it's not just that people get more exercise, it might also encourage changes in land-use patterns, which would affect traffic and pollution, etc.) so it would be harder to identify what was due to the change.

(3) Thus, EA approaches that focus on quantifying uncertainty are likely to miss interventions targeted at systems. Since most of our biggest problems are caused by large systems, EA will miss the highest-impact interventions.

comment by Matt_Lerner (mattlerner) · 2020-09-19T04:47:33.347Z · score: 2 (2 votes) · EA(p) · GW(p)

This is certainly a charitable reading of the article, and you are doing the right thing by trying to read it as generously as possible. I think they are indeed making this point:

the technocratic nature of the approach itself will only very rarely result in more funds going to the type of social justice philanthropy that we support with the Guerrilla Foundation – simply because the effects of such work are less easy to measure and they are less prominent among the Western, educated elites that make up the majority of the EA movement

This criticism is more than fair. I have to agree with it and simultaneously point out that of course this is a problem that many are aware of and are actively working to change. I don't think that they're explicitly arguing for the worldview I was outlining above. This is my own perception of the motivating worldview, and I find support in the authors' explicit rejection of science and objectivity.

comment by MichaelStJules · 2020-09-21T04:36:02.249Z · score: 4 (3 votes) · EA(p) · GW(p)

I think leftists are primarily concerned with oppression, exploitation, hierarchy and capitalism as root causes. That seems to basically be what it means to be a leftist. Poverty and environmental destruction are the result of capitalist greed and exploitation. Factory farming is the result of speciesist oppression and capitalism.

comment by MichaelStJules · 2020-09-26T16:13:17.859Z · score: 2 (1 votes) · EA(p) · GW(p)

Oppression, exploitation, hierarchy and capitalism are also seen as causes of many of the worst ills in the world, perhaps even most of them.

EDIT: I'm not claiming this is an accurate view of the world; this is my (perhaps inaccurate) impression of the views of leftists.

comment by PaoloFresia · 2020-09-23T13:34:42.098Z · score: 47 (19 votes) · EA(p) · GW(p)

Hello, I'm Paolo, one of the authors of the article. We were pointed to this thread and we've been thrilled to witness the discussion it's been generating. Romy and I will take some time to go through all your comments in the coming days and will aim to post a follow-up blog post in an attempt to answer the various points raised more comprehensively. In the meantime, please keep posting here and keep up the good discussion! Thanks!

comment by Ben_West · 2020-09-25T22:37:03.748Z · score: 27 (10 votes) · EA(p) · GW(p)

I'm excited to hear that! Looking forward to seeing the article. I particularly had trouble distinguishing between three potential criticisms you could be making:

  1. It's correct to try to do the most good, but people who call themselves "EAs" define "good" incorrectly. For example, EAs might evaluate reparations on the basis of whether they eliminate poverty as opposed to whether they are just.
  2. It's correct to try to do the most good, but people who call themselves "EAs" are just empirically wrong about how to do this. For example, EAs focus too much on short-term benefits and discount long-term value.
  3. It's incorrect to try to do the most good. (I'm not sure what the alternative you are proposing in your essay is here.)

If you are able to elucidate which of these criticisms, if any, you are making, I would find it helpful. (Michael Dickens writes something similar above.)

comment by Thomas Kwa (tkwa) · 2020-10-09T00:02:13.327Z · score: 13 (5 votes) · EA(p) · GW(p)

Two subcategories of idea 3 that I see, and my steelman of each:

3a. To maximize good, it's incorrect to try to do the most good. Most people who apply a maximization framework to the amount of justice will create less justice than people who build relationships and seek to understand power structures, because thinking quantitatively about justice is unlikely to work no matter how carefully you think. Or, most of the QALYs that we can create result from other difficult-to-quantitatively-maximize things like ripple effects from others modeling our behavior. Trying to do the most good will create less good than some other behavior pattern.

3b. "Good" cannot be quantified even in theory, except in the nitpicky sense that mathematically, an agent with coherent preferences acts as if it's maximizing expected utility. Such a utility function is meaningless. Maybe the utility function is maximized by doing X even though you think X results in a worse world than Y. Maybe doing Z maximizes utility, but only if you have a certain psychological framing. Even though this doesn't make sense, the decisions are still morally correct.

comment by Milan_Griffes · 2020-10-09T22:47:44.301Z · score: 3 (2 votes) · EA(p) · GW(p)

I think something like 3a is right, especially given our cluelessness [EA · GW].

comment by Benjamin_Todd · 2020-09-28T11:47:17.999Z · score: 21 (7 votes) · EA(p) · GW(p)

Hi Paolo, I apologise this is just a hot take, but from quickly reading the article, my impression was that most of the objections apply more to what we could call the 'near termist' school of EA rather than the longtermist one (which is very happy to work on difficult-to-predict or quantify interventions). You seem to basically point this out at one point in the article. When it comes to the longtermist school, my impression is that the core disagreement is ultimately about how important/tractable/neglected it is to do grassroots work to change the political & economic system compared to something like AI alignment. I'm curious if you agree.

comment by Alex319 · 2020-09-26T02:59:13.041Z · score: 6 (4 votes) · EA(p) · GW(p)

You mention that:

Neither we nor they had any way of forecasting or quantifying the possible impact of [Extinction Rebellion]

and go on to talk about this as an example of the type of intervention that EA is likely to miss due to lack of quantifiability.

One thing that would help us understand your point is to answer the following question:

If it's really not possible to make any kind of forecast about the impact of grassroots activism (or whatever intervention you would prefer), then on what basis do you support your claim that supporting grassroots activism would improve its impact? And how would you have any idea which groups or which forms of activism to fund, if there's no possible way of forecasting which ones will work?

I think the inferential gap here is that (we think that) you are advocating for an alternative way of justifying [the claim that a given intervention is impactful] other than the traditional "scientific" and "objective" tools (e.g. cost-benefit analysis, RCTs), but we're not really sure what you think that alternative justification would look like or why it would push you towards grassroots activism.

I suspect that you might be using words like "scientific", "objective", and "rational" in a narrower sense than EAs think of them. For instance, EAs don't believe that "rationality" means "don't accept any idea that is not backed by clear scientific evidence," because we're aware that often the evidence is incomplete, but we have to make a decision anyway. What a "rational" person would say in that situation is something more like "think about what we would expect to see in a world where the idea is true compared to what we would expect to see if it were false, see which is closer to what we do see, and possibly also look at how similar things have turned out in the past."
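
(In symbols, the informal procedure just described is essentially a Bayesian update: for a hypothesis $H$ and evidence $E$,

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)},
\]

so evidence that is more likely in a world where $H$ is true than in one where it is false shifts credence toward $H$, even when no controlled study is available.)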

comment by alexherwix · 2020-09-30T14:18:29.065Z · score: 5 (3 votes) · EA(p) · GW(p)

One consideration that came to my mind at multiple points in the post was trying to understand your angle for writing it. While I think the post was written with the goal of demarcating "your brand" of radical social justice from EA and promoting it, you clearly seem to agree with the core "EA assumption" (i.e., that it's good to use careful reasoning and evidence to try to make the world better), even though you disagree on certain aspects of how best to implement this in practice.

Thus, I would really encourage you to engage with the EA community in a collaborative and open spirit. As you can tell by the reactions here, criticism is well appreciated by the EA community if it is well reasoned and articulated. Of course, there are some rules to this game (i.e., as mentioned elsewhere, you should provide justification for your beliefs), but if you have good arguments for your position you might even effect systemic change in EA ;)

comment by MathiasKirkBonde · 2020-09-18T16:52:29.564Z · score: 14 (13 votes) · EA(p) · GW(p)

It's very refreshing to read a criticism of EA that isn't chock-full of straw men.
Kudos to the authors for doing their best to represent EA fairly.
That's not usually the case for articles that accuse EA of neglecting 'systemic change'.

That said, their worldview feels incredibly alien to me.
It's difficult for me to state any point where I think they make clear errors.
Rather, it seems I just have entirely different priors than the authors.
What they take for granted, I find completely unintuitive.

Writing at length about where our priors seem to differ would more or less be a rehash of prior debates on EA and systemic change.

I would love to have the authors of this come on an EA podcast, and hear their views expressed in more detail. Usually when I think something is clearly wrong I can explain why, here I can't.

It would be a shame if I were wrong longer than necessary.

comment by Linch · 2020-09-26T09:56:27.512Z · score: 13 (9 votes) · EA(p) · GW(p)

I think I find myself confused about what it means for something to have a "single root cause." Having not thought about it too much, I currently think the idea looks conceptually confused. I am not a philosopher; however, here are some issues I have with this conception:

1. Definitional boundaries

First of all, I think this notion of causation is kinda confused in some important ways, and it'd be surprising for discrete, cleanly-defined causes to map well onto a "single root cause" in a way that is easy for humans to understand.

2. Most things have multiple "root causes"

Secondly, in practice I feel like most things I care about are due to multiple causes, at least if 1) you only use "causes" as defined in a way that's easy for humans to understand and 2) you only trace causal chains back as far as points that are possible to act on. For example, there's a sense in which the root cause of factory farming is obviously the Big Bang, but in terms of things we can act on, factory farming is caused by:

1) A particular species of ape evolved to have a preference for the flesh of other animals.

2) That particular species of ape has a great deal of control over other animals and the external environment.

3) Factory farming is currently the technology that can produce meat most efficiently and cost-effectively.

4) Producers of meat (mostly) just care about production efficiency and cost-effectiveness, not animal suffering.

5) The political processes and coordination mechanisms across species are primarily through parasitism and predation rather than more cooperative mechanisms.

6) The political processes and coordination mechanisms within a particular species of ape are such that it is permissible for producers of meat to cause animal suffering.

... (presumably many others that I'm not creative/diligent enough to list).

How do we determine which of the 6+ causes are "real" root causes?

From what I understand of the general SJ/activism milieu, the perception is that interventions that attempt to change #6 count as "systemic change," but interventions that change #1, #2 (certain forms of AI alignment), #3 (plant-based/clean meat), #4 (moral circle expansion, corporate campaigns), or #5 (uplifting, the Hedonistic Imperative) do not. This seems awfully suspicious to me, as if people had a predetermined conclusion.

3. Monocausal diagrams can still have intervention points to act on, and it's surprising if the best intervention point/cause is the first ("root") one.

Thirdly, even if you (controversially, in my view) can draw a clean causal diagram such that a bad outcome is monocausal and there's a clean chain from A->B->C->...->{bad thing}, in practice it is still not obvious to me (and indeed would be rather surprising!) that A has a definitive status as the "real" root cause, in a way that is both well-defined and makes A the only point you can act on.

comment by MichaelStJules · 2020-09-26T16:30:35.186Z · score: 3 (2 votes) · EA(p) · GW(p)

Maybe by "root cause", they mean causes that are common to many or even most of the world's worst ills (and also that can be acted upon, as you suggest)? You write a joint causal diagram for them, and you find that oppression, exploitation, hierarchy and capitalism are causes for most of them and fairly unique in this way.

Are there other causes that are so cross-cutting (depending on your ethical views)? 

1. Humans not being more ethical, reflective and/or rational.

2. Sentient individuals exist at all (for ethical antinatalists and efilists).

3. Suffering is still physically possible among sentient individuals (the Hedonistic Imperative).

comment by Linch · 2020-09-29T11:46:29.974Z · score: 5 (3 votes) · EA(p) · GW(p)

I really like the conception of thinking of root causes in terms of a "joint causal diagram"! Though I'd like to understand whether this is an operationalization that leftist scholars would also agree with, at the risk of this being a "steelman" that is very far away from the intended purpose.

Still it's interesting to think about.

comment by Linch · 2020-09-29T11:52:07.002Z · score: 4 (3 votes) · EA(p) · GW(p)

I think there aren't many joint root causes since so many of them are less about facts of the world and depend implicitly on your normative ethics. (As a trivial example, there's a sense in which the root cause of poverty, climate change and species extinctions is human population if you have an average utilitarian stance, but for many other aggregative views, trying to fix this will be abhorrent).

Some that I can think of:

1. A world primarily ruled by humans, instead of (as you say) "more ethical, reflective and/or rational" beings.

1a. evolution

1b. humans evolving from small-group omnivores instead of large-group herbivores

2. Coordination problems

3. Insufficient material resources

4. Something else?

I also disagree with the idea that "capitalism" (just to pick one example) is the joint root cause for most of the world's ills.

A. This is obviously wrong compared to something like evolution.

B. Global poverty predates capitalism and so does wild animal suffering, pandemic risk, asteroid risk, etc. (Also other problems commonly talked about like racism, sexism, biodiversity loss)

C. No obvious reason why non-capitalist individual states (in an anarchic world order) would not still have major coordination problems around man-made existential risks and other issues.

D. Indeed, we have empirical experience of the bickering and rising tensions between Communist states in the mid-late 1900s.

comment by MichaelStJules · 2020-09-29T21:52:53.530Z · score: 4 (2 votes) · EA(p) · GW(p)

I also disagree with the idea that "capitalism" (just to pick one example) is the joint root cause for most of the world's ills.

A. This is obviously wrong compared to something like evolution.

B. Global poverty predates capitalism and so does wild animal suffering, pandemic risk, asteroid risk, etc. (Also other problems commonly talked about like racism, sexism, biodiversity loss)

C. No obvious reason why non-capitalist individual states (in an anarchic world order) would not still have major coordination problems around man-made existential risks and other issues.

D. Indeed, we have empirical experience of the bickering and rising tensions between Communist states in the mid-late 1900s.

A leftist might not claim capitalism is the only joint root cause. But to respond to each:

A. Can't change the past, so not useful.

B. This isn't a counterfactual claim about what would happen if we replaced capitalism with some specific different system. Capitalism allows these issues, while another system might not, so in counterfactual terms, capitalism can still be a cause. (But socialist countries were often racist and homophobic. So socialism doesn't solve the issue, but again, many of today's (Western?) leftists aren't only concerned with capitalism, but also oppression and hierarchy generally, and may have different specific systems in mind.) I don't know to what extent leftists think of causes in such counterfactual terms instead of historical terms, though.

C. Leftists might think certain systems would be better than capitalist ones on these issues, and have reasons for those beliefs. For what it's worth, systems also shape people's attitudes or attitudes would covary with the system, so if greed is a major cause of these issues and it's suppressed under a specific non-capitalist system, this might partially address these issues. Also, some leftists want to reform the global world order, too. Socialist world government? Leftists disagree on how much should be top-down vs decentralized, though.

D. Not the systems they have in mind anymore. I think a lot of (most?) (Western?) leftists have moved on to some kind of social democracy (technically still capitalist), democratic socialism or anarchism.

comment by tae · 2020-09-26T02:54:44.033Z · score: 6 (5 votes) · EA(p) · GW(p)

This seems like an incredibly interesting and important discussion! I don't have much time now, but I'll throw in some quick thoughts and hopefully come back later.

I think that there is room for Romy and Paolo's viewpoint in the EA movement. Lemme see if I can translate some of their points into EA-speak and fill in some of their implicit arguments. I'll inevitably use a somewhat persuasive tone, but disagreement is of course welcome.

(For context, I've been involved in EA for about six years now, but I've never come across any EAs in the wild. Instead, I'm immersed in three communities: Buddhist, Christian, and social-justice-oriented academic. I'm deeply committed to consequentialism, but I believe that virtues are great tools for bringing about good consequences.)

---

I think the main difference between Guerrilla's perspective and the dominant EA perspective is that Guerrilla believes that small actions, virtues, intuitions, etc. really matter. I'm inclined to agree.

Social justice intuition says that the fundamental problem behind all this suffering is that powerful/privileged people are jerks in various ways. For example, colonialism screwed up Africa's thriving (by the standards of that time) economy. (I'm no expert, but as far as I know, it seems highly likely that African communities would have modernized into flourishing places if they weren't exploited.) As another example, privileged people act like jerks when they spend money on luxuries instead of donating.

Spiritual intuition, from Buddhism, Christianity, and probably many other traditions, says that the reason powerful/privileged people are jerks is that they're held captive by greed, anger, delusion, and other afflictive emotions. For example, it's delusional and greedy to think that you need a sports car more than other people need basic necessities.

If afflictive emotions are the root cause of all the world's ills, then I think it's plausible to look to virtues as a solution. (I interpret "generating the political will" to mean "generating the desire for specific actions and the dedication to follow through", which sound like virtues to me.) In particular, religions and social justice philosophers seem to agree that it's important to cultivate a genuine yearning for the flourishing of all sentient beings. Other virtues--equanimity, generosity, diligence--obviously help with altruistic endeavors.

Virtues can support the goal of happiness for all in at least three ways. First, a virtuous person can help others more effectively. Compassion and generosity help them to gladly share their resources, patience helps them to avoid blowing up with anger and damaging relationships, and perseverance helps them to keep working through challenges. Second, people who have trained their minds are themselves happier with their circumstances (citation needed). Great, now there's less work for others to do! Third, according to the Buddhist tradition, a virtuous person knows better what to do at any given moment. By developing compassion, one develops wisdom, and vice versa. The "Effective" and the "Altruism" are tied together. This makes sense because spiritual training should make one more open, less reactive, and less susceptible to subconscious habits; once these obscurations are removed, one has a clearer view of what needs to be done in any given moment. You don't want to act on repressed fear, anger, or bigotry by accident!

To riff off Romy and Paolo's example of "wealthy EA donors" failing to work on themselves, their ignorance of their own minds may have real-world consequences when they don't even notice that they could support systemic change at their own organizations. The argument here is that our mental states have significant effects on our actions, so we'd better help others by cleaning up our harmful mental tendencies.

Maybe this internal work won't bear super-effective fruit immediately, but I think it's clear that mind-training and wellbeing create a positive feedback loop. Investing now will pay off later: building compassionate and wise communities would be incredibly beneficial long-term.

---

Miscellaneous points in no particular order:

"EA seems to unquestioningly replicate the values of the old system: efficiency and cost-effectiveness, growth/scale, linearity, science and objectivity, individualism, and decision-making by experts/elites".

Here's how I interpret the argument: historically, people who value these things have gone on to gain a bunch of power and use it to oppress others. This is evidence that valuing these things leads to bad consequences. Therefore, we should try to find values that have better track records. I'd be fascinated to see a full argument for or against this chain of reasoning.

More factors that may or may not matter: Greed might be the root cause of someone's aspiration toward efficiency+growth. A lack of trust+empathy might lead someone to embrace individualism. Giving power to experts/elites suggests a lack of respect for non-elites.

"In short, we believe that EA could do more to encourage wealth owners to dig deep to transform themselves to build meaningful relationships and political allyship that are needed for change at the systems level."

If you assume that spreading virtues is crucial, as I've argued above, and if virtues can spread throughout networks of allies, then you should build those networks.

"We would suspect that donors and grant managers with a deep emotional connection to their work and an actual interest to have their personal lives, values and relationships be touched by it will stick with it and go the extra mile to make a positive contribution, generating even more positive outcomes and impact."

We need mind training so that we can help impartially. Impartiality is compatible with cultivating "warm" qualities like trust and relationships. Julia Wise explains why no one is a statistic: http://www.givinggladly.com/2018/10/no-one-is-statistic.html

"More philanthropic funding, about half of it we would argue, should go to initiatives that are still small, unproven and/or academically ‘unprovable’, that tackle the system rather than the symptoms, and adopt a grassroots, participatory bottom-up approach to finding alternative solutions, which might bear more plentiful fruit in the long run."

Sounds like a good consequentialist thesis that fits right in with EA!