Reading recommendations for the problem of consequentialist scope?

post by Milan_Griffes · 2017-08-02T02:07:46.769Z · score: 6 (6 votes) · EA · GW · Legacy · 5 comments

Determining which scope of outcomes to consider when making a decision seems like a difficult problem for consequentialism. By "scope of outcomes" I mean how far into the future and how many links in the causal chain to incorporate into decision-making. For example, if I'm assessing the comparative goodness of two charities, I'll need to have some method of comparing future impacts (perhaps "consider impacts that occur in the next 20 years") and flow-through contemporaneous impacts (perhaps "consider the actions of the charitable recipient, but not the actions of those they interact with").
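To make the idea concrete, here is a toy sketch (all names, numbers, and impact profiles are hypothetical, not drawn from any real charity evaluation) of one way to make a choice of consequentialist scope explicit: score an intervention by summing only those impacts that fall within a chosen time horizon and causal depth.

```python
def scoped_value(impacts, horizon_years=20, max_causal_depth=1):
    """Sum impact values within the chosen scope.

    impacts: iterable of (years_from_now, causal_depth, value) tuples,
             where causal_depth 0 = direct effect, 1 = the recipient's
             own actions, 2 = actions of those they interact with, etc.
    """
    return sum(
        value
        for years, depth, value in impacts
        if years <= horizon_years and depth <= max_causal_depth
    )

# Hypothetical impact profiles for two charities:
charity_a = [(1, 0, 10.0), (5, 1, 4.0), (30, 2, 50.0)]  # large but distant flow-through
charity_b = [(1, 0, 8.0), (10, 1, 9.0)]

print(scoped_value(charity_a))  # 14.0 -- the 30-year, depth-2 impact is out of scope
print(scoped_value(charity_b))  # 17.0
```

Note that the ranking of the two charities flips if the horizon and depth cutoffs are widened enough to admit charity A's distant flow-through impact, which is exactly why the choice of scope matters.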

I'm using "consequentialist scope" as a shorthand for this type of determination because I'm not aware of a common-usage word for it.

Consequentialist scope seems both (a) important and (b) difficult to think about clearly, so I want to learn more about it.

Does anyone have reading recommendations for this? Philosophy papers, blog posts, books, whatever. I didn't encounter it in Reasons and Persons, but I've only read the first third so far.


Comments sorted by top scores.

comment by Brian_Tomasik · 2017-08-02T08:34:46.592Z · score: 8 (8 votes) · EA(p) · GW(p)

I'd be interested in literature on this topic as well, because it seems to bedevil all far-future-aware EA work.

Some articles:

comment by CalebWithers · 2017-08-03T01:32:45.259Z · score: 3 (3 votes) · EA(p) · GW(p)

I'll throw in Bostrom's 'Crucial Considerations and Wise Philanthropy', on "considerations that radically change the expected value of pursuing some high-level subgoal".

comment by JanBrauner · 2017-08-02T09:02:48.795Z · score: 1 (1 votes) · EA(p) · GW(p)

This could be construed as arguing for an approach that takes every perspective one can think of into account, and then discounts each by its uncertainty.
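A minimal sketch of that suggestion (the figures and perspective labels are hypothetical): rather than truncating the scope outright, weight each perspective's estimate by your credence in it.

```python
def credence_weighted_value(perspectives):
    """perspectives: iterable of (estimated_value, credence) pairs,
    where credence in [0, 1] expresses confidence in that perspective."""
    return sum(value * credence for value, credence in perspectives)

# Three hypothetical perspectives on a single intervention:
perspectives = [
    (100.0, 0.6),    # near-term, well-evidenced estimate
    (1000.0, 0.05),  # speculative long-run flow-through estimate
    (-50.0, 0.2),    # pessimistic scenario
]
print(credence_weighted_value(perspectives))  # 100.0
```

One consequence of this approach is that highly speculative far-future estimates are never excluded, only down-weighted, so very large claimed impacts can still dominate the total.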

comment by KevinWatkinson · 2017-08-04T13:14:28.370Z · score: 0 (0 votes) · EA(p) · GW(p)

I don't have any reading recommendations on this subject, but I'm interested to learn more about the issue (I'll check out the links people have suggested below).

I generally believe that non-profits should be doing some of this work themselves when it relates to becoming a top EA-recommended charity. We might go further than they do, but I believe they ought to demonstrate the basis for being recipients of funding rather than, say, relying on external evaluation, which can be time-consuming and highly selective.

If we are comparing two charities that haven't been considered before, I would wonder about the reasons they might have been neglected, and the justification for that. Those reasons can be quite wide-ranging, including scepticism of EA, or the fact that they operate outside the general range of causes EA considers.

I think larger groups ought to have the resources to complete this fundamental work, and it ought to be part of a sound process (the framework for selecting interventions, for instance); smaller, more promising groups could be allocated funding and support to do more of this work.

I'm presently fairly uncertain that EA-supported non-profits are completing fundamental work in terms of EA values (in the animal movement, anyway, where evidence seems scarce), so I think there could be reason to do more work on establishing the present state of things. That isn't an argument against considering the future, or against working out how to do it better, but it is difficult to consider the future if we are not sufficiently aware of the present.