How can I apply person-affecting views to Effective Altruism?

post by New_EA · 2020-04-29T04:57:11.042Z · score: 19 (11 votes) · EA · GW · No comments

This is a question post.

Contents

  Answers
    19 MichaelPlant
    14 Robert_Wiblin
    13 technicalities
    9 trammell
    6 evelynciara
    2 MichaelStJules
    1 Rowan_Stanley
    0 Mati_Roy
    -1 ofer
No comments

Hey everyone! I'm very interested in Effective Altruism, and most of my information on it comes from 80,000 Hours' website. The info is very useful, but, as the title of this question suggests, I hold person-affecting views, so it occurs to me that the world's largest-scale and most serious problems might be different in my own worldview than in theirs. (If you aren't familiar with the term, person-affecting views hold that actions are only morally relevant to beings who will exist regardless of whether the action is taken; for example, I think the world ending would be bad because 7 billion people would die, but not because their descendants would be prevented from ever being born.) Does anyone have thoughts on where I can find problem profiles and recommendations for an Effective Altruism lifestyle based on a person-affecting worldview (especially for a conservative Christian worldview)?

Answers

answer by MichaelPlant · 2020-04-29T09:50:12.374Z · score: 19 (12 votes) · EA(p) · GW(p)

I'm struggling to think of much written on this topic - I'm a philosopher and reasonably sympathetic to person-affecting views (although I don't assign them my full credence) so I've been paying attention to this space. One non-obvious consideration is whether to take an asymmetric person-affecting view (extra happy lives have no value, extra unhappy lives have negative value) or a symmetric person-affecting view (extra lives have no value).

If the former, one is pushed towards some concern for the long-term anyway, as Halstead argues here [EA · GW], because the future will contain lots of unhappy lives that it would be good to prevent from existing.

If the latter - which I think, after long-reflection, is the more plausible version, even though it is more prima facie unintuitive - then that is practically sufficient, but not necessary, for concentrating on the near-term, i.e. this generation of humans; animals won't, for the most part, exist whatever we choose to do. I say not necessary because one could, in principle, think all possible lives matter and still focus on near-humans due to practical considerations.

But 'prioritise current humans' still leaves it wide open what you should do. The 'canonical' EA answer for how to help current humans is by working on global (physical) health and development. It's not clear to me that this is the right answer. If I can be forgiven for tooting my own horn, I've written a bit about this in this (now somewhat dated) post on mental health [EA · GW], the relevant section being "why might you - and why might you not - prioritise this area [i.e. mental health]".

comment by MichaelStJules · 2020-04-29T20:50:03.158Z · score: 2 (1 votes) · EA(p) · GW(p)
If the latter - which I think, after long-reflection, is the more plausible version, even though it is more prima facie unintuitive - then that is practically sufficient, but not necessary, for concentrating on the near-term, i.e. this generation of humans; animals won't, for the most part, exist whatever we choose to do. I say not necessary because one could, in principle, think all possible lives matter and still focus on near-humans due to practical considerations.

You could rescue or even buy animals from factory farms. Plausibly, doing this for factory farmed chickens could be very cost-effective with such person-affecting views, and buying them from factory farms in developing countries might be especially cost-effective. Buying factory farmed animals would be pretty uncooperative with the rest of the animal movement, though, and if you assign some moral weight to asymmetric or symmetric totalist views, this could be pretty bad in expectation (although the expected effect on supply is less than one per animal saved, so this might not look actively harmful with symmetric views).
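The "less than one per animal" supply point can be sketched with toy numbers. The cumulative elasticity value below is an illustrative assumption for the sketch, not an empirical estimate:

```python
# Toy sketch: buying animals from a farm adds demand, which induces
# some extra future production. The cumulative elasticity figure is
# an assumption chosen for illustration only.

def induced_production(animals_bought, cumulative_elasticity=0.6):
    """Expected number of additional animals farmed in response to
    the extra demand from buying `animals_bought` animals."""
    return animals_bought * cumulative_elasticity

bought = 100                       # existing chickens rescued
induced = induced_production(bought)

# On a person-affecting view, only the 100 existing chickens count.
# On a totalist view, the induced future chickens partly offset the
# rescue, but the offset stays below one per animal whenever the
# cumulative elasticity is below 1.
assert induced < bought
```

Under any elasticity below 1, the purchase spares more existing animals than it brings into existence, which is why the comment says it "might not look actively harmful" even on symmetric views.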

EDIT: The value of information question is interesting. Suppose it would take you 2 months to research and carry out a rescue/buy for factory farmed chickens raised for meat. Then it wouldn't be worth even looking into, because the chickens alive when you start will all have been killed already. But if someone does enough of the work for you that you could do it within about a month, then it could be worth it to do. Egg-laying hens live longer, probably about a year or two.

Working on abortion might be similar for someone who thought death was bad.

comment by MichaelPlant · 2020-04-30T17:34:20.111Z · score: 4 (2 votes) · EA(p) · GW(p)

Yes, agree you could save existing animals. I'd actually forgotten until you jogged my memory, but I talk about that briefly in my thesis (chapter 3.3, p92) and suppose saving animals from shelters might be more cost-effective than saving humans (given a PAV combined with deprivationism about the badness of death).

answer by Robert_Wiblin · 2020-05-06T12:47:58.632Z · score: 14 (6 votes) · EA(p) · GW(p)

If I weren't interested in creating more new beings with positive lives I'd place greater priority on:

  • Ending the suffering and injustice suffered by animals in factory farming
  • Ending the suffering of animals in the wilderness
  • Slowing ageing, or cryonics (so the present generation can enjoy many times more positive value over the course of their lives)
  • Radical new ways to dramatically raise the welfare of the present generation (e.g. direct brain stimulation as described here)

I haven't thought much about what would look good from a conservative Christian worldview.

answer by technicalities · 2020-04-29T09:33:30.983Z · score: 13 (6 votes) · EA(p) · GW(p)

Welcome!

It's a common view. Some GiveWell staff hold this view, and indeed most of their work involves short-term effects, probably for epistemic reasons. Michael Plant has written about the EA implications [EA · GW] of person-affecting views, and emphasises improvements to world mental health.

Here's a back-of-the-envelope estimate [EA · GW] for why person-affecting views might still be bound to prioritise existential risk though (for the reason you give, but with some numbers for easier comparison).

Dominic Roser and I have also puzzled over [EA(p) · GW(p)] Christian longtermism a bit.

answer by trammell · 2020-04-29T12:54:53.754Z · score: 9 (7 votes) · EA(p) · GW(p)

This paper is also relevant to the EA implications of a variety of person-affecting views. https://globalprioritiesinstitute.org/wp-content/uploads/2020/Teruji_Thomas_asymmetry_uncertainty.pdf

comment by MichaelA · 2020-05-11T04:51:26.150Z · score: 1 (1 votes) · EA(p) · GW(p)

There’s also a talk version here: https://www.youtube.com/watch?v=DAavPa8j0lM

answer by evelynciara · 2020-04-29T07:56:37.645Z · score: 6 (4 votes) · EA(p) · GW(p)

If you think that embryos and fetuses have moral value, then abortion becomes a very important issue in terms of scale. However, it's not very neglected, and the evidence suggests that increased access to contraceptives, not restricted access to abortion services, is driving the decline in abortion rates in the U.S.

Designing medical technology to reduce miscarriages (which are spontaneous abortions) may be an especially important, neglected, and tractable way to prevent embryos/fetuses and parents from suffering. (10-50% of pregnancies end in miscarriages.)

comment by Larks · 2020-04-29T15:41:08.012Z · score: 6 (2 votes) · EA(p) · GW(p)
However, it's not very neglected, and the evidence suggests that increased access to contraceptives, not restricted access to abortion services, is driving the decline in abortion rates in the U.S.

The linked opinion piece asserts that abortion regulations are not responsible for the improvement, but doesn't seem to provide any evidence to back it up?

I am not that familiar with the literature, but it would seem prima facie rather implausible to me that making something illegal wouldn't help reduce its prevalence. If statistics suggest the US decline is being driven by other policies, I would guess this is because the restrictions that have been put in place are quite weak - abortion-for-convenience remains legal in all 50 states, and even if your state did impose some limitation, it cannot stop someone travelling to an unregulated state. However, a quick google suggests that some academic research does find that the restrictions that have been put in place have helped reduce the rate. Additionally, it seems that the number of abortions in Ireland has gone up significantly since their law change, even taking into account people travelling to the UK, so presumably reversing that change would help reduce the number. This also fits with my impression of what has happened in many other countries when they banned/unbanned abortion.

I totally agree that reducing miscarriage rates could be very interesting. Are you aware of any tractable interventions? I had a little look a few years ago but did not find anything very satisfactory.

comment by MichaelPlant · 2020-04-29T09:29:27.896Z · score: 6 (3 votes) · EA(p) · GW(p)

Plausibly, fetuses will not be morally relevant on such a view, as they won't exist whatever we choose to do.

comment by Larks · 2020-04-29T15:51:40.193Z · score: 5 (3 votes) · EA(p) · GW(p)

It would be interesting if person-affecting arguments lead one to pass on reducing abortion, because while you care about currently existing babies, by the time any intervention you might support today has any effect, they will already have been born or not, and hence it will be too late to help them. There will be a new cohort in need of help, of course, but you don't care about them until they're conceived, so won't be interested in working to help them now.

More generally, you would neglect any intervention that only affects people under the age of X if it will take longer than X years to implement the intervention.

However, if such an initiative were started by longtermists, person-affecting-view-ists might join it halfway through. This suggests an interesting way for longtermists to leverage the help of people with person-affecting views! (It is possible you might think it was immoral to exploit their temporal inconsistency in this way, however.)

comment by MichaelStJules · 2020-04-29T21:01:23.145Z · score: 2 (1 votes) · EA(p) · GW(p)

This is assuming that death isn't bad, though, right? In a sense, the fetus exists in the whole of the outcome, past, present and future together, regardless of what we do, and then it becomes a question of whether or not a longer life can be better on such an account for a fetus (and whether or not fetuses should count). New_EA did write:

I think the world ending would be bad because 7 billion people would die

EDIT: Ah, did you mean we'd always be too late? On a wide person-affecting view, the future ones could still matter.

comment by MichaelStJules · 2020-04-29T23:40:37.453Z · score: 2 (1 votes) · EA(p) · GW(p)
If you think that embryos and fetuses have moral value, then abortion becomes a very important issue in terms of scale.

This might not be the case if you have a narrow person-affecting view so that whether A or B is born doesn't matter, even if one would be substantially better off than the other (see my answer [EA(p) · GW(p)] on the nonidentity problem). In that case, the fetuses that don't yet exist (or those that won't exist until after some point) might not matter, because which ones would come to exist could be sensitive to your actions (think butterfly effect). Then, the scale of the problem is restricted to the fetuses whose identities are already determined, and you might be too late to help almost all of them.

Same conclusion with presentist views, so that only those that currently exist matter.

EDIT: Larks made the same point [EA(p) · GW(p)].

answer by MichaelStJules · 2020-04-29T23:22:23.810Z · score: 2 (1 votes) · EA(p) · GW(p)

80,000 Hours has a cause quiz, possibly a bit dated and sometimes a bit buggy (sometimes you see the rankings during the quiz, sometimes you only see them at the end, and sometimes there's an extra question).

Question 4 is particularly relevant for person-affecting views, but it might not get at your specific views, since there are many different kinds of person-affecting views:

Question 4: Here are two scenarios:
  • A nuclear war kills 90% of the human population, but we rebuild and civilization eventually recovers.
  • A nuclear war kills 100% of the human population and no people live in the future.
How much worse is the second scenario?

Besides the causes listed there, there could also be mental health and pain relief, and since you think death is bad, cryonics and life extension.

Whether or not you think it's bad to bring absolutely miserable lives into existence (the asymmetry), that could have important consequences. If you do think it's bad, then the longterm future could matter a lot.

Your response to the nonidentity problem also matters. Essentially: if either A or B will be born, and the value in (total quality of) their lives will be X and Y respectively, with X < Y, does it matter to you whether A or B is born? Is this the same to you as the choice between A being born and living with value X, or with value Y? As an example, if a couple wants to have a child, but the mother has been infected with the Zika virus, considering only the effects on the child, should the couple wait to conceive until it's unlikely the child would be affected by Zika? If they wait, a different child will be born. If you don't think it matters whether A or B is born, regardless of X and Y (even if one or either would be miserable), then basically the longterm future shouldn't matter to you.

If you do think it's bad to bring bad lives into existence, or that it matters whether A or B is born (considering only their interests), then the longterm future could still matter a lot. Assuming you do focus on the longterm future (you might still have empirical doubts), your focus would be on preventing s-risks or on ensuring the future's quality is as good as possible conditional on moral patients existing, but not on ensuring moral patients exist for their own sake. See the link about s-risks, trammell's answer [EA(p) · GW(p)] about this paper, or the talk about that paper here.

answer by Rowan_Stanley · 2020-05-10T07:44:57.982Z · score: 1 (1 votes) · EA(p) · GW(p)

The Effective Altruism for Christians website and Facebook group might be a useful place to start, if you haven't come across those before.

I don't think they have developed problem profiles etc., but the people there may have a similar outlook to you and be able to point you to resources that are more relevant from a Christian and/or person-affecting perspective.

answer by Mati_Roy · 2020-05-01T02:49:21.717Z · score: 0 (2 votes) · EA(p) · GW(p)

Even if you're just 99% sure that Christianity is true, it might still make sense to focus on worlds where it's false, given that in the world where it's true we already have an aligned superintelligence and are all immortal.

The book The Ethics of Cryonics: Is it Immoral to be Immortal? talks about cryopreserving all fetuses. Cryonics might also be the only way to bring people currently existing to a time when they can live rich and long lives.

answer by ofer · 2020-04-29T08:53:16.042Z · score: -1 (6 votes) · EA(p) · GW(p)

Hey there!

The universe/multiverse may be very large and (in the fullness of time) may contain a vast number of beings that we should care about and that we (and other civilizations similar to us) may be able to help in some way by using our cosmic endowment wisely. So person-affecting views seem to prescribe the standard maxipok strategy (see also The Precipice by Toby Ord).

[EDIT: by "we should care" I mean something like "we would care if we knew all the facts and had a lot of time to reflect".]

comment by MichaelPlant · 2020-04-29T09:59:19.933Z · score: 3 (2 votes) · EA(p) · GW(p)

I think you might not have clocked the OP's comment that the morally relevant beings are just those that exist whatever we do, which would presumably rule out concerns for lives in the far future.*

*Pedantry: there could actually be future aliens who exist whatever we do now. Suppose some aliens will turn up on Earth in 1 million years and we've had no interaction with them. They will be 'necessary' from our perspective and thus the type of person-affecting view stated would conclude such people matter.**

**Further pedantry: if our actions changed their children, which they presumably would, it would just be the first generation of extraterrestrial visitors who mattered morally on this view.

comment by Carl_Shulman · 2020-04-29T17:38:13.857Z · score: 11 (8 votes) · EA(p) · GW(p)

It doesn't seem like mere pedantry if it requires substantial revision of the view to retain the same action recommendations. Symmetric person-affecting total utilitarianism does look to be dominated by these sorts of possibilities of large stocks of necessary beings without some other change. I'm curious what your take on the issues raised in that post is.

comment by ofer · 2020-04-29T21:09:47.973Z · score: 1 (1 votes) · EA(p) · GW(p)

I think you might not have clocked the OP's comment that the morally relevant beings are just those that exist whatever we do, which would presumably rule out concerns for lives in the far future.*

What I tried to say is that the spacetime of the universe(s) may contain a vast number of sentient beings regardless of what we do. Therefore, achieving existential security and having something like a Long Reflection may allow us to help a vast number of sentient beings (including ones outside our future light cone).

**Further pedantry: if our actions changed their children, which they presumably would, it would just be the first generation of extraterrestrial visitors who mattered morally on this view.

I think we're not interpreting the person-affecting view described in the OP in the same way. The way I understand the view (and the OP is welcome to correct me if I'm wrong) it entails we ought to improve the well-being of the extraterrestrial visitors' children (regardless of whether our actions changed them / caused their existence).

comment by Mati_Roy · 2020-05-01T02:29:46.069Z · score: 2 (2 votes) · EA(p) · GW(p)

oh wow, this made me update towards caring about people in the future even if the person-affecting view is true (because we might not change their existence if they are both in the future *and* in a far away location)
