Posts

Project for Awesome 2022: Video sign-ups 2022-01-10T04:27:04.230Z
Liberty in North Korea, quick cost-effectiveness estimate 2021-11-03T02:29:17.264Z
Why does (any particular) AI safety work reduce s-risks more than it increases them? 2021-10-03T16:55:31.623Z
It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link] 2021-09-07T21:53:53.773Z
How much should we still worry about catching COVID? [Links and Discussion Thread] 2021-08-29T06:06:08.892Z
Welfare Footprint Project - a blueprint for quantifying animal pain 2021-06-26T20:05:10.191Z
Voting open for Project for Awesome 2021! 2021-02-12T02:22:20.339Z
Project for Awesome 2021: Video signup and resources 2021-01-31T01:57:59.188Z
Project for Awesome 2021: Early coordination 2021-01-27T19:11:00.600Z
Even Allocation Strategy under High Model Ambiguity 2020-12-31T09:10:09.048Z
[Summary] Impacts of Animal Well‐Being and Welfare Media on Meat Demand 2020-11-05T09:11:38.138Z
Hedging against deep and moral uncertainty 2020-09-12T23:44:02.379Z
Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering? 2020-06-22T16:41:58.831Z
Physical theories of consciousness reduce to panpsychism 2020-05-07T05:04:39.502Z
Replaceability with differing priorities 2020-03-08T06:59:09.710Z
Biases in our estimates of Scale, Neglectedness and Solvability? 2020-02-24T18:39:13.760Z
[Link] Assessing and Respecting Sentience After Brexit 2020-02-19T07:19:32.545Z
Changes in conditions are a priori bad for average animal welfare 2020-02-09T22:22:21.856Z
Please take the Reducing Wild-Animal Suffering Community Survey! 2020-02-03T18:53:06.309Z
What are the challenges and problems with programming law-breaking constraints into AGI? 2020-02-02T20:53:04.259Z
Should and do EA orgs consider the comparative advantages of applicants in hiring decisions? 2020-01-11T19:09:00.931Z
Should animal advocates donate now or later? A few considerations and a request for more. 2019-11-13T07:30:50.554Z
MichaelStJules's Shortform 2019-10-24T06:08:48.038Z
Conditional interests, asymmetries and EA priorities 2019-10-21T06:13:04.041Z
What are the best arguments for an exclusively hedonistic view of value? 2019-10-19T04:11:23.702Z
Defending the Procreation Asymmetry with Conditional Interests 2019-10-13T18:49:15.586Z
Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests 2019-07-04T23:56:44.330Z

Comments

Comment by MichaelStJules on Suggestion: Effective Animal Advocacy forum · 2022-01-23T19:13:24.249Z · EA · GW

There should be a simple url that gives you the filtered front page for a given cause and keeps you in the cause if you browse normally, e.g. forum.effectiveanimaladvocacy.org.

Comment by MichaelStJules on Running for U.S. president as a high-impact career path · 2022-01-23T19:02:50.085Z · EA · GW

What about running both a Democrat and a Republican, going for a bipartisan presidential ticket? It could be made clear from the start that they intend to run on a bipartisan ticket.

Comment by MichaelStJules on The Subjective Experience of Time: Welfare Implications · 2022-01-21T01:35:24.809Z · EA · GW

The section on cortical oscillations suggests cortical oscillations may track the subjective experience of time and that we could measure their frequencies across animals. It pretty directly puts forward a hypothesis that would support the thesis if it holds (or refute it if it doesn't), with a citation to support the hypothesis. Your criticism on this point seems wrong to me.

Comment by MichaelStJules on The Subjective Experience of Time: Welfare Implications · 2022-01-21T01:24:31.829Z · EA · GW

Redrafting every time seems too demanding, you can't please everyone, and sometimes the reader is wrong. Ideally, this should be handled before publication. If a reviewer before publication gets this impression, then sure, redrafting can make sense. I'm not sure there should be any formal commitment to redrafting.

Comment by MichaelStJules on The Subjective Experience of Time: Welfare Implications · 2022-01-20T18:22:36.922Z · EA · GW

(Disclaimer: I work at Rethink Priorities, but I'm speaking only for myself.)

40% doesn't seem low to me, even if it means "probably inaccurate", since it could still have substantial impact on cost-effectiveness estimates and prioritization, if we use expected values. Maybe it would be better for the post to explain this (if it hasn't; it's been a while since I read it).

Generally, dismissing something just because it's probably inaccurate will tend to lead to worse outcomes in the long run. Plenty of events or possibilities with less than 50% probability matter a lot, because of the stakes. Most of the net benefits of insurance come from low-probability events (you'd be better off just saving money for likely events), EA also does hits-based giving, and many (but not all) EAs working on extinction risk think extinction is actually unlikely any time soon.

For non-EA audiences, maybe it's better to explain this each time for low probability possibilities?

Comment by MichaelStJules on Why I prioritize moral circle expansion over artificial intelligence alignment · 2022-01-20T06:08:53.289Z · EA · GW

Do you think there's a better way to discuss biases that might push people to one cause or another? Or that we shouldn't talk about such potential biases at all?

What do you mean by this post discouraging cooperation?

What do you expect an invitation for critical discussion to look like? I usually take that to be basically implicit when something is posted to the EA Forum, unless the author states otherwise.

Comment by MichaelStJules on [linkpost] Peter Singer: The Hinge of History · 2022-01-20T05:29:33.165Z · EA · GW

You could have deontological commitments to prevent atrocities, too, but with an overriding commitment that you shouldn't actively commit an atrocity, even in order to prevent a greater one. Or, something like a harm-minimizing consequentialism with deontological constraints against actively committing atrocities.

Of course, you still have to prioritize and can make mistakes, which means some atrocities may go ignored, but I think this takes away the intuitive repugnance and moral blameworthiness.

Comment by MichaelStJules on Reducing long-term risks from malevolent actors · 2022-01-19T22:23:55.388Z · EA · GW

Could you expand on this? Are you suggesting that it could take attention away from solutions ("inspiring malevolent actors to contribute positively while keeping true to their beliefs") that would reduce x-risk more?

By "keeping true to their beliefs", do you mean their malevolent views? E.g. we want a non harmful or less harmful outlet for their malevolence? Or, just working with their beliefs without triggering their malevolence on a large scale? If the latter, what kinds of beliefs do you have in mind?

Comment by MichaelStJules on Personal Perspective: Is EA particularly promising? · 2022-01-19T22:14:03.165Z · EA · GW

The EA Mental Health Survey may have involved heavy self-selection for mental health issues, so I would be careful about giving it much weight as representative of the community.

Comment by MichaelStJules on [linkpost] Peter Singer: The Hinge of History · 2022-01-17T06:51:17.764Z · EA · GW

I think some moral views, e.g. some rights-based ones or ones with strong deontological constraints, would pretty necessarily disavow atrocities on principle, not just for fairly contingent reasons based on anticipated consequences like (act) utilitarians would. Some such views could also still rank issues.

I basically agree with the rest.

Comment by MichaelStJules on Why is Operations no longer an 80K Priority Path? · 2022-01-14T06:51:50.162Z · EA · GW

Maybe it's easier in effective animal advocacy, because there's a broader animal advocacy movement to draw from and some large animal advocacy orgs building talent? Also, EAs seem to disproportionately have STEM backgrounds and want to do research, but this is probably not the case for animal advocates in general, so the proportion of animal advocates with ops skills may be higher than for EAs.

Comment by MichaelStJules on EAA is relatively overinvesting in corporate welfare reforms · 2022-01-06T15:52:25.587Z · EA · GW

Another way of putting this is that these corporate welfare reforms are about half as good as preventing their births, or better, for their welfare. So, corporate welfare reforms over a region (a country, a US state, a province, the EU, a continent, the world, etc.) would be as good as cutting present and future factory farming in that region in half or better (in welfarist terms, ignoring other effects, assuming no new lower standard farms under the reform scenario, etc.).
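
To spell out the welfarist arithmetic behind this (a toy formalization; the notation and the exact "half as good" assumption are mine): suppose each conventional farmed life has welfare $-w$ (with $w > 0$), and a reform leaves each reformed life at $-w/2$ or better. Then, over $N$ such lives,

$$N \cdot \left(-\tfrac{w}{2}\right) \;\ge\; \tfrac{N}{2} \cdot (-w) + \tfrac{N}{2} \cdot 0,$$

so reforming all $N$ lives is at least as good, in these welfarist terms, as preventing half of those births.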

Comment by MichaelStJules on Animal welfare EA and personal dietary options · 2022-01-06T06:05:15.041Z · EA · GW

For comparison, from Rethink Priorities' 2019 EA Community Survey:

Comment by MichaelStJules on Animal welfare EA and personal dietary options · 2022-01-06T06:03:17.372Z · EA · GW

I would expect "Anything goes. A normal meat-eating diet, optimized only for health and convenience." to be a reducetarian diet in practice compared to the average diet for most people in most developed countries, since I think the average person eats more animal products and too few plants than is optimal. I would guess that for most people, a roughly optimal diet for health and convenience could be found within the group "vegan except for dairy, bivalves and when it would be inconvenient", which also falls under Approximate veg*nism in places where most restaurants have decent veg options with protein or if you rarely eat food from restaurants, although there are other considerations. The main exceptions would be when getting food from a restaurant with no good veg options (or when in a group looking to order from such a restaurant, and you would otherwise try to persuade them to get food elsewhere).

When considering only the more direct effects on farmed animals and wild animals, the only animal products I would recommend replacing with plant-based foods would be from herbivorous* factory farmed animals roughly the size of turkeys or smaller, other than bivalves and similarly unlikely to be conscious animals, so

  1. Factory farmed poultry (chickens, turkeys, ducks) and their eggs
  2. Herbivorous farmed fish
  3. Herbivorous farmed invertebrates, other than bivalves and others with similar or lower expected moral weight
  4. Herbivorous farmed amphibians and reptiles, although relatively few people eat them often, anyway

This assumes their lives are bad on average, which is my expectation.

*What I mean by "herbivorous" here is that their diets while farmed are almost exclusively plant-based. Also, I would add if their diets are herbivorous except for animal products from farmed animals, themselves on the above list, and maybe extend this further.

I'm personally bivalvegan, i.e. vegan except for bivalves, and plan to stick with this.

Comment by MichaelStJules on Animal welfare EA and personal dietary options · 2022-01-06T02:21:00.030Z · EA · GW

Based on Welfare Footprint Project:

Over the 1.5-2 years of their lives (including transportation and slaughter), conventional egg-laying hens are estimated to spend:

  1. 431 hours (~18 days), or ~2.5% of their lives, with Disabling pain
  2. 4054 hours (~169 days), or ~23% of their lives, with Hurtful pain

Over the 45-50* days of their lives (ignoring transportation and slaughter), conventional chickens raised for meat (broilers) are estimated to spend:

  1. 51 hours (~2.1 days), or ~4.5% of their lives, with Disabling pain
  2. 297 hours (~12.4 days), or ~26% of their lives, with Hurtful pain
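
As a quick sanity check on the percentages above (my own arithmetic; the hours are WFP's estimates as quoted, and the life lengths are the rough figures used above):

  # Recompute the share of life spent in each pain category from the quoted hours.
  hen_life_days = 2 * 365        # upper end of the 1.5-2 year range
  broiler_life_days = 47.5       # midpoint of the 45-50 day range
  estimates = [
      ("hens, Disabling", 431, hen_life_days),
      ("hens, Hurtful", 4054, hen_life_days),
      ("broilers, Disabling", 51, broiler_life_days),
      ("broilers, Hurtful", 297, broiler_life_days),
  ]
  for label, hours, life_days in estimates:
      days = hours / 24
      print(f"{label}: {days:.1f} days, {100 * days / life_days:.1f}% of life")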

 

The above estimates:

  1. Allow pains from different sources to overlap in time and sum their durations even if they overlap, so the actual durations could be shorter. I would guess Hurtful pain would be ignored when also experiencing Disabling pain (from another source).
  2. Assume the chickens are not in pain while they sleep. From a quick Google search, chickens sleep about 8 hours a day, so they spend 1/3 of their time sleeping. I don't know how much WFP assumed they sleep.

 

My own guess would be that under a symmetric ethical view like classical utilitarianism, each of the Disabling pain or the Hurtful pain alone would outweigh the good in these chickens' lives in expectation, and both together would very likely outweigh the good, since

  1. It seems like these chickens spend the equivalent of ~1/3 of their waking hours with Hurtful pain. (Shorter since pains from multiple sources may be experienced at the same time but their durations are added. Still, multiple pains at the same time are probably together worse than any of them alone at a time, per second.)
  2. At best, their goods will be as good (per second) as Hurtful pain can be bad, and they won't be experienced enough of the time to make up for the Hurtful pain.
  3. I'd guess Disabling pain is at least ~5x as bad as Hurtful pain on average per second, so the Disabling pain in their lives probably contributes at least half as much bad overall as the Hurtful pain. (EDIT: adjusted from at least ~10x to at least ~5x)
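
To make that comparison explicit, here's a rough sketch; everything other than the WFP hours (the ~5x weight, the 1/3-of-time-asleep assumption, the assumption that goods are at best as intense per hour as Hurtful pain is bad, and the fraction of remaining waking time spent on goods) is my own guess:

  # Weight Hurtful pain at 1 per hour and Disabling pain at ~5x that, and credit
  # goods at Hurtful-pain intensity over some fraction of the remaining waking
  # hours; returns (weighted bad, weighted good).
  def bad_and_good(life_days, hurtful_h, disabling_h,
                   disabling_weight=5.0, sleep_fraction=1/3, good_fraction=0.25):
      waking_h = life_days * 24 * (1 - sleep_fraction)
      bad = hurtful_h + disabling_weight * disabling_h
      good = good_fraction * (waking_h - hurtful_h - disabling_h)
      return bad, good

  print(bad_and_good(2 * 365, 4054, 431))  # hens: bad exceeds good unless good_fraction > ~0.86
  print(bad_and_good(47.5, 297, 51))       # broilers: bad exceeds good even with good_fraction = 1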

 

* The typical broiler lives 40-45 days, but WFP added the pain from 1/140th of the average female broiler breeder, who lives 1-2 years and produces ~140 chickens for meat. 2 years/140=5.2 days.

Comment by MichaelStJules on Animal welfare EA and personal dietary options · 2022-01-06T00:51:58.928Z · EA · GW

Welfare Footprint Project has, in my view, the best analysis of the (physical and psychological) pain farmed chickens go through on average under conventional factory farm conditions (and with specific welfare improvements). 

Here are the pages for:

  1. Egg-laying hens.
  2. Chickens raised for meat (excluding transportation and slaughter, which seems very bad with live-shackle slaughter, with birds' bones frequently broken; WFP is also looking into slaughter).

They don't cover "goods" in their lives, but you could come up with estimates/ranges for these based on life expectancies and the kinds of goods you might expect, their durations and frequencies. Their life expectancies are:

  1. 1.5-2 years for egg-laying hens.
  2. 40-60 days for chickens raised for meat (other than broiler breeders, who live 1.5-2 years and are probably chronically hungry with conventional breeds).

They define 4 categories of intensities of pain: Annoying, Hurtful, Disabling and Excruciating. To summarize the definitions from this link:

  1. Annoying pain can be ignored most of the time and results in little behavioural change.
  2. Hurtful pain prevents engagement in positive activities with no immediate benefits like play and dustbathing, and animals with Hurtful pain would be aware of this pain most of the time, although can focus on other things and sometimes ignore it.
  3. Disabling pain can't be ignored and is continuously distressing.
  4. Excruciating pain is not tolerable. It can lead to extremely reckless behaviour to end it, e.g. exposure to predators, loud vocalizations. Things like scalding and severe burning.

Assuming symmetric ethical views, I would guess all of their pleasures (and other goods) would be at most as good as "Hurtful pain" can be bad, because "Disabling pain", by definition, can't be ignored and prevents enjoyment/positive welfare (of the kind they typically experience, presumably), and intuitively the kinds of enjoyment/positive welfare they can experience in factory farm conditions do not seem intense to me.

The kinds of pleasures I might expect in chickens would be from eating, comfort/rest and maybe just interest in things happening around them. Given the environments in conventional factory farms, I don't expect play, mating or parenting. Maybe chickens farmed for meat dustbathe (egg-laying hens in conventional cages wouldn't). I don't know if they form positive social bonds in these environments, but they might cuddle or preen/groom each other. I don't know if preening/grooming is enjoyable for chickens (I haven't looked into this). Maybe they can imagine things (visually or otherwise), and derive enjoyment from that. Maybe they enjoy some of their dreams, but presumably their dreams could be bad, too.

Comment by MichaelStJules on EA megaprojects continued · 2022-01-01T00:15:19.981Z · EA · GW

The difference is not that big (154/125=1.232), so it could be unrelated to the quality or format, and instead have to do with timing or other things.

One big difference is the inclusion of several examples in the post itself here, and credit for that should go to the authors, whereas users may give most of the credit for the examples in your question to the corresponding answers, not the question itself. If someone wanted to upvote specific examples in this post, they'd upvote this entire post, whereas for the question, they could upvote specific answers instead (or too). If you include karma from the question itself and the answers, there's far more in your question, although probably substantial double counting from users upvoting the post and answers.

Comment by MichaelStJules on Convergence thesis between longtermism and neartermism · 2021-12-31T19:32:02.667Z · EA · GW

The higher you think the bar is the more likely it is that longtermist things and neartermist things will converge. At the very top they will almost certainly converge as you are stuck doing mostly things that can be justified with RCTs or similar levels of evidence.

I'm not sure anything would be fully justified with RCTs or similar levels of evidence, since most effects can't be measured in practice (especially far future effects), so we're left using weaker evidence for most effects or just ignoring them.

Comment by MichaelStJules on Convergence thesis between longtermism and neartermism · 2021-12-31T19:04:28.728Z · EA · GW

9. Experts and common sense suggests that it is plausible that the best thing you can do for the long term is to make the short term go well

It is not unusual to hear people say that the best thing you can do for the long term is to make the short term good. This seems a reasonable common sense view.

Even people who are trusted and considered experts with the EA community express this view. For example here Peter Singer suggest that “If we are at the hinge of history, enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do; and if we are not at that critical point, it will have been a good thing to do anyway” (source)

It's unfortunate that Singer didn't expand more on this, since we're left to speculate, and my initial reaction is that this is false, and on a more careful reading, probably misleading.

  1. How is he imagining reducing poverty and increasing education moving things in the right direction? Does it lead to more fairly distributed influence over the future which has good effects, and/or a wider moral circle? Is he talking about compounding wealth/growth? Does it mean more people are likely to contribute to technological solutions? But what about accelerating technological risks?
  2. Does he think "enabling people to escape poverty and get an education" moves things in the right direction as much as almost anything else in expectation, in case we have both likelihood and "distance" to consider?
  3. Maybe "enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do", but "almost anything else" could leave a relatively small share of interventions that we can reliably identify as doing much better for the long term (and without also picking things that backfire overall).
  4. Is he just expressing skepticism that longtermist interventions actually reliably move things in the right direction at all without backfiring, e.g. due to cluelessness/deep uncertainty? I'm most sympathetic to this, but if this is what he meant, he should have said so.

Also, I don't think Singer is an expert in longtermist thinking and longtermist interventions, and I have not seen him engage a lot with longtermism. I could be wrong. Of course, that may be because he's skeptical of longtermism, possibly justifiably so.

Comment by MichaelStJules on "Disappointing Futures" Might Be As Important As Existential Risks · 2021-12-31T01:00:26.733Z · EA · GW

How many different plausible definitions of flourishing that differ significantly enough do you expect there to be?

One potential solution would be to divide the future spacetime (not necessarily into contiguous blocks) in proportion to our credences in them (or evenly), and optimize separately for the corresponding view in each. With equal weights, each of n views could get at least about 1/n of what it would if it had 100% weight (taking ratios of expected values), assuming there isn't costly conflict between the views and no view (significantly) negatively values what another finds near optimal in practice. They could potentially do much better with some moral trades and/or if there's enough overlap in what they value positively. One view going for a larger share would lead to zero sum work and deadweight loss as others respond to it.
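
A minimal way to see the "at least about 1/n" claim (my own sketch, assuming each view's value $V_i$ is concave, or linear, in the resources $R$ it controls, with $V_i(0) = 0$, and that the other views' shares are worth at least 0 by its lights):

$$V_i\!\left(\frac{R}{n}\right) \;\ge\; \frac{1}{n} V_i(R) + \left(1 - \frac{1}{n}\right) V_i(0) \;=\; \frac{V_i(R)}{n},$$

so each view gets at least $1/n$ of what it would get with 100% weight, plus whatever non-negative value it assigns to the other views' shares, which is where moral trade and overlap can push it above $1/n$.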

I would indeed guess that a complex theory of flourishing ("complexity of value", objective list theories, maybe), a preference/desire view and hedonism would assign <1% value to each other's (practical) optima compared to their own. I think there could be substantial agreement between different complex theories of flourishing, though, since I expect them generally to overlap a lot in their requirements. I could also see hedonism and preference views overlapping considerably and having good moral trades, in case most of the resource usage is just to sustain consciousness (and not to instantiate preference satisfaction or pleasure in particular) and most of the resulting consciousness-sustaining structures/activity can be shared without much loss on either view. However, this could just be false.

Comment by MichaelStJules on Democratising Risk - or how EA deals with critics · 2021-12-30T01:42:33.874Z · EA · GW

Also, I think we should be clear about what kinds of serious harms would in principle be justified on a rights-based (or contractualist) view. Harming people who are innocent or not threats seems likely to violate rights and be impermissible on rights-based (and contractualist) views. This seems likely to apply to massive global surveillance and bombing civilian-populated regions, unless you can argue on such views that each person being surveilled or bombed is sufficiently a threat and harming innocent threats is permissible, or that collateral damage to innocent non-threats is permissible. I would guess statistical arguments about the probability of a random person being a threat are based on interpretations of these views that the people holding them would reject, or that the probability for each person being a threat would be too low to justify the harm to that person.

So, what kinds of objectionable harms could be justified on such views? I don't think most people would qualify as serious enough threats to justify harm to them to protect others, especially people in the far future.

Comment by MichaelStJules on Democratising Risk - or how EA deals with critics · 2021-12-30T00:49:42.730Z · EA · GW

I realize now I interpreted "rights" in moral terms (e.g. deontological terms), when Halstead may have intended it to be interpreted legally. On some rights-based (or contractualist) views, some acts that violate humans' legal rights to protect nonhuman animals or future people could be morally permissible.

The longtermist could then argue that an analogous argument applies to "other-defence" of future generations.

I agree. I think rights-based (and contractualist) views are usually person-affecting, so while they could in principle endorse coercive action to prevent the violation of rights of future people, preventing someone's birth would not violate that then non-existent person's rights, and this is an important distinction to make. Involuntary extinction would plausibly violate many people's rights, but rights-based (and contractualist) views tend to be anti-aggregative (or at least limit aggregation), so while preventing extinction could be good on such views, it's not clear it would deserve the kind of priority it gets in EA. See this paper, for example, which I got from one of Torres' articles and takes a contractualist approach. I think a rights-based approach could treat it similarly.

It could also be the case that procreation violates the rights of future people pretty generally in practice, and then causing involuntary extinction might not violate rights at all in principle, but I don't get the impression that this view is common among deontologists and contractualists or people who adopt some deontological or contractualist elements in their views. I don't know how they would normally respond to this.

Considering "innocent threats" complicates things further, too, and it looks like there's disagreement over the permissibility of harming innocent threats to prevent harm caused by them.

Separately, note that a similar objection also applies to many forms of non-totalist longtermism. On broad person-affecting views, for instance, the future likely contains an enormous number of future moral patients who will suffer greatly unless we do something about it. So these views could also be objected to on the grounds that they might lead people to cause serious harm in an attempt to prevent that suffering.

I agree. However, again, on some non-consequentialist views, some coercive acts could be prohibited in some contexts, and when they are not, they would not necessarily violate rights at all. The original objection raised by Halstead concerns rights violations, not merely causing serious harm to prevent another (possibly greater) harm. Maybe this is a sneaky way to dodge the objection, and doesn't really dodge it at all, since there's a similar objection. Also, it depends on what's meant by "rights".

Comment by MichaelStJules on Democratising Risk - or how EA deals with critics · 2021-12-29T22:35:07.921Z · EA · GW

Depending on the view, legitimate self-defence and "other-defence" don't violate rights at all, and this seems close to common sense when applied to protect humans. Even deontological views could in principle endorse - but I think in practice today should condemn - coercively preventing individuals from harming nonhuman animals, including farmed animals, as argued in this paper, published in the Journal of Controversial Ideas, a journal led and edited by McMahan, Minerva and Singer. Of course, this conflicts with the views of most humans today, who don't extend similarly weighty rights/claims to nonhuman animals.

EDIT: I realize now I interpreted "rights" in moral terms (e.g. deontological terms), when you may have intended it to be interpreted legally.

Comment by MichaelStJules on Despite billions of extra funding, small donors can still have a significant impact · 2021-12-26T18:40:47.500Z · EA · GW

But if you were to donate $1,000 to CHAI, then either:

1. You expand CHAI’s available funding by $1,000. The cost-effectiveness of this grant should be basically the same as the final $1,000 that Open Philanthropy donated.

If Open Phil's judgement is good enough, and Open Phil was not holding back because they believed CHAI's marginal cost-effectiveness would drop below that of their marginal grantee, but instead for some other reason(s), e.g. donor coordination, the public support test, reducing dependence on Open Phil, then wouldn't this actually normally beat their final $1,000? So, in expectation, we can plausibly beat Open Phil's final $1,000 by topping up their grantees (assuming case 2 goes through well enough).

If Open Phil makes individual donor recommendations (and in the past, they've written why they haven't fully funded a given opportunity), then we can just follow those. It looks like they haven't been recommending the most well-known large EA organizations at all, though. Does Open Phil think they're fully funding these organizations (anticipating what those orgs will raise through other means)? If so, we should perhaps expect to be in case 2 almost all of the time.

Comment by MichaelStJules on EA Infrastructure Fund: May–August 2021 grant recommendations · 2021-12-26T09:45:27.966Z · EA · GW

I'd guess that individual donors planning to support the EAIF should top up Open Phil grantees, either with or instead of donating to the EAIF.

Based on the argument here by Ben Todd, I think individual donors who donate to Open Phil grantees do at least as well as both Open Phil and the EAIF, if all of the following assumptions hold:

  1. Open Phil's judgement is sufficiently good,
  2. The reason Open Phil doesn't fund EAIF grantees directly is because Open Phil believes them to be less cost-effective on the margin than its own direct grantees, so, at least,
    1. not just due to legal reasons,
    2. not just due to falling outside the scope of all of their grantmaking areas,
    3. not just because Open Phil tends to avoid small grants of the size the EAIF makes, and
    4. not just because Open Phil was unaware of them, and
    5. not due to any combination of the above.
  3. Open Phil's grant sizes are sufficiently sensitive in the right way to how much funding its grantees get.

Basically, under these assumptions, Open Phil grantees are either most cost-effective on the margin and in expectation, or Open Phil responds by granting less to those grantees and spending that funding on the next best marginal opportunities, which may include the EAIF itself. If Open Phil thought another dollar to EAIF would have been better than the last dollar to one of its other grantees, it would have granted that dollar to the EAIF instead, and vice versa. The "vice versa", that EAIF is at least as good on the margin as Open Phil's last dollar to its other grantees, seems to contradict the conjunction of 1 (Open Phil's judgement) and 2 (Open Phil believes EAIF grantees are less cost-effective than Open Phil grantees).

So, under these same assumptions, individual donors who donate to the EAIF risk doing worse than Open Phil, since the EAIF has ruled out contributing to opportunities that are better in expectation than their actual EAIF grantees.

 

All of the above assumptions seem plausibly wrong. That being said, Open Phil could in principle avoid subpoints 2.1, 2.2 and 2.3 by just deferring more of its grantmaking to the EAIF.

Note that this doesn't require the EAIF managers to have worse judgement than Open Phil. The point is that Open Phil starts with the most promising opportunities it can, and the EAIF does not. You could think of Open Phil and the EAIF as one organization, and you can do better by topping up the best opportunities for which they left room for other donors than by adding to the marginal opportunities.

Comment by MichaelStJules on Can/should we automate most human decisions, pre-AGI? · 2021-12-26T07:53:58.748Z · EA · GW

Couldn't automating most human decisions before AGI make AGI catastrophes more likely when AGI does come? We'll trust AI more and would be more likely to use it in more applications, or give it more options to break through.

Or, maybe with more work with pre-AGI AI, we'll trust AI less and work harder on security, which could reduce AI risk overall?

Comment by MichaelStJules on EA Infrastructure Fund: May–August 2021 grant recommendations · 2021-12-26T05:09:14.485Z · EA · GW
  • Due to an increase in the number of high-quality applications, we believe that grants from Open Philanthropy will be crucial for making sure that all sufficiently impactful projects in the EA infrastructure space can be funded. However, if grantseekers received funding from both Open Philanthropy and the EAIF, this could result in a total grant larger than deliberately chosen by Open Philanthropy. It could also duplicate effort between the funders, and grants to larger organizations tend to be outside our wheelhouse.
    • On the other hand, grantees with large budgets would ideally be supported by multiple funders, each contributing roughly ‘their fair share’.
  • Our tentative policy for responding to this challenge is to adopt a heuristic: by default, the EAIF will not fund organizations that are Open Philanthropy grantees and that plan to apply for renewed funding from Open Philanthropy in the future.
    • However, we will consider exceptions: if your organization is an Open Philanthropy grantee, please explain in your EAIF application why your funding request can’t be covered by past or future Open Philanthropy grants. Valid reasons can include unanticipated and time-sensitive opportunities that require a small-to-medium grant with a fast turnaround, or funding requests restricted to a different purpose than the activities supported by Open Philanthropy.

It seems like it would be better to decide this more on an individual basis (beyond the exceptions), depending on the exact reasons why Open Phil didn't fund them further, which you could ask them for (assuming this doesn't take too much of everyone's time). Besides only wanting to contribute 'their fair share' (donor coordination), they may also want to reduce (direct) dependence on Open Phil and have others vet these opportunities semi-independently. The organizations for which those were the only reasons Open Phil didn't fund them more are plausibly the best ones to donate marginal funds to even after Open Phil grants, and ruling them out could mean individual donors can do better by donating to them than to the EAIF. Of course, Open Phil also might not be aware of many of EAIF's grantees at all (or able to donate to them for various reasons), or Open Phil could make wrong decisions to not fund EAIF grantees it was aware of, so EAIF could therefore beat Open Phil by funding them.

 

For individuals supported by Open Philanthropy, e.g. through their Technology Policy Fellowship, AI Fellowship, or Early-Career Funding, there is no change. They continue to remain eligible for EAIF funding as before.

Is the different treatment here primarily because "grants to larger organizations tend to be outside our wheelhouse"? It seems like Open Phil should be less hesitant to fully fund these, because

  1. small EA donors are less likely to notice these opportunities,
  2. there's no public support test to maintain charitable status (it's not a charity at all), and
  3. leaving room for more funding here so that others have to re-vet many small opportunities is less efficient than having others re-vet fewer large opportunities.
Comment by MichaelStJules on EA Infrastructure Fund: May–August 2021 grant recommendations · 2021-12-25T23:52:34.376Z · EA · GW

I think it would have been a bit neater, from a funder perspective, if the longtermist/animals/welfare-specific parts would have been funded instead by those respective funds. I feel pretty mixed about having them here, because I'd expect it to make donations less promising for donors of any of the three preferences/beliefs.

 

+1, although I can see some as pretty borderline, e.g. a seminar or course on longtermism or another cause is definitely still community building, and can bring in more community builders who might do broader EA community building. Brian Tan, Shen Javier, and AJ Sunglao ($11,000) is cause-specific (mental health), but doesn't really fit in the other funds (not that you've suggested they don't fit here). Funding work that supports multiple groups or unaffiliated individuals within an area that falls entirely under the scope of a single fund seems borderline, too.

Comment by MichaelStJules on EA Infrastructure Fund: May–August 2021 grant recommendations · 2021-12-25T08:26:36.962Z · EA · GW

I agree about the laptop price, but I think external monitors should only cost about $200 each, from a quick Google search. Seems like it should have been <$3K total.

Comment by MichaelStJules on Reasons and Persons: Watch theories eat themselves · 2021-12-25T04:58:04.747Z · EA · GW

At the end of the link, since this post doesn't cover the whole book:

So that’s part one of the book: Parfit breaks morality. In part two he will wield time for even more breakage, with gambits like “if time is an illusion, then…” Part three is the good stuff—breaking the idea of personal identity (coming soon).

Comment by MichaelStJules on Response to Recent Criticisms of Longtermism · 2021-12-24T22:12:28.462Z · EA · GW

Ok, I don't find this particularly useful to discuss further, but I think your interpretations of his words are pretty uncharitable here. He could have been clearer/more explicit, and this could prevent misinterpretation, including by the wider audience of people reading his essays.

EDIT: Having read more of his post on LW, it does often seem like either he thinks longtermists are committed to assigning positive value to the creation of new people, or that this is just the kind of longtermism he takes issue with, and it's not always clear which, although I would still lean towards the second interpretation, given everything he wrote.

In one of the articles, he claims that longtermism can be "analys[ed]" (i.e. logically entails) "a moral view closely associated with what philosophers call 'total utilitarianism'."

This seems overly literal, and conflicts with other things he wrote (which I've quoted previously, and also in the new post on LW).

" And in his reply to Avital, he writes that "an integral component" of the type of longtermism that he criticized in that article is "total impersonalist utilitarianism".

He wrote:

As for the qualifier, I later make the case that an integral component of the sort of longtermism that arises from Bostrom (et al.)’s view is the deeply alienating moral theory of total impersonalist utilitarianism.

That means he's criticizing a specific sort of longtermism, not the minimal abstract longtermist view, so this does not mean he's claiming longtermism is committed to total utilitarianism. He also wrote:

Second, it does not matter much whether Bostrom is a consequentialist; I am, once again, criticizing the positions articulated by Bostrom and others, and these positions have important similarities with forms of consequentialism like total impersonalist utilitarianism.

Again, if he thought longtermism was literally committed to consequentialism or total utilitarianism, he would have said so here, rather than speaking about specific positions and merely pointing out similarities.

He also wrote:

Indeed, I would refer to myself as a "longtermist," but not the sort that could provide reasons to nuke Germany (as in the excellent example given by Olle Häggström), reasons based on claims made by, e.g., Bostrom.

Given that he seems to have person-affecting views, this means he does not think longtermism is committed to totalism/impersonalism or similar views.

 

So it looks like the only role the "closely" qualifier plays is to note that the type of total utilitarianism to which he believes longtermism is committed is impersonalist in nature.

Total utilitarianism is already impersonalist, from my understanding, so to read "moral view closely associated with what philosophers call 'total utilitarianism'" as meaning "total impersonalist utilitarianism", I think you have to assume he didn't realize (or didn't think) that total utilitarianism and total impersonalist utilitarianism are the same view. My guess is that he only added the "impersonalist" to emphasize the fact that the theory is impersonalist.

Comment by MichaelStJules on Response to Recent Criticisms of Longtermism · 2021-12-24T21:30:35.937Z · EA · GW

The link isn't working for me.

Comment by MichaelStJules on Response to Recent Criticisms of Longtermism · 2021-12-24T09:56:38.363Z · EA · GW

a series of articles in which the author assumes such a commitment.

 

As I mentioned in a top-level comment on this post, I don't think this is actually true. He never claims so outright. The Current Affairs piece doesn't use the word "utilitarian" at all, and just refers to totalist arguments made for longtermism, which are some of the most common ones. His wording from the Aeon piece, which I've bolded here to emphasize, also suggests otherwise:

To understand the argument, let’s first unpack what longtermists mean by our ‘longterm potential’, an expression that I have so far used without defining. We can analyse this concept into three main components: transhumanism, space expansionism, and a moral view closely associated with what philosophers call ‘total utilitarianism’.

I don't think he would have written "closely associated" if he thought longtermism and longtermists were necessarily committed to total utilitarianism.

This leads to the third component: total utilitarianism, which I will refer to as ‘utilitarianism’ for short. Although some longtermists insist that they aren’t utilitarians, we should right away note that this is mostly a smoke-and-mirrors act to deflect criticisms that longtermism – and, more generally, the effective altruism (EA) movement from which it emerged – is nothing more than utilitarianism repackaged. The fact is that the EA movement is deeply utilitarian, at least in practice, and indeed, before it decided upon a name, the movement’s early members, including Ord, seriously considered calling it the ‘effective utilitarian community’.

The "utilitarianism repackaged" article explicitly distinguishes EA and utilitarianism, but points out what they share, and argues that criticisms of EA based on criticisms of utilitarianism are therefore fair because of what they share. Similarly, Dr. David Mathers never actually claimed longtermism is committed total utilitarian, he only extended a critique of total utilitarianism to longtermism, which responds to one of the main arguments made for longtermism.

Longtermism is also not just the ethical view that some of the primary determinants of what we should do are the consequences on the far future (or similar). It's defended in certain ways (often totalist arguments), it has an associated community and practice, and identifying as a longtermist means associating with those, too, and possibly promoting them. The community and practice are shaped largely by totalist (or similar) views. Extending critiques of total utilitarianism to longtermism seems fair to me, even if they don't generalize to all longtermist views.

Comment by MichaelStJules on Response to Recent Criticisms of Longtermism · 2021-12-23T23:02:32.666Z · EA · GW

First, longtermism is not committed to total utilitarianism.

I think this is not a very good way to dismiss the objection, given the views actual longtermists hold and how longtermism looks in practice today (a point Torres makes). I expect that most longtermists prioritize reducing extinction risks, and the most popular defences I'm aware of in the community relate to lost potential, the terminal value from those who would otherwise exist, whether or not it's aggregated linearly as in the total view. If someone prioritizes reducing extinction risk primarily because of the deaths in an extinction event, then they aren't doing it primarily because of a longtermist view; they just happen to share a priority. I think that pretty much leaves the remaining longtermist defences of extinction risk reduction as a) our descendants' potential to help others (e.g. cosmic rescue missions), and b) replacing other populations who would be worse off, but then it's not obvious reducing extinction risks is the best way to accomplish these things, especially without doing more harm than good overall, given the possibility of s-risks, incidental or agential (especially via conflict).

The critique 'it's just obviously more important to save a life than to bring a new one into existence' applies to extinction risk-focused longtermism pretty generally, I think, with some exceptions. Of course, the critique doesn't apply to all longtermist views, all extinction risk-focused views, or even necessarily the views of longtermists who happen to focus on reducing extinction risk (or work that happens to reduce extinction risk).

 

Second, population ethics is notoriously difficult, and all views have extremely counterintuitive implications. To assess the plausibility of total utilitarianism—to which longtermism is not committed—, you need to do the hard work of engaging with the relevant literature and arguments. Epithets like "genocidal" and "white supremacist" are not a good substitute for that engagement. [EDIT: I hope it was clear that by "you", I didn't mean "you, Dr Mathers".]

This is fair, although Torres did in fact engage with the literature a little, but only to support his criticism of longtermism and total utilitarianism; he didn't engage with criticisms of other views, so it's not at all a fair representation of the debate.

 

If you think you have valid objections to longtermism, I would be interested in reading about them. But I'd encourage you to write a separate post or "shortform" comment, rather than continuing the discussion here, unless they are directly related to the content of the articles to which Avital was responding.

I think his comment is directly related to the content of the articles and the OP here, which discuss total utilitarianism, and the critique he's raising is one of the main critiques in one of Torres' pieces. I think this is a good place for this kind of discussion, although a separate post might be good, too, to get into the weeds.

Comment by MichaelStJules on EA Internships Board Now Live! · 2021-12-20T21:07:58.936Z · EA · GW

Awesome!

Some other organizations worth sharing on your website and/or getting their job board entries from are:

Comment by MichaelStJules on Why do you find the Repugnant Conclusion repugnant? · 2021-12-20T19:41:54.104Z · EA · GW

(Another answer...)

In humans, fertility rates have been declining while average quality of life has been increasing. Considering only human life until now, the RC might suggest things would have been better had fertility rates and average quality of life remained constant, since we'd have far more people with lives worth living. It can undermine the story of human progress, and suggest past trajectories would have been better.

We could also ask whether lifting people out of poverty is good, in case it would lead to lower populations. In general, as incomes increase, people have more access to contraceptives and other family planning services, even if we aren't directly funding such things. (Life-saving interventions would likely not lead to lower populations than otherwise, and would likely lead to higher ones at least in some places, according to research by David Roodman for GiveWell (GiveWell blog post).)

From https://ourworldindata.org/future-population-growth

https://en.wikipedia.org/wiki/List_of_countries_by_population_growth_rate

https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependencies_by_total_fertility_rate

Comment by MichaelStJules on Against Negative Utilitarianism · 2021-12-20T03:59:57.078Z · EA · GW

I was sympathetic to views like rank-discounted (negative) utilitarianism, but not since seeing the paper on the convergence with egoism, and I haven't found a satisfactory way around it. Tentatively, I lean towards negative prioritarianism/utilitarianism or negative lexical threshold prioritarianism/utilitarianism (but still strictly negative, so no positive welfare), or something similar, maybe with some preference-affecting elements.

Comment by MichaelStJules on Against Negative Utilitarianism · 2021-12-19T23:47:47.410Z · EA · GW

The Egyptology objection can be avoided by applying the view only to current and future (including potential) people, or only to people otherwise affected by your choices. Doing the latter can also avoid objections based on far away populations living at the same time or in the future, too, and reduce (but maybe not eliminate) the convergence to egoism and maximin. However, I think that would also require giving up the independence of irrelevant alternatives (like person-affecting views often do), so that which of two options is best can depend on what other options are available. For what it's worth, I don't find this counterintuitive.

if it has lexicality everywhere that seems especially counterintuitive--if I understand this every single type of suffering can't be outweighed by large amounts of smaller amounts of suffering. 

It seems intuitive to me at least for sufficiently distant welfare levels, although it's a bit weird for very similar welfare levels. If welfare were discrete, and the gaps between welfare levels were large enough (which seems probably false), then this wouldn't be weird to me at all.

Comment by MichaelStJules on Why do you find the Repugnant Conclusion repugnant? · 2021-12-19T21:45:29.077Z · EA · GW

I thought my first answer already did what you're asking for, and it has (right now) the most upvotes, which may reflect endorsement. Are you looking for something more concrete or that isn't tied to people who would exist anyway being worse off? I added another answer.

The ways to avoid the RC, AFAIK, should fall under at least one of the following, and so intuitions/thought experiments should match:

  1. Have some kind of threshold (a critical level, a sufficientarian threshold or a lexical threshold), and marginally good lives fall below it while the very good lives are above. It could be a "vague" threshold.
  2. Non-additive (possibly aggregating in some other way, e.g. with decreasing marginal returns to additional people, average utilitarianism, maximin or softer versions like rank-discounted utilitarianism, which strongly prioritize the worst off, or views strongly prioritizing better lives, like geometrism).
  3. Person-affecting.
  4. Carry in other assumptions/values and appeal to them, e.g. more overall bad in the larger population.

See also:

https://plato.stanford.edu/entries/repugnant-conclusion/#EigWayDeaRepCon

Comment by MichaelStJules on Why do you find the Repugnant Conclusion repugnant? · 2021-12-19T20:48:53.345Z · EA · GW

I think the killing would probably explain the intuitive repugnance of RC2 most of the time, though.

Comment by MichaelStJules on Why do you find the Repugnant Conclusion repugnant? · 2021-12-19T19:11:10.386Z · EA · GW

Adding another answer, although I think it's basically pretty similar to my first.

I can imagine myself behind a veil of ignorance, comparing the two populations, even on a small scale, e.g. 2 vs 3 people. In the smaller population with higher average welfare, compared to the larger one with lower average welfare, I imagine myself either

  1. as having higher welfare and finding that better, or
  2. never existing at all and not caring about that fact, because I wouldn't be around to ever care.

So, overall, the smaller population seems better.

 

I can make it more concrete, too: optimal family size. A small-scale RC could imply that the optimal family size is larger than the parents and older siblings would prefer (ignoring indirect concerns), and so the parents should have another child even if it means they and their existing children would be worse off and would regret it. That seems wrong to me, because if those extra children are not born, they won't be wronged or made worse off, but if they are born, others will be worse off than otherwise.

In the long run, everyone would become contingent people, too, but then you can apply the same kind of veil of ignorance intuition pump. People can still think a world where family sizes are smaller would have been better, even if they know they wouldn't have personally existed, since they imagine themselves either

  1. as someone else (a "counterpart") in that other world, and being better off, or
  2. not existing at all (as an "extra" person) in their own world, which doesn't bother them, since they wouldn't have ever been around in the other world to be bothered.

Naively, at least, this seems to have illiberal implications for contraceptives, abortion, etc.

 

There's also an average utilitarian veil of ignorance intuition pump: imagine yourself as a random person in each of the possible worlds, and notice that your welfare would be higher in expectation in the world with fewer people, and that seems better. (I personally distrust this intuition pump, since average utilitarianism has other implications that seem very wrong to me.)

Comment by MichaelStJules on Against Negative Utilitarianism · 2021-12-19T17:59:22.510Z · EA · GW

Not in the specific example I'm thinking of, because I'm imagining either the one set of utilities happening or the other, but not both (and ignoring other unaffected utilities, but the argument is basically the same if you count them).

Comment by MichaelStJules on Not all x-risk is the same: implications of non-human-descendants · 2021-12-18T22:08:29.348Z · EA · GW

You might find Christian Tarsney's model useful for this. See also this EA Forum post on it and the discussion.

Comment by MichaelStJules on Not all x-risk is the same: implications of non-human-descendants · 2021-12-18T22:05:23.825Z · EA · GW

Interesting post!

I think if mammals were to evolve greater intelligence and were able to build and transfer knowledge like us, enough to start colonizing space (or have their descendants do so), it's quite likely they would end up with values similar to ours (or within the normal range of human values), since they already have emotional empathy (emotional contagion + non-reciprocal altruism), and they would likely have to become more social and cooperative along the way.

Plausibly (although I really don't know) other animals could become intelligent, social and cooperative without developing emotional empathy, and that could be bad. They could be selfish and Machiavellian. That being said, becoming social and cooperative may coincide with the development of emotional empathy and inequity aversion, since these will promote group fitness.

I'm not sure what other animals have emotional empathy. I know chickens have emotional empathy for their young, but this might not extend to other chickens. Parenting animals probably generally have emotional empathy for their offspring. I don't know how far the emotional empathy of corvids, parrots and octopuses would extend, and these are some of the next smartest non-mammalian animals.

Comment by MichaelStJules on Against Negative Utilitarianism · 2021-12-18T21:47:33.865Z · EA · GW

For a given utility $u < 0$, adding more individuals or experiences with $u$ as their utility has a marginal contribution to the total that decreases towards 0 with the number of these additional individuals or experiences, and while the marginal contribution never actually reaches 0, it decreases fast enough towards 0 (at a geometric rate, like $r^k$ for the $k$-th addition, with $0 < r < 1$) that the contribution of even infinitely* many of them is finite. Since it is finite, it can be outweighed. So, even infinitely many pinpricks are only finitely bad, and some large enough finite number of identical worse harms must be worse overall (although still finitely bad). In fact the same is true for any two bads with different utilities: some large enough but finite number of the worse harm will outweigh infinitely many of the lesser harm. So, this means you get this kind of weak lexicality everywhere, and every bad is weakly lexically worse than any lesser bad. No thresholds are needed.

In mathematical terms, for any utilities $u_1 < u_2 < 0$, there is some (finite) $N$ large enough that

$$\sum_{k=1}^{N} r^k u_1 < \sum_{k=1}^{\infty} r^k u_2,$$

because the limit (or infimum) in $N$ of the left-hand side of the inequality is lower than the right-hand side and decreasing, so it has to eventually be lower for some finite $N$.

 

*countably
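
As a concrete illustration (my own toy example; the particular values of r, u1 and u2 are arbitrary, and I'm assuming the geometric rank-weighting described above):

  # Find the smallest N such that N copies of the worse harm u1, each weighted
  # geometrically by rank, are together worse than infinitely many copies of the
  # lesser harm u2. Infinite sum: sum_{k>=1} r^k * u2 = u2 * r / (1 - r).
  r, u1, u2 = 0.9, -2.0, -1.0             # u1 < u2 < 0: u1 is the worse harm
  infinitely_many_lesser = u2 * r / (1 - r)
  total, N = 0.0, 0
  while total >= infinitely_many_lesser:  # both totals are negative
      N += 1
      total += r**N * u1                  # add the N-th copy of the worse harm
  print(N, total, infinitely_many_lesser) # N = 7 here: seven u1's outweigh infinitely many u2's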

Comment by MichaelStJules on Why do you find the Repugnant Conclusion repugnant? · 2021-12-18T08:02:04.613Z · EA · GW

I never downvoted his comments, and have (just now) instead upvoted them.

However, I would interpret all of Pablo's points in his response not just as requesting clarification but also as objections to my answer, in a post that's only asking for people's reasons to object to the RC and is explicitly not about technical philosophical arguments (although it's not clear this should extend to replies to answers), just basic intuitions.

I don't personally mind, and these are interesting points to engage with. However, I can imagine others finding it too intimidating/adversarial/argumentative.

Comment by MichaelStJules on Why do you find the Repugnant Conclusion repugnant? · 2021-12-18T07:39:21.394Z · EA · GW

(FWIW, I never downvoted your comments and have upvoted them instead, and I appreciate the engagement and thoughtful questions/pushback, since it helps me make my own views clearer. Since I spent several hours on this thread, I might not respond quickly or at all to further comments.)

The question is, then, simply: "If bringing about Z sacrifices people in A, why doesn't bringing about A sacrifice people in Z?" You say that you'd be sacrificing someone "even if they would be far better off than the first person", which seems to commit you to the claim that you would indeed be sacrificing people in Z by bringing about A.

Sorry, I tried to respond to that in an edit you must have missed; I realized only after posting my reply that I hadn't addressed it. In short, a wide person-affecting view means that Z would involve "sacrifice" and A would not, if both populations are completely disjoint and contingent, roughly because the people in A have worse-off "counterparts" in Z, and the excess positive-welfare people in Z without counterparts don't compensate for this. No one in Z is better off than anyone in A, so no one in Z is better off than their counterpart in A, so there can't be any sacrifice in a "wide" way in this direction. Under a wide view, the Nonidentity problem would similarly involve "sacrifice" in only one direction.

(If all the people in Z already exist, and none of the people in A exist, then going from Z to A by killing everyone in Z could indeed mean "sacrificing" the people in Z for those in A, under some person-affecting views, and be bad under some such views.

Under a narrow view (instead of a wide one), with disjoint contingent populations, we'd be indifferent between A and Z, or they'd be incomparable, and both or neither would involve "sacrifice".)

 

 

On value receptacles, here's a quote by Frick (on his website), from a paper in which he defends the procreation asymmetry:

For another, it feeds a common criticism of utilitarianism, namely that it treats people as fungible and views them in a quasi-instrumental fashion. Instrumental valuing is an attitude that we have towards particulars. However, to value something instrumentally is to value it, in essence, for its causal properties. But these same causal properties could just as well be instantiated by some other particular thing. Hence, insofar as a particular entity is valued only instrumentally, it is regarded as fungible. Similarly, a teleological view which regards our welfare-related reasons as purely state-regarding can be accused of taking a quasi-instrumental approach towards people. It views them as fungible receptacles for well-being, not as mattering qua individuals.29 Totalist utilitarianism, it is often said, does not take persons sufficiently seriously. By treating the moral significance of persons and their well-being as derivative of their contribution to valuable states of affairs, it reverses what strikes most of us as the correct order of dependence.30 Human wellbeing matters because people matter – not vice versa.

I haven't thought much about this particular way of framing the receptacle objection, and what I have in mind is basically what Frick wrote later: 

any reasons to confer well-being on a person are conditional on the fact of her existence.

This is a bit vague: what do we mean by "conditional"? But there are plausible interpretations that symmetric person-affecting views, asymmetric person-affecting views and negative axiologies satisfy, while the total view, reverse asymmetric person-affecting views and positive axiologies don't seem to satisfy any such plausible interpretation (or satisfy fewer and/or less plausible ones).

I have two ways in mind that seem compatible with the procreation asymmetry, but not the total view:

First, in line with my linked shortform comment about the asymmetry, a person's interests should only direct us from outcomes in which they (the person, or the given interests) exist or will exist to the same or other outcomes (possibly including outcomes in which they don't exist), and all reasons with regard to a given person are of this form. I think this is basically an actualist argument (which Frick discusses and objects to in his paper). If reasons regarding an individual A directed us from an outcome in which they don't exist towards an outcome in which they do exist, that would not seem conditional on A's existence. It's more "conditional" if the reasons regarding a given outcome come from that outcome rather than from other outcomes.

Second, there's Frick's approach. Here's a simplified evaluative version: 

All of our reasons with regards to persons should be of the following form:

It is in one way better that the following is satisfied: if person A exists, then P(A),

where P is a predicate that depends terminally only on A's interests.

Setting P(A)="A has a life worth living" would give us reason to prevent lives not worth living. Plus, there's no P(A) we could use that would imply that a given world with A is in one way better (due to the statement with P(A)) than a given world without A. So, this is compatible with the procreation asymmetry, but not the total view.

It could be "wide" and solve the Nonidentity problem: if B would be better off than A, we can find a P that B would satisfy but A would not, so we would have more reason against A existing than against B existing.

It's also compatible with antifrustrationism and negative utilitarianism in a few ways:

  1. If we apply it to preferences instead of whole persons, with predicates like P(A)="A is satisfied"
  2. If we use predicates like "P(A)=if A has interest y, then y is satisfied at least to degree d"
  3. If we use predicates like "P(A)=A has welfare at least w", allowing more positive welfare to be better than less for an existing individual, but in a perfectionist way, so that anything worse than the best is worse than nonexistence.

I think part of what follows in Frick's paper is about applying/extending this in a way that isn't basically antinatalist.
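As a concrete (and admittedly crude) way to see how reasons of the form "if person A exists, then P(A)" generate the asymmetry, here's a minimal sketch under my own assumptions (the outcome representation, the welfare-based predicate and all names are made up for illustration; this is not Frick's formalism):

```python
# Minimal sketch: evaluate outcomes only via conditional reasons of the form
# "if person A exists, then P(A)". The conditional is vacuously satisfied when
# A doesn't exist, so adding a person can never make things better in this
# respect, but adding a person who fails P can make things worse.

def life_worth_living(welfare):
    """Example predicate P(A): A has a life worth living."""
    return welfare >= 0

def count_violations(outcome, predicate=life_worth_living):
    """Count existing people in `outcome` (a dict of name -> welfare) who fail P.

    Fewer violations is better in this one respect; people absent from the
    dict don't exist, so they contribute no violations.
    """
    return sum(1 for welfare in outcome.values() if not predicate(welfare))

without_A        = {"B": 5}             # A is never created
with_happy_A     = {"B": 5, "A": 10}    # A is created with a good life
with_miserable_A = {"B": 5, "A": -10}   # A is created with a bad life

print(count_violations(without_A))         # 0
print(count_violations(with_happy_A))      # 0 -> no better than without_A
print(count_violations(with_miserable_A))  # 1 -> worse than without_A
```

The point of the sketch is just that no choice of P can make the outcome with a happy A beat the outcome without A on this score, while the outcome with a miserable A can still lose to it, which is the procreation asymmetry.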

 

For a negative utilitarian, it seems that whether the assumption is made is in fact crucial, since the "muzak and potatoes" life is as good as it can be (it lacks any unpleasantness) whereas other lives could contain huge amounts of suffering.

Ya, this seems right to me.

 

My point is that the phenomenology of the intuitions at the interpersonal and intrapersonal levels is essentially the same, which strongly suggests that the same factor is triggering those intuitions in both cases.

What do you mean by "the phenomenology of the intuitions" here?

One important difference between the interpersonal and intrapersonal cases is that in the intrapersonal case, people may (or may not!) prefer to live much longer overall, even sacrificing their other interests. It's not clear they're actually worse off overall or even at each moment in something that might "look" like Z, once we take the preference(s) for Z over A into account. We might be miscalculating the utilities before doing so. For something similar to happen in the interpersonal case, the people in A would have to prefer Z, and then similarly, Z wouldn't seem so objectionable.

 

Although I'm not sure I'm understanding you correctly, you then seem to be suggesting that your views can in fact vindicate the claim that you'd be sacrificing your future selves or treating them as value receptacles. Is this what you are claiming? It would help me if you describe what you yourself believe, as opposed to discussing the implications of a wide variety of views.

It's more about my interests/preferences than about my future selves, and about not sacrificing them or treating them as value receptacles. I think respect for autonomy/preferences requires not treating our preferences as mere value receptacles that you can just make more of to get more value and make things go better, and this can rule out both the interpersonal RC and the intrapersonal RC. This is in principle, ignoring other reasons, indirect effects, etc., so not necessarily in practice.

I have moral uncertainty and I'm sympathetic to multiple views, but what they have in common is that they deny the existence of terminal goods (things whose creation is good in itself, or that can make up for bads or for other things that matter going worse than they otherwise would) and they recognize the existence of terminal bads. They're all versions of negative prioritarianism/utilitarianism or very similar.

Comment by MichaelStJules on What are the most underfunded EA organizations? · 2021-12-18T04:19:59.736Z · EA · GW

Are working on cause areas that are important from a longtermist/negative utilitarian perspective

If you're only looking for organizations that look good from a longtermist negative utilitarian perspective, I would say so in the title and early in the post. If not, longtermism, NU, CRS and QRI make more sense in an answer (or answers) than in the question itself. Including them in the question singles them out, which some might read as promotion that isn't upfront, and they might assume you posted the question just to promote longtermist NU orgs, or these orgs in particular.

Comment by MichaelStJules on Why do you find the Repugnant Conclusion repugnant? · 2021-12-17T19:36:57.791Z · EA · GW

(I've made a bunch of edits to the following comment within 2 hours of posting it.)

You say that "sacrificing the welfare of just one person so that another could be born... seems wrong". But the Repugnant Conclusion is a claim about the relative value of two possible populations, neither of which is assumed to be actual. So I don't understand how you reach the conclusion that, in judging that one of these populations is more valuable, by bringing it about you'd be "sacrificing" the welfare of the possible people in the other population. The situation seems perfectly symmetrical, so either you are "sacrificing" people no matter what you do, or (what seems more plausible) talk of "sacrificing" doesn't really make sense in this context.

If you're a consequentialist whose views are transitive, complete and satisfy the independence of irrelevant alternatives, then the RC implies what I wrote (ignoring other effects and opportunity costs). The situation is not necessarily symmetrical in practice if you hold person-affecting views, which typically require rejecting the independence of irrelevant alternatives. I'd recommend the "wide, hard view" in The Asymmetry, Uncertainty, and the Long Term by Teruji Thomas as the view closest to common sense (that I'm aware of) that satisfies the intuitions in my answer above; the talk is somewhat accessible, although the paper can get pretty technical. This view allows future contingent good lives to make up for (but not outweigh) future contingent bad lives, but, as a "hard" view, not to make up for losses to "necessary" people, who would exist regardless. Because it's "wide", it "solves" the Nonidentity problem. The wide version would still reject the RC even if we're choosing between two disjoint contingent populations, I think because "excess" (in number) contingent people with good lives wouldn't count in this particular pairwise comparison. Another way to think about it would be to match counterparts across worlds and then talk about sacrifices as the differences in welfare between individuals and their counterparts, although I'm not sure the view entails something equivalent to this.
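Here's a rough sketch of the counterpart-matching way of thinking about it (my own toy construction, not something from Thomas's paper, and I'm not claiming the view reduces to this; the rank-based matching rule, the function name and the numbers are all assumptions):

```python
# Toy comparison of two disjoint contingent populations by matching
# counterparts by welfare rank. Only matched pairs count; "excess" unmatched
# people with good lives don't count in this pairwise comparison.

def compare_by_counterparts(world_x, world_y):
    """Net welfare difference (x - y) summed over matched counterparts.

    Populations are lists of welfare levels, matched best-to-best by rank;
    people beyond the shorter list are left unmatched. Positive favours
    world_x, negative favours world_y.
    """
    x_sorted = sorted(world_x, reverse=True)
    y_sorted = sorted(world_y, reverse=True)
    return sum(x - y for x, y in zip(x_sorted, y_sorted))  # zip drops extras

# RC-style example: A has a few great lives, Z has many barely-good lives.
world_A = [100, 100, 100]
world_Z = [1] * 1000

print(compare_by_counterparts(world_A, world_Z))  # 297 > 0: favours A
# Each matched person in Z is worse off than their counterpart in A (a
# "sacrifice"), and the 997 extra barely-good lives in Z don't count here.
```

On a matching like this, going to Z makes every matched person worse off than their counterpart in A, while going to A leaves no one worse off than theirs, which is the asymmetry in "sacrifice" I had in mind above.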

My own views are much more asymmetric than the views in Thomas's work, and I lean towards negative utilitarianism, since I don't think future contingent good lives can make up for future contingent bad lives at all.

How are you not treating individuals as mere vessels/receptacles for value when, in deciding between two worlds both of which contain suffering but differ in the number of people they contain, you bring about the world that contains less suffering? What do you tell the person whom you subject to a life of misery so that some other person, who would be even more miserable, is not born?

I tell them that I did it to prevent a greater harm that would otherwise have been experienced. The benefit forgone when someone is never born would not be experienced by that non-existent person. I have some short writing on the asymmetry here that I think explains this better.

You have said that you don't share the intuition that positive welfare has intrinsic value. But lacking this intuition, how can you compare the value of two worlds that differ only in how much positive welfare they contain?

Lives most people consider good overall can still involve disappointment or suffering, so the two worlds in the RC don't necessarily differ only in how much positive welfare they contain, depending on how exactly we're imagining them. If we're only talking about positive welfare and no negative welfare, preferences aren't more frustrated or less satisfied than otherwise, and everyone is perfectly content in the "repugnant" world, then I wouldn't object. If I had to make a personal sacrifice to bring someone into existence, I would probably not be perfectly content, possibly unless I thought it was the right thing to do (although I might feel some dissatisfaction either way, and less if I'm doing what I think is the right thing).

Plus, it's worth sharing my more general objection regardless of my denial of positive welfare, since it may reflect others' views, and they can upvote or comment to endorse it if they agree.

The Repugnant Conclusion arises also at the intrapersonal level, so it would be very surprising if the reason we find it counterintuitive, insofar as we do, at the interpersonal level has to do with factors—such as treating people as mere receptacles of value or sacrificing people—that are absent at the intrapersonal level.

Assuming intrapersonal and interpersonal tradeoffs should be treated the same (ignoring indirect effects), yes. It's not obvious that they should be, and I think common sense ethics does not treat them the same.

But even then, the intrapersonal version (+welfarist consequentialism) also violates autonomy and means I shouldn't do whatever I want in my world, so my objection is similar. I think "preference-affecting" views (person-affecting views applied at the level of individual preferences/desires, especially Thomas's "hard, wide view") would likely fare better here for structurally similar reasons, so the "solution" could be similar or even the same.

Symmetric total preference utilitarianism and average preference utilitarianism would imply that it's good for a person to create enough sufficiently strong satisfied preferences in them, even if it means violating their consent and the preferences they already have or will have. Classical utilitarianism implies involuntary wireheading (done right) is good for a person. Preference-affecting views and antifrustrationism (negative preference utilitarianism) would only endorse violating consent or preferences for a person's own sake in ways that depend on preferences they would have otherwise or anyway, so you violate consent/some preferences to respect others (although I think antifrustrationism does worse than asymmetric preference-affecting views for respecting preferences/consent, and deontological constraints or limiting aggregation would likely do even better).

Comment by MichaelStJules on Why do you find the Repugnant Conclusion repugnant? · 2021-12-17T17:32:58.558Z · EA · GW

I have asymmetric person-affecting intuitions, and I think the Repugnant Conclusion is a clear example of treating individuals as mere vessels/receptacles for value. Sacrificing the welfare of just one person so that another could be born — even if they would be far better off than the first person — seems wrong to me, ignoring other effects. That I could have an obligation to bring people into existence just for their own sake and at an overall personal cost seems wrong to me. The RC just seems like a worse and more extreme version of this.

In a hypothetical world where I'm the only one around, I feel I should basically be allowed to do whatever I want, as long as no one else will come into existence, and I have no reason to bring anyone into existence. In my world, I should be able to do whatever I want. If no one is born, I'm not harming anyone else or failing in my obligations to others, because they don't and won't exist to experience harm (or an absence of benefit, or worse benefits).

That I should make sacrifices to prevent people with bad lives from being born, or to help future people who would exist anyway (including by ensuring better-off people are born instead of worse-off people), does seem right to me. If and because these people will exist, I can harm them or fail to prevent harm to them, and that would be bad.

I have some more writing on the asymmetry here.