Posts

Physical theories of consciousness reduce to panpsychism 2020-05-07T05:04:39.502Z · score: 26 (16 votes)
Replaceability with differing priorities 2020-03-08T06:59:09.710Z · score: 17 (9 votes)
Biases in our estimates of Scale, Neglectedness and Solvability? 2020-02-24T18:39:13.760Z · score: 89 (42 votes)
[Link] Assessing and Respecting Sentience After Brexit 2020-02-19T07:19:32.545Z · score: 16 (5 votes)
Changes in conditions are a priori bad for average animal welfare 2020-02-09T22:22:21.856Z · score: 16 (10 votes)
Please take the Reducing Wild-Animal Suffering Community Survey! 2020-02-03T18:53:06.309Z · score: 24 (13 votes)
What are the challenges and problems with programming law-breaking constraints into AGI? 2020-02-02T20:53:04.259Z · score: 6 (2 votes)
Should and do EA orgs consider the comparative advantages of applicants in hiring decisions? 2020-01-11T19:09:00.931Z · score: 15 (6 votes)
Should animal advocates donate now or later? A few considerations and a request for more. 2019-11-13T07:30:50.554Z · score: 19 (7 votes)
MichaelStJules's Shortform 2019-10-24T06:08:48.038Z · score: 7 (4 votes)
Conditional interests, asymmetries and EA priorities 2019-10-21T06:13:04.041Z · score: 13 (14 votes)
What are the best arguments for an exclusively hedonistic view of value? 2019-10-19T04:11:23.702Z · score: 7 (4 votes)
Defending the Procreation Asymmetry with Conditional Interests 2019-10-13T18:49:15.586Z · score: 18 (14 votes)
Ex ante prioritarianism and negative-leaning utilitarianism do not override individual interests 2019-07-04T23:56:44.330Z · score: 10 (9 votes)

Comments

Comment by michaelstjules on Examples of people who didn't get into EA in the past but made it after a few years · 2020-05-28T00:07:09.720Z · score: 2 (1 votes) · EA · GW

No

Comment by michaelstjules on How Much Leverage Should Altruists Use? · 2020-05-27T02:05:38.640Z · score: 4 (2 votes) · EA · GW

Some related posts on LessWrong:

  • The EMH Aten't Dead, especially section Modern Edges are Completely Ridiculous and the discussion of value investing and Buffett's success there. The author writes this here, though:
This might be a good time to confess that I’m currently testing a momentum (trend-following) strategy with a small part of my portfolio, which, uh… flies in the face of all of the above. It’s performed almost comically badly through the COVID-19 crash, which ought to have been a delicious punishment for my hypocrisy and hubris, except that I just so happened to over-rule the crucial decision (i.e, I got lucky). I’m also tracking the counterfactual where I stuck with the experiment: if momentum really does turn out to be a persistent anomaly in otherwise efficient markets, then that would be fascinating—either way, I’ll try to write up a review some time later this year.
Comment by michaelstjules on How Much Leverage Should Altruists Use? · 2020-05-26T21:01:26.940Z · score: 2 (1 votes) · EA · GW

(More speculation by me, good chance of being way off)

Another similar investment idea: Instead of buying a managed futures fund, buy value and momentum funds while shorting the broad market to produce net zero stock exposure, and then apply lots of leverage.

I feel like it's worth emphasizing the benefits of this more. Can't this significantly reduce the risk and volatility of your portfolio? OTOH, some of the funds you mention have only been around for a few years, and they have done really poorly, as Paul pointed out. I don't have confidence that they're well-managed.
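
To make the "net zero stock exposure" idea concrete, here's a minimal sketch of the exposure arithmetic, with hypothetical weights and leverage (my numbers, not from the post, and not a recommendation):

```python
# Hypothetical long value/momentum, short broad-market portfolio with leverage.
# All numbers are made up for illustration.
leverage = 3.0           # total leverage applied to the long-short package
long_value = 0.5         # unlevered fraction long in value funds
long_momentum = 0.5      # unlevered fraction long in momentum funds
short_market = -1.0      # short the broad market to offset the long positions

net_stock_exposure = leverage * (long_value + long_momentum + short_market)
gross_exposure = leverage * (long_value + long_momentum + abs(short_market))

print(net_stock_exposure)  # 0.0: broad market moves roughly cancel out
print(gross_exposure)      # 6.0: still heavily exposed to how value/momentum do relative to the market
```

The remaining risk is the spread between the factor funds and the broad market, which is why the volatility can be lower than a levered long-only portfolio while still being substantial if the factors underperform.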

I was looking into some momentum (and growth) ETFs, and found that several were pretty heavy in Apple, Amazon, Google, Microsoft and Facebook (not that these stocks should be avoided, but you might want to diversify more on top of investing in these ETFs). I found a few that were more diversified and performed really well over the past 5-10 years (although, as usual, past performance may not be indicative of future performance) and decently during the pandemic:

  • XMMO: Invesco S&P MidCap Momentum ETF
  • RPG: Invesco S&P 500 Pure Growth ETF (also weighs momentum)
  • SPGP: Invesco S&P 500 GARP ETF (just based on growth, not momentum, AFAIK)
  • PDP: Invesco DWA Momentum ETF

For value stocks, what about buying VOOV or Buffett's Berkshire Hathaway BRK.B? Worth keeping in mind that BRK.B has dropped ~50% during some crashes, and VOOV has only been around since 2010.

For those interested in global health and poverty, you may end up (very) correlated with Gates, Buffett and the Gates Foundation if you're investing in value ETFs and the strategies happen to lead to similar choices, and obviously if you buy BRK.B. I think most of Gates' wealth is no longer in Microsoft, but I'm not sure how much he has left in it.

Comment by michaelstjules on Examples of people who didn't get into EA in the past but made it after a few years · 2020-05-26T16:18:40.660Z · score: 6 (4 votes) · EA · GW

Thanks!

Oh, I should also add that I read and commented on several of CE's reports (commenting on the EA Forum posts, and I also read other effective animal advocacy research). I did this leading up to my first application that was rejected, but I think my recent feedback was much more useful, and I was encouraged to apply following a conversation about my feedback.

Comment by michaelstjules on Examples of people who didn't get into EA in the past but made it after a few years · 2020-05-26T10:57:56.079Z · score: 18 (9 votes) · EA · GW

Do research internships count? I just started one at Charity Entrepreneurship.

I think I might have described my history in one of your other posts/questions.

I first applied to ACE and GiveWell research internships in 2016, back when I was still new to EA, but didn't get either. The extent of my EA involvement at the time was over Facebook.

Then I studied for a master's with the intention to earn to give, got involved with my local EA group and started running it last summer, started commenting and writing on the EA Forum, and earned to give, although I hadn't made any significant donations yet. Then I applied to Charity Entrepreneurship and ACE internships in August and November/December, respectively, and didn't get either. Then I donated about 45% of my 2019 income in December, wrote an EA Forum post that won an EA Forum prize, and attended my first EA Global (the virtual one in March). I talked with someone from CE at a local EA group meetup, and my current supervisor at CE at EA Global, and I think I made decent impressions on them. Then I applied to CE's research internship last month and was accepted this time.

I imagine that if I get a full-time position in EA research, this internship will be an important contributing factor. I don't expect it to guarantee me a full-time position, though, since they're very competitive and pretty rare.

Comment by michaelstjules on Determinants of happiness in poor countries · 2020-05-24T15:20:21.852Z · score: 9 (3 votes) · EA · GW

Our World in Data has some nice graphs, using life satisfaction, all with sources.

Comment by michaelstjules on How Much Leverage Should Altruists Use? · 2020-05-23T03:53:18.721Z · score: 2 (1 votes) · EA · GW

(I'll preface by saying that I'm new to finance, so I could be very wrong.)

I think it's plausible that an isoelastic utility function in wealth is a poor fit, even for those who are risk-neutral in their altruism (and even completely impartial). I wouldn't be surprised if our actual utility functions

1. have decreasing marginal returns at low wealth (and maybe even increasing marginal returns at some levels of low wealth),

2. have roughly constant marginal returns for a while (the same rightmost derivative as in 1), at the rate of the best current donation opportunities, and

3. have decreasing marginal returns again at very high levels of wealth (maybe billions or hundreds of millions, within a few orders of magnitude of Good Ventures' funds.)

1 is because of personal risk aversion and/or better returns on self-investment than donations compared to 2, where the returns come mainly from donations. 3 is because of eventually decreasing marginal altruistic returns on donations.

I made a graph to illustrate. I think region 2 is probably much larger relative to the other regions, and 1 is probably much smaller than 3. I also think this is missing some temporal effects for 1: you need money every year to survive, not just in the long run, and donation opportunities may be better or worse in the future.
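
In lieu of the graph, here's a rough sketch of the shape I have in mind, with made-up breakpoints and rates just to make regions 1-3 concrete (and ignoring the possible increasing-returns stretch within region 1):

```python
import numpy as np

# Hypothetical piecewise marginal utility of wealth for an altruist (regions 1-3 above).
# All numbers are invented for illustration.
W1 = 50_000          # end of region 1: personal needs / self-investment
W3 = 500_000_000     # start of region 3: approaching the scale of the largest funders
DONATION_RATE = 1.0  # constant marginal value of donations in region 2 (arbitrary units)

def marginal_utility(w):
    if w < W1:
        # Region 1: decreasing marginal returns at low wealth, ending at the donation rate.
        return DONATION_RATE * (1 + (W1 - w) / W1)
    elif w < W3:
        # Region 2: roughly constant, set by the best current donation opportunities.
        return DONATION_RATE
    else:
        # Region 3: decreasing again as the best opportunities get filled.
        return DONATION_RATE * W3 / w

for w in np.logspace(3, 10, 8):
    print(f"{w:>14,.0f}  {marginal_utility(w):.3f}")
```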

For this reason and psychological reasons, it might be better to compartmentalize your wealth and investments into:

A. Short-term expenses, including costs of living, fun stuff, and maybe some unforeseen expenses (school, medical expenses, unique donation opportunities during times where your investments are doing poorly, to avoid pulling from them). This should be pretty low risk. This is what your chequing account, high-interest savings account, CDs (certificates of deposit, GICs in Canada), and maybe bonds could be for.

B. Retirement.

C. Altruistic investments and donations. Here you can take on considerable risk and use high amounts of leverage, maybe even higher than what you've recommended. I would recommend against any risks that could leave you owing a lot of money, even in the short term, enough to cause you to need to withdraw from A or B. Risk-neutral altruists can maximize expected long-run returns here, although they should discount long-run returns that fall into region 3. Because of A and B, we're past region 1, so we're either in 2 or 3. Your mathematical arguments could approximately apply, with caveats, if most of the expected gains come from staying within 2.

If you plan to buy a house, that might deserve its own category. Your time frame is usually longer than in A but shorter than in B.

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-22T21:00:49.960Z · score: 2 (1 votes) · EA · GW
But then I can make the same claim again: why should we be confident we've got the percentage of the capacity right?

I think even if we're not confident, bounds on welfare capacity can still be useful. For example, if I know that A produces X net units of good (in expectation), and B produces between Y and Z net units of good, then under risk-neutral expected value maximization, X < Y would tell me that B's better, and X > Z would tell me that A's better. The problem is where Y < X < Z. And we can build a distribution over the percentage of capacity or do a sensitivity analysis, something similar to this, say.
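
A minimal sketch of that decision rule (names and numbers are mine, purely illustrative):

```python
def compare_with_bounds(x, y, z):
    """A produces x net units of good in expectation; B produces between y and z.
    Under risk-neutral expected value maximization, bounds alone sometimes settle it."""
    assert y <= z
    if x < y:
        return "B is better"   # even B's lower bound beats A
    if x > z:
        return "A is better"   # A beats even B's upper bound
    return "indeterminate: build a distribution over B's value or do a sensitivity analysis"

print(compare_with_bounds(x=10, y=12, z=20))  # B is better
print(compare_with_bounds(x=15, y=12, z=20))  # indeterminate
```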

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-21T05:43:05.373Z · score: 3 (2 votes) · EA · GW

The original context for that comment was in a discussion where moral agency was proposed to be important, but I think you could substitute other psychological features (autonomy, intelligence, rationality, social nature, social attachments/love, etc.) for moral agency and the same argument would apply to them.

Comment by michaelstjules on How should longtermists think about eating meat? · 2020-05-21T05:38:59.477Z · score: 2 (1 votes) · EA · GW

I think the Faunalytics studies discuss this. I think why people were vegetarian/vegan in the first place is a big factor, since the recidivism rate for vegans motivated by animal protection was only about 50%. See my other comment.

Comment by michaelstjules on How should longtermists think about eating meat? · 2020-05-21T05:36:08.855Z · score: 2 (1 votes) · EA · GW

FWIW, the rate was ~50% for vegans who were motivated by animal protection, and ~70% for vegetarians (including vegans) who were motivated by animal protection, based on table 17 on p.18 here.

For vegans who were motivated by animal protection, here's the recidivism rate calculation:

The recidivism rate was about 84% for vegetarians motivated by health, who made up more than half, and 86.6% for vegetarians not motivated by animal protection. Actually, only 27% of former vegetarians and 27% of former vegans were motivated by animal protection, even though those motivated by animal protection make up 70% and 62% of current vegetarians and current vegans, respectively. Also see Tables 9 and 10.
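
To show the shape of that kind of calculation (the counts below are invented; only the 27%/62% motivation shares come from the text above), something like:

```python
# Hypothetical illustration of backing out a motivation-specific recidivism rate.
former_vegans = 1000          # hypothetical number of former vegans in a sample
current_vegans = 400          # hypothetical number of current vegans

share_former_animal = 0.27    # former vegans motivated by animal protection
share_current_animal = 0.62   # current vegans motivated by animal protection

former_animal = former_vegans * share_former_animal
current_animal = current_vegans * share_current_animal

# Recidivism rate among vegans motivated by animal protection:
recidivism = former_animal / (former_animal + current_animal)
print(f"{recidivism:.0%}")    # depends entirely on the hypothetical counts chosen
```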

I don't think it's surprising that people who go veg*n other than for animals go back to eating meat. It could be evidence of some cost, but it could also mainly be evidence that most people who go veg*n do so for reasons they eventually no longer found compelling, so even small costs would have been enough to bring them back to eating meat.

They also go over difficulties people had with their diets in that study, though.

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T19:49:20.910Z · score: 3 (2 votes) · EA · GW

I think the word "consent" might have been a somewhat poor choice, since it has more connotations than we need. Rather, the concept is closer to "bearability" or just the fact that an individual's personal preferences seem to involve lexicality, which the two articles I linked to get into. For suffering, it's when someone wants to make it stop, at any cost (or at any cost in certain kinds of experiences, e.g. any number of sufficiently mild pains, or any amount of pleasure).

There are objections to this, too, of course:

1. We have unreliable intuitions/preferences involving large numbers (e.g. a large number of pin pricks vs torture).

2. We may be trying to generalize from imagining ourselves in situations like sufficiently intense suffering in which we can't possibly be reflective or rational, so any intuitions coming out of this would be unreliable. Lexicality might happen only (perhaps by definition) when we can't possibly be reflective or rational. Furthermore, if this is the case, then this is a reason against the conjunction of trusting our own lexicality directly and not directly trusting the lexicality of nonhuman animals, including simpler ones like insects.

3. We mostly have unreliable intuitions about the kinds of intense suffering people have lexical preferences about, since few of us actually experience it.

That being said, I think each of these objections cuts both ways: they only tell us our intuitions are unreliable in these cases, they don't tell us whether lexicality should be accepted or rejected. I can think of arguments for each:

1. We should trust personal preferences (at least when informed by personal experience), even when they're unreliable, unless they are actually inconsistent with intuitions we think are more important and less unreliable, which isn't the case for me, but might be for others.

2. We should reject unreliable personal preferences that cost us uniformity in our theory. (The personal preferences are unreliable either way, but accommodating lexical ones make our theory less uniform, assuming we want to accept aggregating in certain ways in our theory in the first place, which itself might be contentious.)


I would be happy to discuss over a call, but it might actually be more productive to talk to Magnus Vinding if you can, since he's read and thought much more about this.

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T18:31:51.769Z · score: 2 (1 votes) · EA · GW
The way I'm using the terms, moral status and capacity for welfare are independent of realized welfare. Increasing realized welfare (e.g., through art/entertainment) doesn't raise one's capacity for welfare or moral status.

Couldn't it change the "proper subset of physically possible worlds" (or the kinds of sets of these) we use to define the welfare capacity of individuals of a given species? Where before art/entertainment might not have been included, now it is. Either we should have always included it, and we were mistaken before not to, since we just didn't know this was a possibility that should have been included, or the kinds of sets we could use did actually change.

The answer depends on where we draw the line between potential and capacity, which naturally is going to be contentious. I'm hopeful that not much in practice hangs on this question, but I'm open to examples where it does.

Normal development after conception seems like such an example. Obviously it matters for the abortion debate, but, for animals, I've heard the suggestion that juveniles of species with extremely high infant/juvenile mortality rates have little use for the capacity to suffer during this period of high mortality, so this would be a reason not to develop it until later, since it has energetic costs. This was based on Zach Freitas-Groff's paper on wild animal suffering.

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T18:08:32.613Z · score: 2 (1 votes) · EA · GW
As an example of how capacity for welfare might be distinct from moral status, one might be a hedonist about welfare (and thus think that capacity for welfare is wholly determined by possible range of valenced experience and maybe subjective experience of time) but think that moral status is determined by degree of autonomy or rationality. The precise definition of welfarism is contentious, so I'll leave it to you to decide if that's a violation of welfarism.

I don't see how you could motivate that if we accept welfarism (unless we accept objective list theories, but again, that seems to be through welfare capacity). Why are degree of autonomy and rationality non-instrumentally relevant? Why not the width of the visible electromagnetic spectrum, or whether or not an individual can see at all, or other senses?

That strikes me as the wrong result.

I really don't know. It's hard for me to have an intuition either way since both seem wrong to me, anyway. It seems better to me to double penalize an individual for things that are relevant to welfare than to non-instrumentally penalize individuals based on things which are at most instrumentally relevant to welfare.

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T17:53:28.398Z · score: 2 (1 votes) · EA · GW

Thanks! I wasn't aware of transworld identity being a separate problem.

I'm not sure how much genetic change an individual can undergo whilst remaining the same individual. (I suspect lots, but intuitions seem to differ on this question.)

I doubt that there will be a satisfying answer here (especially in light of transworld identity), and I think this undermines the case for different degrees of moral status. If we want to allow morally relevant features to sometimes vary continuously without changing identity, then, imo, the only non-arbitrary lines to draw would be where a feature is completely absent in one but present in another. But, I think there are few features that are non-instrumentally morally relevant; indeed only welfare and welfare capacity on their own seem like they could be morally relevant. So, it seems this could only work if there are different kinds of welfare, like in objective list theories, or with higher and lower pleasures.

As I mention in footnote 9, it's also unclear how much genetic change an individual can undergo whilst remaining the same species.

I think species isn't fundamental anyway; its definition is fuzzy, and it's speciesist to refer to it non-instrumentally. It's not implausible to me that, if identity works at all (which I doubt), a pig in one world is identical to an individual who isn't a pig in another world.

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T09:10:13.770Z · score: 8 (4 votes) · EA · GW
We want to circumscribe the set of possible worlds so that it includes all and only normal variation in the welfare values of species-typical animals.[13]
(...)
Admittedly, filling in the details of this relativization will be complex. It’s not at all clear how to define ‘normal variation’ or ‘species-typical animal.’ I set aside that difficulty for now.

If meant statistically, it could be that "normal" still happens to be pretty circumstantial. Most nonhuman animals, for example, probably don't get much intellectual stimulation without humans, but some actually do through things like puzzles and games. I'm guessing you would want to count that as normal, because it's a practical possibility today? But then would that mean that before we started giving animals puzzles and games, they had less moral status? This feels very different from enhancement.

And if we define moral status this way, it could be that human moral status has been increasing over time, too, due to environmental/social factors, like art and entertainment.

It could be that human moral status is actually decreasing or will decrease, because humans suffer less in modern times and will continue to suffer less and less in the future, without much increase to our peaks of happiness, because of the priority we give to suffering and its causes.

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T07:02:52.490Z · score: 3 (3 votes) · EA · GW

I wrote some thoughts related to moral status (not specifically welfare capacity) and personal identity here (EDIT: to clarify, the context was a discussion of the proposed importance of moral agency to moral status, but you could substitute many other psychological features for moral agency and the same argument should apply):

It seems to me that any specific individual is only a moral agent sometimes, at most. For example, if someone is so impaired by drugs or overcome with emotion that it prevents them from reasoning, are they a moral agent in those moments? Is someone a moral agent when they're asleep (and dreaming or not dreaming)? Are these cases so different from removing and then reinserting and reattaching the brain structures responsible for moral agency? In all these cases, the connections can't be used due to the circumstances, and while the last case is the clearest since the structure has been removed, you could say the structure has been functionally removed in the others. I don't think it's accurate to say "they can engage in rational choice" under these circumstances.
Perhaps people are moral agents most of the time, but wouldn't your account mean their suffering matters less in itself while they aren't moral agents, even as normally developed adults? In particular, I think intense suffering will often prevent moral agency, and while the loss of agency may be bad in itself (although I'm not sure I agree), the loss of agency from sleep would be similarly bad in itself, so this shouldn't be much worse than a human being forced to sleep and a nonhuman animal suffering as intensely, ignoring differences in long-term effects, and if the nonhuman animal's suffering doesn't matter much in itself relative to the (temporary) loss of moral agency, then neither would the human's. Torturing someone may often not be much worse than forcing someone to sleep (ignoring long-term effects), if the torture is intense enough to prevent moral agency. Or, deliberately, coercively and temporarily preventing a person's moral agency and torturing them isn't much worse than just deliberately, coercively and temporarily preventing their moral agency. This seems very counterintuitive to me, and I certainly wouldn't feel this way about it if I were the victim. Suffering in itself can be far worse than death.

Now, let's suppose identity and moral status are preserved to some degree in more commonsensical ways, and the human prefrontal cortex confers extra moral status. Then, there might be weird temporal effects. Committing to an act of destroying someone's prefrontal cortex and torturing them would be worse than destroying their prefrontal cortex and then later and independently torturing them, because in the first case, their extra moral status still applies to the torture beforehand, but in the second, once their prefrontal cortex is destroyed, they lose that extra moral status that would make the torture worse.

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T05:17:32.204Z · score: 4 (2 votes) · EA · GW

And while it might be the case that nonhuman animals act lexically, since they aren't as future-oriented and reflective as we are, their behaviour on its own might not be a good indication of moral lexicality. If we establish that an animal is suffering to an extent similar to how we suffer when we suffer lexically, then that's a reason to believe that suffering matters lexically, and if we establish that an animal is suffering to an extent similar to how we suffer when we don't suffer lexically, then that's a reason to believe that suffering doesn't matter lexically. In this way, it could turn out that insects act lexically, but their suffering doesn't matter lexically. Of course, it could also turn out that insects do suffer in ways that matter lexically.

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T04:47:29.050Z · score: 9 (3 votes) · EA · GW
Although I grant that this position has some initial intuitive appeal, I find it difficult to endorse—or, frankly, really understand—upon reflection. For this position to succeed, there would have to exist some sort of unbridgeable value gap between small interests and big interests. And while the mere existence of such a gap is perhaps not so strange, the placement of the gap at any particular point on a welfare or status scale seems unjustifiably arbitrary. It’s not clear what could explain the fact that the slight happiness of a sufficient number of squirrels never outweighs the large happiness of a single chimpanzee. If happiness is all that non-instrumentally matters, as Kazez assumes for the sake of argument, we can’t appeal to any qualitative differences in chimpanzee versus squirrel happiness.[76] (It’s not as if, for example, that chimpanzee happiness is deserved while squirrel happiness is obtained unfairly.) And how much happier must chimpanzees be before their happiness can definitively outweigh the lesser happiness of other creatures? What about meerkats, who we might assume for the sake of argument are generally happier than squirrels but not so happy as chimpanzees? There seems to be little principled ground to stand on. Hence, while we should acknowledge the possibility of non-additivity here, we should probably assign it a fairly low credence.

"Consent-based" approaches might work. They've been framed in the case of suffering, but could possibly work for happiness, too. Actually, I suppose this is similar to Mill's higher and lower pleasures (EDIT: as you mention in footnote 76), but without being dogmatic about what counts as a higher or lower pleasure even to the point of rejecting the preferences of those who experience both. See:

https://reducing-suffering.org/happiness-suffering-symmetric/#Consent-based_negative_utilitarianism

http://centerforreducingsuffering.org/clarifying-lexical-thresholds/

And, indeed, if we want to determine levels of suffering and pleasure based on the tradeoffs people would make, you will get lexicality unless you reject some tradeoffs, because some people have lexical views (myself included: if I had a very long life, I'd prefer many pin pricks, one at a time, spread out across days to a full day of torture with no long-term effects). How else could we ground cardinal degrees of suffering and pleasure except through individual tradeoffs?

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T04:39:12.312Z · score: 2 (1 votes) · EA · GW
Hence, if welfare constituents or moral interests are non-additive, we may not be able to use status-adjusted welfare to compare interventions.

I don't see why you couldn't combine them. You could aggregate non-additively based on status-adjusted welfare instead of welfare, or moral status could be a different kind of input to your non-additive aggregation. Your social welfare function could be a function of the sequence of pairs of moral status and welfare in outcome o: W(o) = f((s_1, w_1(o)), (s_2, w_2(o)), ..., (s_n, w_n(o))).
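
As a toy illustration of the first option (my own example, with made-up numbers): apply a concave, prioritarian-style transform to status-adjusted welfare, so the aggregation is non-additive in welfare itself:

```python
def social_welfare(pairs, alpha=0.7):
    """Toy non-additive aggregation: a concave (prioritarian-style) transform
    applied to status-adjusted welfare.
    pairs: list of (moral status, welfare) tuples; welfare assumed >= 0 for simplicity."""
    return sum((status * welfare) ** alpha for status, welfare in pairs)

# Hypothetical outcome with a human (status 1.0) and a mouse (status 0.5):
outcome = [(1.0, 10.0), (0.5, 9.0)]
print(social_welfare(outcome))
```

The second option would just mean passing status to f in some other way, e.g. as a separate argument that changes how much priority each individual's welfare gets, rather than multiplying welfare directly.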

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T04:18:05.806Z · score: 2 (1 votes) · EA · GW
Many people have the intuition that human babies have the same moral status as human adults despite the fact that adults are much more cognitively and emotionally sophisticated than babies.[61] Many people also have the intuition that severely cognitively-impaired humans, whose intellectual potential has been permanently curtailed, have the same moral status as species-typical humans.[62] And many people have the intuition that normal variation in human intellectual capacities makes no difference to moral status, such that astrophysicists don’t have a higher moral status than social media influencers.[63] These intuitions are easier to accommodate if moral status is discrete.[64]

I don't think we can accommodate "Many people also have the intuition that severely cognitively-impaired humans, whose intellectual potential has been permanently curtailed, have the same moral status as species-typical humans.[62]" no matter the theoretically possible extent of impairment (as long as the individual remains sentient, say) without abandoning degrees of moral status completely. Maybe actual sentient humans have never been sufficiently impaired for this, and that's what their intuitions refer to?

Also, if moral status is discrete but can differ between two individuals because of features that are present in both, based on their degrees of expression, then the cutoff is going to be arbitrary, and that seems like a good argument against discrete statuses. So, it seems that different discrete moral statuses could only be justified by the presence or complete absence of features. But then we get into weird (but perhaps not implausible) discontinuities, where an individual A could have an extra feature to a vanishing degree, but be a full finite degree of moral status above another individual, B, who is identical except for that feature, and have as much status as an individual, C, who has that feature to a very high degree but is otherwise identical to both. We can make the degree to which the feature is present in A arbitrarily small, and this would still hold.

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T04:11:09.806Z · score: 3 (3 votes) · EA · GW

Using welfare capacity to determine moral status also solves the problem of how to weight different features, including combination effects, in a non-arbitrary way, if we can define welfare capacity non-arbitrarily (although I'm skeptical of this, see my other comment). That being said, the ranking we get out of this for moral status is still only ordinal.

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T04:04:13.734Z · score: 9 (3 votes) · EA · GW
One reading is that capacity for welfare directly determines (at least in part) moral status. The other reading is that moral status is grounded in various capacities that also just so happen to be relevant for determining capacity for welfare. The first interpretation runs the risk of double-counting. Even before considering moral status, we can say that lives that contain more and more of non-instrumental goods are more valuable than lives that contain fewer and less of those non-instrumental goods. It’s not clear why those lives should gain additional moral value—in virtue of a higher moral status—merely because they were more valuable in the first place. For this reason, I think it makes more sense to think that capacity for welfare does not play a direct role in determining moral status, though many of the features relevant for welfare capacity are also relevant for moral status.

I actually find the opposite conclusion more plausible: capacity for welfare is what directly determines moral status (if unitarianism is false; I think unitarianism is true), and specific features/capacities only matter through capacity for welfare and effects on welfare. If we're saying that there are features that determine moral status not through their effects on welfare or capacity for welfare, then it sounds like we're rejecting welfarism. We're saying welfare matters, but it matters more in beings with feature X, but not because of how X matters for welfare. How can X matter regardless of its connection to welfare? That seems pretty counterintuitive to me, as a welfarist. Or am I misunderstanding?


Maybe it's something like this?


Through welfare capacity:

Premise 1. Capacity for welfare (partially) determines moral status.

Premise 2. Feature X (partially) determines capacity for welfare.

Conclusion. Feature X (partially) determines moral status through capacity for welfare.


The other approach:

Premise 1'. If a feature X (partially) determines (actual welfare or) capacity for welfare, then it (partially) determines moral status.

Premise 2'. Feature X (partially) determines (actual welfare or) capacity for welfare.

Conclusion'. Feature X (partially) determines moral status because of (but not necessarily through) capacity for welfare.


Premise 1' seems less intuitive to me than Premise 1. If a feature determines actual welfare, that's already in our moral calculus without need for moral status. As a welfarist, it seems therefore that a feature can only determine moral status because it determines welfare capacity, unless there's some other way the feature can be connected to welfare. If this is the case, how else could it plausibly do this except through welfare capacity?

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T03:20:22.613Z · score: 3 (3 votes) · EA · GW
If there is a human being that currently scores 10 out of 100 and a mouse that currently scores 9 out of 10, prioritarianism and egalitarianism imply, all else equal, that we ought to increase the welfare of the mouse before increasing the welfare of the human.

To clarify, this is if we're increasing their welfare by the same amount, right? Prioritarianism and egalitarianism wouldn't imply that it's better for the mouse to be moved to 10 than for the human to be moved to 100.

Tatjana Višak (2017: 15.5.1 and 15.5.2) argues that any welfare theory that predicts large differences in realized welfare between humans and nonhuman animals must be false because, given a commitment to prioritarianism[52] or egalitarianism,[53] such a theory of welfare would imply that we ought to direct resources to animals that are almost as well-off as they possibly could be.

It seems like the opposite could be true in theory with an antifrustrationist or negative account of welfare where the max is 0, if an individual human's welfare is harder to maximize, say, given the more varied and/or numerous preferences or stronger interests we have (e.g. future-oriented preferences). In practice, though, the average life of a nonhuman animal of many species, wild or farmed, does seem to me to involve more suffering (per second).

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T02:27:56.009Z · score: 7 (3 votes) · EA · GW
There are, however, countervailing considerations. While it’s true that sophisticated cognitive abilities sometimes amplify the magnitude of pain and pleasure, those same abilities can also act to suppress the intensity of pain and pleasure.[35] When I go to the doctor for a painful procedure, I know why I’m there. I know that the procedure is worth the pain, and perhaps most importantly, I know that the pain is temporary. When my dog goes to the vet for a painful procedure, she doesn’t know why she’s there or whether the procedure is worth the pain, and she has no idea how long the pain will last.[36] It seems intuitively clear that in this case superior cognitive ability reduces rather than amplifies the painful experience.[37]

Anecdotally, I basically started on my journey towards EA because I read something like this in the case of children hospitalized for chronic illness, from the textbook Health Psychology by Shelley E. Taylor:

Although many children adjust to these radical changes in their lives, some do not. Children suffering from chronic illness exhibit a variety of behavioral problems, including rebellion and withdrawal (Alati et al., 2005). They may suffer low self-esteem, either because they believe that the chronic illness is a punishment for bad behavior or because they feel cheated because their peers are healthy.

Maybe this also highlights another side to moral status about why many humans care more about children: vulnerability and innocence (maybe more naivety than lack of guilt).

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T02:08:01.215Z · score: 9 (3 votes) · EA · GW
If welfare is a unified concept and if welfare is a morally significant category across species, it seems as if invariabilism is the better option. Invariabilism is the simpler view, and it avoids the explanatory pitfalls of variabilism at little intuitive cost. While we should certainly leave open the possibility that variabilism is the correct view, in what follows I will assume invariabilism.

These also seem like reasons to reject objective list theories and higher and lower pleasures in favour of simpler hedonistic or desire-fulfilment theories of welfare.

“It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied. And if the fool, or the pig, are of a different opinion, it is because they only know their own side of the question. The other party to the comparison knows both sides”

But what if the human or Socrates disagrees that their pleasures are higher? It seems like we'd be overriding preferences to claim that certain kinds of pleasures are higher pleasures, and if some people who experience both don't recognize any pleasures as higher, we'd have to explain why they're wrong, and it would also seem to follow that most people are making pretty bad tradeoffs in their lives by not prioritizing higher pleasures enough.

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T01:59:54.511Z · score: 11 (6 votes) · EA · GW
Octopuses are solitary creatures and thus plausibly will never experience true friendship or love.

Maybe if you drug them. And if drug effects do not count towards someone's capacities, this might have important moral consequences in cases of mental illness in humans, like depression.

Possibly, they tend to be solitary due to zero-sum competition, and this happens to be circumstantial. See how this octopus and human interact.

Also, mother octopuses often die protecting their eggs, although I'm not aware of them raising their young.

Of course, maybe love and friendship aren't good ways to describe these. Maybe octopuses are more like bees than cows in their maternalism.

If moral agency is a requirement for virtue, fish plausibly cannot be virtuous.

I think this would depend on how narrowly you define agency. If it requires abstract reasoning, maybe not? I think a case could be made for cleaner wrasses, who seem to pass a version of the mirror test and have complex social behaviours. Maybe groupers and moray eels, too, because of their cooperation in hunting?

Comment by michaelstjules on Comparisons of Capacity for Welfare and Moral Status Across Species · 2020-05-18T01:19:55.490Z · score: 3 (2 votes) · EA · GW

Very excited about this series. Thanks!

I'm still getting through your post, so I apologize if this is addressed later in it.

In somewhat formal terms, the capacity for welfare for some subject, S, is determined by the range of welfare values S[12] experiences in some proper subset of physically possible worlds.

EDIT: I don't think it necessarily means rejecting independence of irrelevant alternatives (IIA), but doing so might be part of some approaches.

I think this means rejecting the independence of irrelevant alternatives (IIA), which is something consequentialists typically take for granted, often without even knowing, by simply assuming we can rank all conceivable outcomes according to a single ranking. Rejecting it means whether choice A is better or worse than choice B can depend on what other alternatives are available. I'm not personally convinced IIA is true or false (and I think rejecting it can resolve important paradoxes and impossibility results in population ethics, like the repugnant conclusion), but I wouldn't want to reject IIA to define and value something like capacity.

Another assumption this seems to make is that S is actually the same subject across different outcomes (in which they have different levels of welfare). I think there's a better argument against this in cases of genetic enhancement, which could be used to support valuing capacities if we think subjects who differ genetically or significantly in capacities are different subjects, but I also think attempts to identify subjects across outcomes or time are poorly justified, pretty arbitrary and face good objections. This is the problem of personal identity, and Parfit's Relation R seems like the best solution I'm aware of, but it also seems too arbitrary to me. I lean towards empty individualism.

Comment by michaelstjules on How should longtermists think about eating meat? · 2020-05-17T23:53:40.947Z · score: 3 (2 votes) · EA · GW
I'd call this mid term rather than long term, but the impacts of animal agriculture on climate change, zoonotic disease spread and antibiotic resistance are significant.

Aren't those extinction risks, although perhaps less severe or likely to cause extinction than others, according to EAs?

Comment by michaelstjules on Applying speciesism to wild-animal suffering · 2020-05-17T23:43:03.764Z · score: 3 (2 votes) · EA · GW

Related: Legal Personhood and the Positive Rights of Wild Animals by Jay Shooster for Wild-Animal Suffering Research (which has been merged into Wild Animal Initiative).

Comment by michaelstjules on How should longtermists think about eating meat? · 2020-05-17T23:27:48.020Z · score: 2 (1 votes) · EA · GW

Similar question: Does improving animal rights now improve the far future? by Evira.

Comment by michaelstjules on How should longtermists think about eating meat? · 2020-05-17T18:32:20.147Z · score: 32 (19 votes) · EA · GW

To summarize:

Eating meat -> narrower moral circle not sufficiently valuing the welfare of artificial sentience and/or wild animals -> existential risks (mostly suffering risks)


Animals and longtermism (although not specifically about your own diet):

Comparing diet to charity (often older charity cost-effectiveness estimates):

Animal charities and interventions, with newer estimates:

See also the comments.

I think there are also psychological effects of eating meat that might cause people to not give animals the moral weight they would think they deserve upon careful reflection.

Comment by michaelstjules on Physical theories of consciousness reduce to panpsychism · 2020-05-10T21:54:01.903Z · score: 2 (1 votes) · EA · GW

Some good discussion ended up on Facebook here.

Comment by michaelstjules on Conditional Donations · 2020-05-09T21:02:21.878Z · score: 10 (8 votes) · EA · GW

"Donor coordination" is used, and Google turns up some results that seem relevant. There's been writing in EA on this topic; here's some on GiveWell's website. Relatedly, Open Philanthropy Project tries not to fund more than half of its grantees' budgets.

Comment by michaelstjules on Opinion: Estimating Invertebrate Sentience · 2020-05-09T19:41:19.970Z · score: 4 (3 votes) · EA · GW

Some neuroscience-based arguments against pain in

Comment by michaelstjules on What should Founders Pledge research? · 2020-05-09T18:54:37.721Z · score: 9 (2 votes) · EA · GW

I ended up looking at some theories of consciousness and wrote Physical theories of consciousness reduce to panpsychism. Brian Tomasik has also of course written plenty about panpsychism, and I reference some of his writing.

Comment by michaelstjules on Physical theories of consciousness reduce to panpsychism · 2020-05-09T09:32:20.354Z · score: 2 (1 votes) · EA · GW
You might still think there needs to be some level of complexity within your system to approach a level of valenced conscious experience anything like that which you and I are familiar. Even if there's no arbitrary "complexity cut-off", for "processes that matter morally" do we care about elemental systems that might have, quantitatively, a tiny, tiny fraction of the conscious experience of humans and other living beings?

I think we couldn't justify not assigning them some value with such an approach, even if it's so little we can ignore it (although it could add up).

To be a bit more concrete about it (and I suspect you agree with me on this point): when it comes to thinking about which animals have valenced conscious experience and thus matter morally, I don't think panpsychism has much to add - do you? To the extent that GWT, HOT, or IIT ends up being confirmed through observation, we can then proceed to work out how much of each of those experiences each species of animal has, without worrying how widely that extends out to non-living matter.

I agree, and I think this could be a good approach.

My reading leading up to this post and the post itself were prompted by what seemed to be unjustifiable confidence in almost all nonhuman animals not being conscious. Maybe a more charitable interpretation or a steelman of these positions is just that almost all nonhuman animals have only extremely low levels of consciousness compared to humans (although I'd disagree with this).

Comment by michaelstjules on Physical theories of consciousness reduce to panpsychism · 2020-05-08T23:33:42.290Z · score: 2 (1 votes) · EA · GW

I've added sections "Related work" and "What about specific combinations of these processes?".

Comment by michaelstjules on Physical theories of consciousness reduce to panpsychism · 2020-05-08T22:16:54.533Z · score: 2 (1 votes) · EA · GW
Does anyone really claim that every recurrent net is conscious? It seems so implausible.

I think IIT supporters would claim this. I don't think most theories or their supporters claim to be panpsychist, but I think if you look at their physical requirements abstractly, they are panpsychist. Actually, Lamme, who came up with Recurrent Processing Theory, claims that it, IIT and GNWT endorse panpsychism here, and it seems that he really did intend for two neurons to be enough for recurrent processing:

Current models of consciousness all suffer from the same problem: at their core, they are fairly simple, too simple maybe. The distinction between feedforward and recurrent processing already exists between two reciprocally connected neurons. Add a third and we can distinguish between ‘local’ and ‘global’ recurrent processing. From a functional perspective, processes like integration, feature binding, global access, attention, report, working memory, metacognition and many others can be modelled with a limited set of mechanisms (or lines of Matlab code). More importantly, it is getting increasingly clear that versions of these functions exist throughout the animal kingdom, and maybe even in plants.

1. In a more limited form applying to basically all animals and possibly plants, too, but I think his view of what should count as a network or processing might be too narrow, e.g. why shouldn't an electron and its position count as a neuron and its firing?

Comment by michaelstjules on Physical theories of consciousness reduce to panpsychism · 2020-05-08T02:59:59.172Z · score: 2 (1 votes) · EA · GW

I also think these objections will apply to panpsychism generally and to any precise physical requirements that don't draw arbitrary lines. In particular, they apply to the other proposed requirements Luke describes in 6.2. Combining precise physical requirements in specific ways, e.g. attention feeds into working memory, which feeds into a process that models/predicts its own behaviour/attention, won't really solve the problem: if each of those requirements, once made precise, is satisfied to nonzero degree so ubiquitously in nature, then specific combinations of them will happen to be, too.

Comment by michaelstjules on Physical theories of consciousness reduce to panpsychism · 2020-05-08T01:20:59.162Z · score: 3 (2 votes) · EA · GW

There's still an ongoing debate as to whether or not the prefrontal cortex is necessary for consciousness in humans, with some claiming that it's only necessary for report in humans:

https://plato.stanford.edu/entries/consciousness-neuroscience/#FronPost

https://www.jneurosci.org/content/37/40/9603

https://www.jneurosci.org/content/37/40/9593.full

https://onlinelibrary.wiley.com/doi/abs/10.1111/mila.12264

Whether or not it is necessary for consciousness in humans could decide whether or not many nonhuman animals are conscious, assuming the kinds of processes happening in the prefrontal cortex are somehow fairly unique and necessary for consciousness generally, not just in humans (although I think attempts to capture their unique properties physically will probably fail to rule out panpsychism non-arbitrarily, like in this post).

Comment by michaelstjules on Physical theories of consciousness reduce to panpsychism · 2020-05-07T19:41:09.083Z · score: 3 (2 votes) · EA · GW

I agree that IIT doesn't seem falsifiable since there's no way to confirm something isn't conscious, and that's an important objection, because there probably isn't consciousness without information integration. At least with the other theories I looked at, we could in principle have some confidence that recurrence or attention or predicting lower-order mental states probably isn't necessary, even though there are no sharp lines between processes that are doing these things and those that aren't, and the ones that do to nonzero degree seem ubiquitous. But these processes can only really be ruled out as necessary if they are not necessary for eventual report.

Do I need to be able to eventually report (even just to myself) that I experienced something to have actually experienced it? This also seems unfalsifiable. So processes required for eventual report (the ones necessarily used during experiences that are eventually reported, but not necessarily the ones used during the report itself) can't be ruled out as unnecessary, and I'm concerned that the more complex theories of consciousness are approaching theories of reportability (in humans), not necessarily theories of consciousness. No report paradigms only get around this through the unfalsifiable assumption that reflexive behaviours correlated with report (under certain experimental conditions) actually indicate consciousness in the absence of report.

So, IIT accepts basically everything as conscious, while reportability requirements can rule out basically everything except humans (and maybe some "higher" animals) under specific conditions (EDIT: actually, I'm not sure about this), both are unfalsifiable, and basically all other physical theories with academic supporters fall between them (maybe with a few extra elements that are falsifiable), and therefore also include unfalsifiable elements. Choosing between them seems like a matter of intuition, not science. Suppose we identified all of the features necessary for reportability. Academics would still be arguing over which ones among these are necessary for consciousness. Some would claim all of them are, others would still support panpsychist theories, and there doesn't seem to be any principled way to decide. They'd just fit their theories to their intuitions of which things are conscious, but those intuitions aren't reliable data, so this seems backwards.

One skeptical response might be that reportability is required for consciousness. But another skeptical response is that if you try to make things precise, you can't rule out panpsychism non-arbitrarily, as I illustrate in this post.

Slightly weaker than report and similar to reportability, sometimes "access" is considered necessary (consciousness is access consciousness, according to Dennett). But access seems to be based on attention or global workspaces, and imprecisely defined processes that are accessing them, and I argue in this post that attention and global workspaces can be reduced to ubiquitous processes, and my guess is that the imprecisely defined processes accessing them aren't necessary (for the same reasons as report) or attempts to define them in precise physical terms will also either draw arbitrary lines or lead to reduction to panpsychism anyway.

Here are some definitions of access consciousness:

Access consciousness: conscious states that can be reported by virtue of high-level cognitive functions such as memory, attention and decision making.

https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(11)00125-2

A perceptual state is access-conscious, roughly speaking, if its content - what is represented by the perceptual state - is processed via that information-processing function, that is, if its content gets to the Executive System, whereby it can be used to control reasoning and behavior.
(...)
A state is access-conscious (A-conscious) if, in virtue of one's having the state, a representation of its content is (1) inferentially promiscuous (Stich 1978), that is, poised for use as a premise in reasoning, (2) poised for rational control of action, and (3) poised for rational control of speech. (I will speak of both states and their contents as A-conscious.) These three conditions are together sufficient, but not all necessary. I regard (3) as not necessary (and not independent of the others), because I want to allow that nonlinguistic animals, for example chimps, have A-conscious states. I see A-consciousness as a cluster concept, in which (3) - roughly, reportability - is the element of the cluster with the smallest weight, though (3) is often the best practical guide to A-consciousness.

http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/1995_Function.pdf

Comment by michaelstjules on Physical theories of consciousness reduce to panpsychism · 2020-05-07T17:05:54.270Z · score: 5 (3 votes) · EA · GW

Relevant:

If we say an electron is conscious too (although I doubt an isolated electron on its own should be considered conscious, since there may be no physical process there), we may need to expand our set of moral patients considerably. It's possible an electron is conscious and doesn't experience anything like suffering, pleasure or preferences (see this post), but then we also don't (currently, AFAIK) know how to draw lines between suffering and non-suffering conscious processes.

Comment by michaelstjules on Physical theories of consciousness reduce to panpsychism · 2020-05-07T16:47:31.282Z · score: 2 (1 votes) · EA · GW

I don't think the theories I looked at can conclude they're not without making arbitrary distinctions in matters of degree rather than kind. Most of the theories themselves make such arbitrary distinctions in my view; maybe all of them except IIT?

Comment by michaelstjules on Physical theories of consciousness reduce to panpsychism · 2020-05-07T16:34:11.741Z · score: 2 (1 votes) · EA · GW
I your description of how complex physical processes like global attention / GWT to simple ones like feedforward nets.

I think you missed a word. :P

E.g. to describe a recurrent net as a feedforward net you need a ridiculous number of parameters (with the same parameter values in each layer).

That's true, but there's no good line to draw for the number of iterations, so this seems more a matter of degree than kind. (I also don't see why the parameter values should be the same, but maybe this could be important. I wrote that I found this unlikely, but not extremely unlikely.)
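
To make the quoted point (and the "matter of degree" reply) concrete, here's a minimal numpy sketch, with made-up dimensions, of unrolling a recurrent net into a feedforward stack that repeats the same parameters in every layer; the number of layers needed just grows with the number of time steps, so there's no obvious cutoff in recurrence depth:

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 3))   # made-up sizes
W_rec = rng.normal(size=(4, 4))

def rnn(xs):
    """Recurrent net: the same weights reused at every time step."""
    h = np.zeros(4)
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

def unrolled_feedforward(xs):
    """The same computation as a feedforward net with one layer per time step,
    each layer carrying its own copy of the (identical) parameters."""
    h = np.zeros(4)
    layers = [(W_in, W_rec)] * len(xs)   # lots of parameters, all tied
    for (w_in, w_rec), x in zip(layers, xs):
        h = np.tanh(w_in @ x + w_rec @ h)
    return h

xs = [rng.normal(size=3) for _ in range(5)]
print(np.allclose(rnn(xs), unrolled_feedforward(xs)))  # True
```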

So that doesn't imply that the universe is full of recurrent nets (even if it were full of feedforward nets which it isn't).

I do think the universe is full of both, see Brian's comment. E.g. an electron influences other particles which in turn influence the electron again.

To draw a caricature of your argument as I understand it: It turns out computers can be reduced to logic gates. Therefore, everything is a computer.

Basically. These theories claim that certain kinds of physical processes are required (perhaps arranged in certain ways), but those processes are ubiquitous (and so will often be "accidentally" arranged in those certain ways, too), although to much lower degrees. It's like saying "Computers are physical systems made up of logic gates. Logic gates are everywhere, so computers are everywhere." The necessary conditions of these theories that can be explained in physical terms are too easy to meet.

Or another caricature: Recurrent nets are a special case of {any arrangement of atoms}. Therefore any arrangement of atoms is an RNN.

This would of course be invalid logic on its own. I'd say that ubiquitous feedforward processes simulate recurrent ones to a shallow recurrence depth.

On further reflection, though, I think X → Y → X or X → Y → X → Y may be better called recurrent than just X → Y, since the latter only includes one of each of X and Y.

(I do think most actual local (in the special/general relativity sense) arrangements of atoms have recurrence in them, though, as long as the atoms' relative positions aren't completely fixed. I expect feedback in their movements due to mutual influence.)
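To make the mutual-influence point concrete, here's a toy sketch (my own illustration; the coupling constant, time step and initial conditions are arbitrary placeholders): two particles whose updates each depend on the other's current state, so each one's influence loops back to it.

```python
import numpy as np

# Toy sketch (my own illustration): two mutually interacting particles.
# Each particle's next position depends on the other's current position,
# so particle A influences B, which in turn influences A again: a
# feedback loop, i.e. a (shallow) form of recurrence.

dt = 0.01
k = 1.0                      # arbitrary coupling constant
pos = np.array([0.0, 1.0])   # positions of particles A and B
vel = np.array([0.0, 0.0])

for step in range(100):
    # Force on each particle from the other (simple spring-like coupling).
    force_on_A = k * (pos[1] - pos[0])
    force_on_B = k * (pos[0] - pos[1])
    vel += dt * np.array([force_on_A, force_on_B])
    pos += dt * vel          # A's new position now reflects B's influence, and vice versa

print(pos)  # each trajectory carries the other's past influence
```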

Comment by michaelstjules on Physical theories of consciousness reduce to panpsychism · 2020-05-07T15:51:31.089Z · score: 4 (3 votes) · EA · GW
I'm not sure if I understand the argument correctly, but what would you say to someone who cites the "fallacy of division"? For example, even though recurrent processes are made of feedforward ones, that doesn't mean the purported consciousness of the recurrent processes also applies to the feedforward parts. My guess is that you'd reply that wholes can sometimes be different from the sum of their parts, but in these cases, there's no reason to think there's a discontinuity anywhere, i.e., no reason to think there's a difference in kind rather than degree as the parts are arranged.

I basically agree. I think there are no good lines to draw anywhere, so it seems to me to be a difference of degree. I'd guess we can still propose minimal isolated systems that are not conscious (an isolated electron, perhaps), but that kind of isolation seems rare (maybe impossible?) in the real world.

That being said, I don't think the physical theories have picked out precise properties of "wholes" that don't also apply, to lesser degrees, to small ubiquitous systems.

Consider a table made of five pieces of wood: four legs and a top. Suppose we create the table just by stacking the top on the four legs, without any nails or glue, to keep things simple. Is the difference between the table versus an individual piece of wood a difference in degree or kind? I'm personally not sure, but I think many people would call it a difference in kind.

I think people either don't have a precise definition in mind when they think of tables, or, if they do, have something in mind that would specifically rule this case out. Or they'll revise their definition when presented with such an example: "Oh, but the legs have to be attached!" Of course, what do they mean by legs?

I think an alternate route to panpsychism is to argue that the electron has not just information integration but also the other properties you mentioned. It has "recurrent processing" because it can influence something else in its environment (say, a neighboring electron), which can then influence the original electron. We can get higher-order levels by looking at one electron influencing another, which influences another, and so on. The thing about Y predicting X would apply to electrons as well as neurons.

Agreed. Good point.

Comment by michaelstjules on The Alienation Objection to Consequentialism · 2020-05-06T20:51:16.283Z · score: 2 (1 votes) · EA · GW

I think we're taking impartiality for granted here. Consequentialism doesn't imply impartiality.

Comment by michaelstjules on The Alienation Objection to Consequentialism · 2020-05-05T23:37:37.137Z · score: 4 (3 votes) · EA · GW
One thought to have about this case is that you have the wrong motivation in visiting your friend. Plausibly, your motive should be something like ‘my friend is suffering; I want to help them feel better!’ and not ‘helping my friend has better consequences than anything else I could have done.’ Imagine what it would be like to frankly admit to your friend, “I’m only here because being here had the best consequences. If painting a landscape would have led to better consequences, I would have stayed home and painted instead.” Your friend would probably experience this remark as cold, or at least overly abstract and aloof. They might have hoped that you’d visit them because you care about them and about your relationship with them, not because their plight happened to offer you your best opportunity for doing good that afternoon. To put things another way, we might say that your motivation alienates you from your friend.

Can't this example be generalized and used against any ethical theory that values more than just that one friend and their wellbeing, which is basically every plausible ethical theory? You have to weigh reasons against one another, so any theory could be framed as responding, "I'm only here because I had the most all-things-considered reason to be here. If I had more all-things-considered reason to paint a landscape, I would have stayed home and painted instead."

Impartial consequentialist theories weigh reasons in particular ways and, as you point out, don't terminally recognize certain ideals, like friendship, that we perhaps should, which is what alienation is about (although your friend's welfare is still a consideration!).

I guess this is more of a response to that particular example and its framing, not to say that impartial consequentialism isn't more alienating than other theories.

Comment by michaelstjules on The Alienation Objection to Consequentialism · 2020-05-05T22:10:47.989Z · score: 3 (2 votes) · EA · GW
Hm, to my ear, prioritizing a friend just because you happen to be biased towards them is more circumstantial. It's based on accidents of geography and life events that led you to be friends with that person to a greater degree than with other people you've never met.

That's a good point. I think one plausible-sounding response is that while the friendship itself was started largely circumstantially, the reason you maintain and continue to value the relationship is not so circumstantial, and has more to do with your actual relationship with that other person.

It's quite difficult to reconcile with my revealed priorities as someone who definitely doesn't live up to my own consequentialism, yes, but I bite the bullet that this is really just a failure on my part (or, as you mention, the "instrumental" reasons to be a good friend also win over anyway).

If you do think it is a failure on your part, then the belief that it's the best thing you could be doing isn't your actual reason, and isn't one of your actual reasons special concern for your friend or your relationship with them? I suppose the point is that you don't recognize that reason as an ethical one; it's just something that happens to explain your behaviour in practice, not what you think is right.

Comment by michaelstjules on The Alienation Objection to Consequentialism · 2020-05-05T21:44:36.736Z · score: 4 (2 votes) · EA · GW

Are there better reasons to value a relationship than that it allows you or the other person to do more good or be a better person? This seems like it could be the best reason to value a relationship, because it's unselfish. And it doesn't seem that alienating to me.

I suppose the point is that there shouldn't be a reason, and we should just value the relationship in itself. But then we're left with taking that as an axiom without justification (or else that justification would be a reason). And are we sure we aren't just being selfish or trying to justify selfishness by giving relationships special status to avoid much more demanding moral obligations?