Posts

Introducing the Stock Issues Framework: The INT Framework's Cousin and an "Advanced" Cost-Benefit Analysis Framework 2020-10-03T07:18:54.045Z · score: 11 (7 votes)

Comments

Comment by harrison-d on Timeline Utilitarianism · 2020-10-10T07:02:39.412Z · score: 2 (2 votes) · EA · GW

(In light of advice I've read to just go ahead and comment without always trying to write something super substantive/eloquent, I'll say that) I'm definitely interested in this idea and in evaluating it further, especially since I'm not sure I had really thought about it in an explicit way before: I generally just think "average per person/entity's aggregate [over time] vs. sum aggregate of all entities," without focusing much on the distinction between an entity's aggregate over time and that same entity's average over time. Such an approach might have particular relevance under models that take a less unitary/consistent view of human consciousness. I'll have to leave this open and come back to it with a fresh/rested mind, but for now I think it's worth an upvote for making me recognize that I may not have considered a question like this before.

Comment by harrison-d on Sortition Model of Moral Uncertainty · 2020-10-06T02:40:47.948Z · score: 1 (1 votes) · EA · GW

I think you highlight some potentially good points in favor of this approach, and I can't say I've thoroughly analyzed it. However, quite a few of those pros seem non-unique to this particular model of moral uncertainty: other frameworks that acknowledge uncertainty and try to weigh the significance of the scenarios against each other already "stop a moral theory from dominating...," "make you less fanatical," etc. (though there are some seemingly unique pros, such as "It has no need for intertheoretic comparisons of value").

Still, I am highly skeptical of such a model even in comparison to simply "going with whatever theory you are most confident in," because of its complexity, among other things. More importantly, I think this model fails to weight the significance of the situation, and thus wouldn't perform well under basic expected-value tests (which you might have been getting at with your point about choosing theories with low "stake"). Suppose your credences are 50% average utilitarian and 50% total utilitarian, and you are presented with a situation where choice A mildly improves average utility, such as by severely restricting some population's growth rate (imagine it's an animal population), but is drastically bad from a total utilitarian viewpoint in comparison to choice B (do nothing / allow the population to grow). To use simple numbers, choice A = +5, -100 (utility under "average, total") vs. choice B = 0, 0. If the sortition draw selects average utilitarianism, the decisionmaker takes choice A, which is drastically bad on balance. This is why (to my understanding), when your educated intuition says you have the time, knowledge, etc. to do some beneficial analysis, you should try to weight and compare the significance of the situation under the different moral frameworks.
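To make the expected-value worry concrete, here is a minimal sketch in Python (my own illustration, not anything from the original post; the names `credences`, `payoffs`, etc. are hypothetical). It assumes sortition draws a theory with probability equal to your credence in it and, as the simple numbers above imply, treats the utility units as comparable across the two theories:

```python
# Toy numbers from the example above: choice A = (+5 average, -100 total),
# choice B = (0, 0), with 50/50 credence in each theory.

credences = {"average": 0.5, "total": 0.5}

payoffs = {"A": {"average": 5, "total": -100},
           "B": {"average": 0, "total": 0}}

def favorite_choice(theory):
    """The choice a theory takes if the sortition draw selects it."""
    return max(payoffs, key=lambda c: payoffs[c][theory])

def expected_value(choice):
    """Credence-weighted value of a choice across both theories."""
    return sum(p * payoffs[choice][t] for t, p in credences.items())

print({t: favorite_choice(t) for t in credences})
# {'average': 'A', 'total': 'B'} -- half the time, sortition takes A

print({c: expected_value(c) for c in payoffs})
# {'A': -47.5, 'B': 0.0} -- weighting significance instead favors B
```

So sortition picks the -100-total option half the time, whereas a credence-weighted comparison would always pick B; that gap is the "failing to weight significance" problem.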

Comment by harrison-d on Denise_Melchin's Shortform · 2020-10-03T18:54:06.957Z · score: 1 (1 votes) · EA · GW

Perhaps comments/posts should have more than just one "like or dislike" metric? For example, they could allow upvoting or downvoting in categories such as "significant/interesting," "accurate," and "novel." This need not eliminate the simple overall vote, either, for those who prefer it.

(People may have already discussed this somewhere else, but I figured why not comment--especially on a post that asks if we should engage more?)

Comment by harrison-d on Factors other than ITN? · 2020-10-03T08:44:32.556Z · score: 1 (1 votes) · EA · GW

I'm not sure this directly answers it, but your question did finally lead me to write the post about the stock issues framework (which seems to be listed in the pingbacks). I hope that's relevant!

Comment by harrison-d on A Toy Model of Hingeyness · 2020-09-12T21:23:07.137Z · score: 2 (2 votes) · EA · GW

I think those changes help clarify things! I just didn't quite understand your intent with the original wording/heading. I think it is a good idea to try to highlight the potential different definitions for the concept, as well as issues with those definitions.

Comment by harrison-d on A Toy Model of Hingeyness · 2020-09-10T18:36:38.112Z · score: 1 (1 votes) · EA · GW

(Edit 2/note: the OP's edits in response to this comment render it fairly irrelevant, except as a more detailed explanation of why defining hingeyness in terms of total possible range (see "2. Older decisions are hingier?") doesn't seem to be a very useful concept.)

Apologies in advance if I'm misunderstanding your point; I've never analyzed "hingeyness" much, so I'm not trying to advance a theory or necessarily contest your overall argument. However, one thing you said doesn't sit well with me, namely the part where you argue that older decisions are necessarily hingier, which is part of why you think the definition tied to the "Hinge of History" is not very helpful. I can think of lots of situations, both real and hypothetical, where a decision at time X (say, "year 1980" or "turn 1") has much less effect on both direct utility and future choices than a decision or set of decisions at time Y (say, "year 1999" or "turn 5"), in part because decision X may have (almost) no effect on the choices much later: it need not affect which options are available, nor what effects those options have.

Take as a hypothetical example a game where you are in a room with four computers, labeled 1-4. At the start of the game (point 1), only computer 1 is usable, and you can choose option 1a or option 1b. The specifics don't matter much for my argument, but suppose 1a produces +5 utility and turns on computer 2, while 1b produces +3 utility and turns on computer 3 (suppose computers 2 and 3 each offer options worth between +1 and +10 utility). However, regardless of what you do at point 1--whether you press 1a or 1b--computer 4 also turns on. This is point 2 in the game. On computer 4, option 4a produces -976,000 utility and option 4b produces +865,000 utility. And then the game ends.
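Since this is explicitly a toy model, a small sketch may help; this is my own encoding of the example (the names `point1`/`point2` are hypothetical), comparing how much utility each decision point can swing on its own:

```python
# The four-computer game: computer 1's options at point 1, and computer 4's
# options at point 2 (computer 4 turns on regardless of the point-1 choice).
point1 = {"1a": 5, "1b": 3}
point2 = {"4a": -976_000, "4b": 865_000}

# Utility "spread" each decision point controls by itself:
spread1 = max(point1.values()) - min(point1.values())  # 2
spread2 = max(point2.values()) - min(point2.values())  # 1_841_000

print(spread1, spread2, spread2 / spread1)  # 2, 1841000, 920500.0
```

The earlier decision swings the outcome by 2 utility points and the later one by 1,841,000, and the later options don't depend on the point-1 choice at all, so a "total possible range" definition that automatically makes older decisions hingier seems to miss what matters.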

This paragraph is unnecessary if the computer-game example above already made sense, but for a more real-world example, I would point to (the original) Quiplash: although not as drastic as the hypothetical above, my family and I would often complain that the game was a bit unbalanced/frustrating because your success really hinged on the second phase. The game has three phases, but points in phase 2 are worth double those in phase 1, and (if I remember correctly) phase 2 was similarly much more important than phase 3. Yet your performance in phase 1 would not really/necessarily affect how well you did in later phases (with unimportant exceptions such as recurring jokes/figuring out what the audience likes).

I recognize that "*technically*" you may be able to represent such situations game-tree-theoretically by encoding every possible permutation as a timeline, but I would argue that doing so loses much of what the concept of hingeyness (if not also some game-theoretic models) ought to capture: that some decisions' availability and significance are relatively independent of other decisions. My choice at time "late lunch today" between eating a sandwich and a bowl of soup could technically be put on the same decision tree as my choice at time "(a few months from now)" between applying to grad school and applying to an internship, but I feel the latter time should be recognized as more "Hingey."

Edit 1: I do think you begin to get at this issue/idea in point 3, about decreases in range; I just still take issue with statements like "Older decisions are hingier." If you were only posing that as a claim to challenge/test (and concluded that it was incorrect, i.e., that we shouldn't define hingeyness that way), then I may have just misinterpreted it as a claim or conceptualization of hingeyness that you were arguing for.