Epistemic status: An attempt to clarify a vague concept. This should be seen as a jumping-off point, not as a definitive model.
1. Definition of Hingeyness
The Hinge of History refers to a time when we have an unusually high amount of influence over the future of civilization, compared to people who lived in the eras before and after ours.
I will use the model I made for my previous question post [EA · GW] to explain why I don't think this definition is very useful. As before, the model allows only two possible choices per year. The number inside a circle is the amount of utility that year experiences, and the two lines are the two options that year has to decide between. The amount of utility each option will add to the next year is written next to the lines. (link to image)
2. Older decisions are hingier?
I think we all agree that we should try to avoid options that lead to better results in the next year but create less utility in the long run. In this model the year with 1 utility could choose the +2 option, but it should choose the +1 option because it leads to better options the next year. Let's assume that all life dies after the last batch of years. The 1-then-3-then-0 path is the worst because you've generated only 4 utility in total. 1-3-6 is just as good as 1-2-7, but 1-2-8 is clearly the best path.
One way to interpret the definition of "the hinge of history" is to quantify it as "the range of total amount of utility you can potentially generate". Under this definition later decisions are never hingier than earlier ones. 1 gets a range of options from 4 utility to 11 utility; no other decision point gets a range that wide. In fact, it's mathematically impossible for a future decision to have a range of options larger than the previous decisions had (assuming the universe will end and isn't some kind of loop). It's also mathematically impossible for a future decision's best-case or worst-case scenario to give you more utility than the previous years' range allowed. That is, unless negative utility is possible, which arguably exists when you have a universe of beings kept alive and tortured against their will (but such scenarios are rare in any case).
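This claim can be checked mechanically. Below is a minimal sketch of the first figure's tree, reconstructed from the chains quoted above (1-3-0, 1-3-6, 1-2-7, 1-2-8); the node encoding is my own, not from the post:

```python
# Each node is (utility_that_year, [children]); a leaf ends the world.
tree = (1, [(3, [(0, []), (6, [])]),   # the +2 branch
            (2, [(7, []), (8, [])])])  # the +1 branch

def path_totals(node, acc=0):
    """Total utility of every complete chain passing through this node."""
    utility, children = node
    acc += utility
    if not children:
        return [acc]
    return [t for child in children for t in path_totals(child, acc)]

def utility_range(node, acc=0):
    """The (min, max) total utility reachable from this node."""
    totals = path_totals(node, acc)
    return min(totals), max(totals)

print(utility_range(tree))           # (4, 11): the root's range
# Each child's range sits inside its parent's, so ranges never widen:
print(utility_range(tree[1][0], 1))  # the "3" branch: (4, 10)
print(utility_range(tree[1][1], 1))  # the "2" branch: (10, 11)
```

Because a child's chains are a subset of its parent's chains, the containment holds for any tree of this shape, which is the "ranges never grow" point above.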
3. Decrease in range
Does that mean that hingeyness is now a useless concept? Not necessarily. The range will never grow, but the amount by which it narrows from year to year varies widely. Let's look at an extreme example. (link to image)
So the decisions made in 1 will always have the broadest range [204-405], but if you look at the difference between the range of 3 [203-311] and that of 4 [208-311], it's not that much. So hingeyness may still be useful for thinking about how quickly [EA(p) · GW(p)] our range is decreasing. It's even possible that the range doesn't shrink at all.
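As a quick sketch, mapping each tick quoted above to its (low, high) range, the widths show how unevenly broadness decays:

```python
# Ranges quoted above for the second figure: tick -> (low, high).
ranges = {1: (204, 405), 3: (203, 311), 4: (208, 311)}

# Width of each range: how much "room" each tick still has.
widths = {tick: hi - lo for tick, (lo, hi) in ranges.items()}
print(widths)  # {1: 201, 3: 108, 4: 103}
```

The big drop happens between ticks 1 and 3 (201 to 108); between 3 and 4 the range barely narrows at all.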
4. Going extinct quickly isn't necessarily bad
In the previous post I said that choosing paths where we survive for longer is almost always better (assuming you're a positive utilitarian and negative utility is impossible); this is an example of when that is not the case. The 1-2-402 chain gives the world the most utility even though it goes extinct one tick sooner. We (naturally) focus on reducing x-risk, but I wanted to visualize here why dying quickly in a blaze of utility might be better than fizzling on for longer with low amounts of utility (especially if negative utility is possible). It should be noted, though, that this model gives you clear ticks, which might not exist in real life. Maybe Planck time? Or maybe the time it takes to go from one state of pleasure to another, a.k.a. the time it takes to fire a neuron? Depending on how you answer that question this argument might fall flat.
5. Is hingeyness related to slack?
I'm starting to see similarities between the range of possible choices you keep open and the amount of slack [LW · GW]. I previously expressed that I see the slack/Moloch [LW · GW] trade-off as similar to the exploration/exploitation trade-off. Since we can't accurately predict which branches will give us the most utility, it might be useful to keep a broad range of options open, a.k.a. to give yourself a lot of slack. In fact, if we look at the first image, you can see that someone pursuing pure utility exploitation will go from 1 to 3 (taking the +2 instead of the +1). Since this gives you worse results later, this is basically the same thing as Moloch pushing you into an inadequate equilibrium [? · GW]. Having the slack/exploration to choose a sub-optimal route in the short run but a better route in the long run only works if you have a lot of hingeyness.
6. How probability fits in
In reality of course you get more than two options, but the principle stays the same. Instead of a range you get a probability distribution. (link to image)
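As a sketch of how that distribution arises, assuming every chain is equally likely (weight the chains otherwise), here are the chain totals from the first figure (1-3-0, 1-3-6, 1-2-7, 1-2-8):

```python
from collections import Counter

chain_totals = [4, 10, 10, 11]  # totals of the four chains in the first figure

# Each chain counts equally, so probabilities are counts over the total:
counts = Counter(chain_totals)
distribution = {total: n / len(chain_totals) for total, n in counts.items()}
print(distribution)  # {4: 0.25, 10: 0.5, 11: 0.25}
```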
The probability that you end up with a certain amount of utility is proportional to the number of chains that generate that specific amount (if you think certain chains are inherently less likely to exist, you can weight them accordingly). The range we are talking about is the difference between the lowest amount you could possibly generate and the highest. This will always either stay the same or shrink. This is not necessarily a bad thing, as we would rather face a narrow range of options between several good outcomes than a broad range of options between a lot of bad outcomes. But what about a distribution that looks like this (link to image):
This is what I think a lot of people have in mind when we talk about the hinge of history: a time in history where the decisions we make can turn out to have either very good or very bad outcomes, with very little in between. Our range may be smaller than in previous eras, but the probability that we either gain or lose lots of utility has never been higher. I won't decide what the "true definition of hingeyness" is, since language belongs to its users. I'm just pointing out that "the range of total amount of utility you can potentially generate", "how quickly that range is decreasing" and "how polarized the probability distribution is" are very different concepts, and we should probably have different labels for them. I will suggest three in the conclusion.
7. How much risk should we take?
I previously asked:
When you are looking at the potential branches in the future, should you make the choice that will lead you to the cluster of outcomes with the highest average utility or to the cluster with the highest possible utility?
I'd say the one with the highest average utility if they are all equally likely. Basically, go with the one with the highest expected value.
But what about the cluster of branches with the median amount of utility, or mode or whatever? I don't think these questions have one definitively correct answer. Instead I would argue that we should use meta-preference utilitarianism [EA · GW] to choose the options that most people want to choose.
There are three concepts that could be described as Hingeyness:
1) The range of the amount of utility you can potentially generate with your decision (maybe call it 'hinge broadness'?)
2) How much that range will narrow when you make a decision (maybe call it 'hinge reduction'?)
3) How polarized the probability is that you get either a lot or very little utility in the future (maybe call it 'hinge precipiceness [EA · GW]'?)
EDIT: I describe a fourth type of hingeyness in the comments
Having lots of "hinge broadness" is crucial for having slack. This toy model can be used to visualize all of these concepts.
I quite like this model; it seems natural to me that quantifying hingeyness in this model would come down to considering how different the probability distributions of the total utility at the end of time are. Things like how much the range contracts also seem to be decent quick approximations of this difference. There has actually been a lot of work on quantifying the difference between probability distributions; searching for "statistical distance" or "probability metrics" should give you results.
If we were to define hingeyness in this model using some notion of the distance between probability distributions it seems likely we would want this distance to have the properties of a metric. It’s not obvious to me which metric would be the best choice for this model though. The Wasserstein metric seems the easiest metric from the above link to implement to me.
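For concreteness, here is a minimal pure-Python sketch of the 1-D Wasserstein distance between two equally weighted empirical distributions (it integrates the absolute difference of the CDFs); the sample values below are hypothetical, not from the post:

```python
def wasserstein_1d(u, v):
    """1-D Wasserstein distance: integral of |CDF_u - CDF_v| over the line."""
    points = sorted(set(u) | set(v))
    total = 0.0
    for left, right in zip(points, points[1:]):
        cdf_u = sum(1 for x in u if x <= left) / len(u)
        cdf_v = sum(1 for x in v if x <= left) / len(v)
        total += abs(cdf_u - cdf_v) * (right - left)
    return total

before   = [0, 5, 10]  # hypothetical totals reachable before a decision
choice_a = [5, 10]     # totals left open by one option
choice_b = [0, 10]     # totals left open by the other option

# Under this metric, option a moves the distribution further, so the
# decision between them is "hingier" in the direction of a:
print(wasserstein_1d(before, choice_a))  # 2.5
print(wasserstein_1d(before, choice_b))  # ~1.667
```

If a library version is preferred, `scipy.stats.wasserstein_distance` computes the same quantity and also accepts per-sample weights.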
Once again I really like this model Bob. I'm pretty excited to see how this model changes with even more time to iterate. I'd never come across the formalised idea of slack before and I think it describes a lot of what was in my head when responding to your last post!
I'm wondering how you've been thinking about marginal spending in this model? I.e., if we're patient philanthropists, which choices should we spend money on and which should we save for, once we factor in that some choices are easier to affect than others? For example, one choice might be particularly hingey under any of your proposed definitions but be very hard for a philanthropist to affect; e.g. the decisions made by one person who might not have heard of EA (either as a world leader or just by being coincidentally important). We probably won't get a great payoff from spending a load of money/effort identifying that person and would prefer to avoid that path down the decision tree entirely.
I guess the thrust of the question here is how we might account for the tractability of affecting choices in this model. Once tractability is factored in, we might prefer to spend a little money affecting lots of small choices which aren't as hingey under this definition rather than spending a lot affecting one very hingey choice. If this is the case I think we'd want to redefine hingeyness to match our actual decision process.
It seems like each edge on the tree might need a probability or cost rating to fully describe real-world questions of tractability, but I'd be very interested in your or others' thoughts.
(Edit 2/note: the OP's edits in response to this comment render this comment fairly irrelevant except as a more detailed explanation for why defining hingeyness in terms of total possible range (see: "2. Older decisions are hingier?") doesn't seem to make much sense/be very useful as a concept)
Apologies in advance if I'm misunderstanding your point; I've never analyzed "hingeyness" much, and so I'm not trying to advance a theory or necessarily contest your overall argument. However, one thing you said doesn't sit well with me--namely, the part where you argue that older decisions are necessarily hingier, and that is part of why you think the definition regarding the "Hinge of History" is not very helpful. I can think of lots of situations, both real and hypothetical, where a decision at time X (say, "year 1980" or "turn 1") has much less effect on both direct utility and future choices than a decision or set of decisions at time Y (say, "year 1999" or "turn 5"), in part because decision X may have (almost) no effect on the choices/options much later (e.g., it does not affect which options are available, it does not affect what effect the options have).
Take for hypothetical example a game where you are in a room with four computers, each labeled by a number (1-4). At the start of the game (point 1), only computer 1 is usable, but you can choose option 1a or option 1b. The following specifics don't matter much for the argument I'm making, but suppose 1a produces +5 utility and turns on computer 2, and option 1b produces +3 utility and turns on computer 3. (Suppose computer 2 and computer 3 have options with utility in the ranges of +1 to +10.) However, regardless of what you do at point 1--whether you press either 1a or 1b--computer 4 also turns on. This is point 2 in the game. On computer 4, you have option 4a which produces -976,000 utility, and option 4b produces +865,000 utility. And then the game ends.
This paragraph is unnecessary if you understand the previous one, but for a more real-world example, I would point to (the original) Quiplash: although not as drastic as the hypothetical above, my family and I would often complain that the game was a bit unbalanced/frustrating due to how your performance/success really hinged on the second phase of the game. The game has three phases, but the points in phase 2 are worth double those in phase 1, and (if I remember correctly) it was similarly much more important than phase 3. Your performance in phase 1 would not really/necessarily affect how well you did in later phases (with unimportant exceptions such as recurring jokes/figuring out what the audience likes).
I recognize that "*technically*" you may be able to represent such situations game-tree-theoretically by including it as a timeline with every possible permutation, but I would argue that doing so loses much of the theoretical idea(s) that the conceptualization of hingeyness (if not also some game theory models) ought to address: that some decisions' availability and significance are relatively independent of other decisions. My choices at time "late lunch today" between eating a sandwich and a bowl of soup could technically be put on the same decision tree as my choices at time "(a few months from now)" between applying to grad school or applying to an internship, but I feel that the latter time should be recognized as more "Hingey."
Edit 1: I do think that you begin to get at this issue/idea when you go into point 3, about decreases in range; I just still take issue with statements like "older decisions are hingier." If you were just posing it as a claim to challenge/test (and concluded that it was incorrect/that it means we shouldn't define hingeyness in that way), I may have just misinterpreted it as a claim or conceptualization of hingeyness that you were trying to argue for.
The reason I find the definition not very useful is that it can be interpreted in so many different ways. The aim of this post was to show the four main ways you could interpret it. When I read the definition my first interpretation was "hinge broadness", while I suspect your interpretation was "hinge reduction". I'm not saying that hinge broadness is the 'correct' definition of hingeyness, because there is no 'correct' definition of hingeyness until a community of language users has made it a convention. There is no convention yet, so I'm purposefully splitting the concept into more quantifiable chunks in the hope that we can avoid the confusion that comes from multiple people using the same terms for different concepts. Since I failed to convey this I will slightly edit this post to clear it up for the next confused reader. I added one sentence, and tweaked another sentence and a subtitle. The old version of the post can be found on LessWrong [LW · GW].
I think those changes help clarify things! I just didn't quite understand your intent with the original wording/heading. I think it is a good idea to try to highlight the potential different definitions for the concept, as well as issues with those definitions.
Here the "range of possible utility in endings" that tick 1 has (the first 10) is [0-10], and the "range of possible utility in endings" that the first 0 has (tick 2) is also [0-10], which is the same. Of course the probability has changed (getting an ending of 1 utility is not even an option anymore), but the minimum and maximum stay the same.
But we don't care about just the endings; we care about the rest of the journey too. The width of the "range of the total amount of utility you could potentially experience over all branches (not just the endings)" can shrink or stay the same, but the range itself can shift. For example, the lowest possible utility tick 1 can experience is 10->0->0 = 10 utility, and the highest possible utility it can experience is 10->0->10 = 20 utility. The difference between the lowest and highest is 10 utility. The lowest total utility that the 0 on tick 2 can experience is 0->0 = 0 utility and the highest is 0->10 = 10 utility, which is once again a difference of 10 utility.
The probability has changed: ending with a weird number like 19 is impossible for the '0 on tick 2'. The probability of a good ending has also become much more favorable (a 50% chance to end with a 10 instead of the 25% it was before). Probability is important for precipiceness.
But while the width of the range stayed the same, the range itself has shifted downwards from [10-20] to [0-10]. Maybe this also an important factor in what some people call hingeyness? Maybe call that 'hinge shift'?
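A tiny sketch of that shift, using the chain totals quoted above (totals from tick 1 count the full journey; totals from tick 2 count only what's left):

```python
tick1_totals = [10, 20]  # 10->0->0 and 10->0->10
tick2_totals = [0, 10]   # 0->0 and 0->10

def utility_range(totals):
    return min(totals), max(totals)

r1, r2 = utility_range(tick1_totals), utility_range(tick2_totals)
print(r1, r2)                        # (10, 20) (0, 10)
print(r1[1] - r1[0], r2[1] - r2[0])  # widths stay 10 and 10...
print(r2[0] - r1[0])                 # ...but the range shifts by -10
```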
This will affect the probability that you end up in certain futures and not others. I used the word precipiceness in my post to refer to high-risk high-reward probability distributions. Maybe it's also important to have a word for a time in which the probability that we will generate low amounts of utility in the future is increasing. We call this "an increase in x-risk" now, because going extinct is most of the time a good way to ensure you will generate low amounts of utility. But as I showed in my post, you can have an awesome extinction and a horrible long existence. Maybe I shouldn't be trying to attach words to all the different variants of probability distributions and should just draw them instead.
To recap "the range of total amount of utility you can potentially generate" aka "hinge broadness" can:
1) Shrink by a certain amount (aka hinge reduction). This can happen because the most utility you can potentially generate is decreasing (I'll call this "top-reduction") or because the least utility you can potentially generate is increasing (I'll call this "bottom-reduction"). Top-reduction is bad, bottom-reduction is good.
2) Shift upward or downward in utility by a certain amount (aka hinge shift). Upward shift is good, downward shift is bad.
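This recap can be sketched as two small functions. The reduction example uses the ranges from the first figure in the post (root [4-11], children [10-11] and [4-10]); the shift example reuses the [10-20] to [0-10] ranges from earlier in this comment:

```python
def reductions(parent, child):
    """Split the narrowing of a (lo, hi) range into its two components."""
    bottom = child[0] - parent[0]  # bottom-reduction: worst case improves (good)
    top    = parent[1] - child[1]  # top-reduction: best case is cut off (bad)
    return bottom, top

def shift(parent, child):
    """Hinge shift: how far the range's midpoint moves (negative = downward)."""
    return ((child[0] + child[1]) - (parent[0] + parent[1])) / 2

print(reductions((4, 11), (10, 11)))  # (6, 0): pure bottom-reduction
print(reductions((4, 11), (4, 10)))   # (0, 1): pure top-reduction
print(shift((10, 20), (0, 10)))       # -10.0: a downward hinge shift
```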