Posts

The TUILS Framework for Improving Pro-Con Analysis 2021-04-08T01:37:29.756Z
Harrison D's Shortform 2020-12-05T04:10:16.021Z
[Outdated] Introducing the Stock Issues Framework: The INT Framework's Cousin and an "Advanced" Cost-Benefit Analysis Framework 2020-10-03T07:18:54.045Z

Comments

Comment by Harrison D on On Mike Berkowitz's 80k Podcast · 2021-04-22T01:53:24.935Z · EA · GW

  1. The “watering down” comment was really referring to the idea of expanding the “preference axis” assumption to include more than just policy, to the extent that MVT changes from “politicians moderate their policies toward the center of policy axes” (which would be a perhaps unintuitive claim that doesn’t need explicit reference to “MVT”) to “politicians appeal to the majority of the voting public” (which is almost “no-duh,” except that it irons over potential wrinkles, like “someone who is very far right/left won’t even bother voting unless one of the candidates moves far enough toward them, rather than spending their time going out to vote for the ‘less bad’ candidate in an election they might deep down recognize they won’t actually have any impact on...”). Ultimately, I think the question of whether Berkowitz should have discussed MVT by name is less important than the question of MVT’s validity, but I’m not in an epistemic position to get deeper into the weeds on that. 😶
  2. I still don’t see that as a true “silver bullet”; I imagine Berkowitz might consider it one of the potential positive reforms, though.

Comment by Harrison D on On Mike Berkowitz's 80k Podcast · 2021-04-21T20:42:09.647Z · EA · GW

Thanks for the responses. To go through the points you mention:

  1. I’m just not that convinced that the MVT is akin to the gas price situation you described, in that I don’t see it as that explanatory/fundamental/crucial to mention (in combination with the following remarks). Importantly, as part of this I’ll say that it seems like you’re watering down the MVT to increasingly become “politicians try to appeal to the majority of people,” which is arguably far more intuitive and thus less necessary to cover in name/detail. As I understood it, the MVT is meant more to explain why politicians converge to more-moderate policy preferences in order to win over the “median” voter (see the toy sketch after this list). So if you’re just going to say (e.g.) that “candidates were less inclined to be anti-Trump because a majority of people wanted to vote against the anti-Trump candidates,” you don’t need to mention the phrase “Median voter theorem” any more than you actually have to mention “supply and demand **curves/graphs**” (as opposed to just “supply and demand”).
  2. I’m still not convinced there is a silver bullet, and my base rate for “situations where there is actually a silver bullet despite the dismissal by people who are more experienced than me, yet it’s just not getting used” is really low. Thus, I’m inclined to side with Berkowitz on this—including his observation that there are definitely multiple helpful reforms that could be taken.
  3. If the situation is as you described it, I definitely think that’s a fair concern—and it’s something that I’ve seen too many pundits do.
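
To illustrate the MVT's core claim as I understand it, here is a minimal toy sketch of one-dimensional spatial voting; the positions and numbers are entirely made up for illustration:

```python
import statistics

def winner(c1, c2, voters):
    """Each voter backs whichever candidate sits closer to their own position;
    the candidate with a majority of voters wins."""
    votes_c1 = sum(abs(v - c1) < abs(v - c2) for v in voters)
    return "candidate 1" if votes_c1 > len(voters) / 2 else "candidate 2"

voters = [-0.9, -0.4, -0.1, 0.2, 0.8]      # ideal points on a left-right axis
median_voter = statistics.median(voters)   # -0.1
print(winner(median_voter, 0.6, voters))   # candidate 1: hugging the median wins
```

Under the theorem's (strong) assumptions, no position beats the one at the median voter, which is the convergence pressure referred to above.
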
Comment by Harrison D on If Bill Gates believes all lives are equal, why is he impeding vaccine distribution? · 2021-04-21T20:11:28.980Z · EA · GW

Although I think it’s unfortunate this comment is so downvoted, I’m not surprised to be honest. As a rhetorical matter more generally, I would recommend two major things:

  1. “Narrow the sale” (and soften the language): perhaps the post and this comment aren’t extremely expansive, but I do think they try challenging too many orthodox beliefs at once and/or otherwise have unnecessary baggage. For example, see the title itself which, as I argued, seems to beg the question. See also the language in the quote: “This struck me as a catastrophic move, turning a vaccine developed by a nonprofit institution into a way to make a company lots of money, with no clear upside.” Instead, you should be very clear up front about the 2-3 main claims you are arguing for; in this case, I think the main points are things like “we ought to develop an alternative market mechanism for incentivizing R&D and distribution of vaccines.”
  2. Engage more deeply in market-mechanism reasoning or within “market” frameworks, including by agreeing where you need to agree / indicating more clearly that, for example, you understand that patents are better than nothing. For example, if I were in your position I would probably have led with something like “I am generally in favor of typical market solutions, but in the case of IP regulations and the market for vaccine R&D/production there are specific points of market failure that patents inefficiently address: X, Y, Z. This is why I think one of two alternatives would be better: A1; A2. Because...” **This helps to make your propositions clearer and more familiar to the primary audience you are trying to persuade: people who would support patents because of market dynamics.** This also ties into my previous point about avoiding charged language and a wide array of potential weak spots.
Comment by Harrison D on If Bill Gates believes all lives are equal, why is he impeding vaccine distribution? · 2021-04-21T19:36:07.662Z · EA · GW

The Manhattan Project, Apollo Program, and USPS all illustrate that the government can sometimes fill a role when given enough money/resources to solve a problem, but they aren’t widely acknowledged examples of efficiency—in fact, the USPS is often criticized as a prime example of government inefficiency. As for the first two, they could be outliers given their nature as technical endeavors under wartime/national-security pressures. I’ll leave the latter half of your comment to the responses that others have already made.

Comment by Harrison D on On Mike Berkowitz's 80k Podcast · 2021-04-21T03:37:52.836Z · EA · GW

I haven't had a chance to listen to the podcast yet, but I'll give a couple first-thought responses.

(0) I don't know why (or even whether) Berkowitz doesn't address the MVT, but my impression is that the assumptions baked into the MVT are out of touch with reality; consider for example that people may rationally vote "irrationally" (see e.g., https://en.wikipedia.org/wiki/The_Myth_of_the_Rational_Voter), since national democracy does not have reliable feedback mechanisms for making good choices with one's vote. Of course, that's not to say that MVT is totally wrong; it could be decently "right for the wrong reasons," but I've increasingly heard people argue that the MVT is losing accuracy as politics becomes increasingly detached from policy effectiveness. Also, just casually glancing over the transcript, it seems like Berkowitz indirectly touches on the notion by emphasizing how party primaries encourage selecting more-partisan candidates ("they really talk about the problem of partisan primaries"), although perhaps he should have given it some more-explicit (i.e., by-name) discussion.

(2) This analysis feels really shallow, and I'm not sure it's fair to Berkowitz based on what I'm seeing in the transcript. In fact, looking at a few quotes, it even feels a bit misrepresentative of what Berkowitz said. From the transcript: "So I do think political reforms are important here. I’m a little agnostic personally about which ones, we could talk more about this, but I do think some reforms to the system are really key to get at the structures there as well." Further, Berkowitz gives counterexamples where he claims parliamentary democracies have led to "populism", including the UK and Australia; you can't just say "well, here are a few examples of where parliamentary democracy has worked very well; it clearly has to do with their structure." (In the case of Japan, I'd immediately suspect there are massive confounding variables, including having one of the most homogeneous populations in the world as well as having been highly economically successful.)

(3) I haven't fully read/analyzed this section of Berkowitz's talk, but I think that once again this is a somewhat shallow/hasty dismissal: "turnout wasn't high in Trump 1.0, so Trump can't be responsible for higher turnout in Trump 2.0" is not a knock-down argument in itself. It's entirely possible that Trump was so polarizing after 4 additional years of hogging the limelight that he had an upward effect on turnout. I do agree it's probably not the most persuasive argument--but I don't think you ought to be so confident in your own analysis, either. Not having done serious research on the matter, I'd still guess that the expansion of mail-in/absentee voting also had a sizable upward effect. In the end, all three of our explanations could be right. Perhaps Berkowitz should have mentioned your point; perhaps it wasn't that crucial.

In summary, I'd say it's easy to pick apart any political pundit's analysis; I imagine when I listen to this podcast I'll have a number of criticisms. However, I think it's also important to apply similar scrutiny to our own criticisms.

Comment by Harrison D on On Mike Berkowitz's 80k Podcast · 2021-04-21T02:48:01.187Z · EA · GW

Quick, minor note: I'd recommend linking to the podcast you're referring to for ease of access.

Comment by Harrison D on If Bill Gates believes all lives are equal, why is he impeding vaccine distribution? · 2021-04-21T02:42:33.719Z · EA · GW

For quite some time now (even predating Covid), I've suspected that in many situations patents are just an inferior/stopgap market tool, but that a more-nuanced prize system such as the one you link to would require a trustworthy, competent, and (re)trained bureaucracy. It's outside of my wheelhouse so I haven't really actively pursued it, but I personally would be interested to see more research/discussion on the subject.

That being said, I do take some issue with the title of this post, which appears to beg/load the question: one could argue that Bill Gates's decision could aid vaccine distribution by providing more market incentives for production and fast distribution. I'm definitely not familiar with this, but I suspect it's not as though the vaccine information is going to be withheld from the world while pharma companies just price gouge their way through the Global South; I think it's more likely the vaccines will be provided to developing countries via foreign aid and/or other mechanisms at far lower cost than what was charged in wealthy countries. In the end, there are going to be unavoidable costs associated with production and distribution; the question is whether those are covered via foreign aid directly or funded more indirectly by the profits gained in developing countries.

Comment by Harrison D on To Build a Better Ballot: an interactive guide to alternative voting systems · 2021-04-18T15:30:40.043Z · EA · GW

If you haven’t wandered around the Nicky Case website, I’d recommend doing so. There are a lot of interesting educational games on there, covering a wide variety of concepts such as social contagion, prisoner’s dilemmas, segregation, etc.

Comment by Harrison D on Harrison D's Shortform · 2021-04-18T15:22:29.440Z · EA · GW

Do you just mean this shortform or the full post once I finish it? Either way, I’d say feel free to post it! I’d love to get feedback on the idea.

Comment by Harrison D on Against opposing SJ activism/cancellations · 2021-04-17T21:27:09.285Z · EA · GW

I may have just missed this in the comments below, but FWIW: on top of all the other points that have been made in opposition to this stance, I would also assign very low credence to the implied claim that "if we don't [do things that oppose cancel culture], then we'll be able to avoid getting canceled during the 'cultural revolution'." I suspect that if this "cultural revolution" (which I already consider implausible) were nearly as bad as you suggest, EA as a movement would get targeted regardless (especially if the whole movement would be held collectively guilty for a subset of the movement speaking out about something), and thus refraining from opposition would have an even smaller fractional expected value. To clarify further: the witch analogy you use is potentially misleading, because with witch hunts the scope is at least ostensibly limited to the instances of "witches." This could of course be expanded to include "witch sympathizers," but it's at least more plausible that by avoiding getting involved one can continue one's abolition work. If, however, the witch hunt were to grow into an entire philosophy that says "anyone not primarily concerned with finding witches and burning them will be treated as a witch sympathizer," then you face the lose-lose situation (darned if you do, darned if you don't).

Comment by Harrison D on Harrison D's Shortform · 2021-04-17T02:56:29.210Z · EA · GW

EA (forum/community) and Kialo?

TL;DR: I’m curious why there is so little mention of Kialo as a potential tool for hashing out disagreements in the EA forum/community, when I think it would be at least worth experimenting with. I’m considering writing a post on this topic, but want to get initial thoughts first (e.g., have people already considered it and decided it wouldn’t be effective? initial impressions/concerns? better alternatives to Kialo?).

The forum and broader EA community have lots of competing ideas and even some direct disagreements. Will Bradshaw's recent comment about discussing cancel culture on the EA forum is just the latest example of this that I’ve seen. I’ve often felt that a platform like Kialo would be a much more efficient way of recording these disagreements, since, among other benefits, it helps to separate out individual points of contention and allows for deep back-and-forth. However, when I search for “Kialo” in the search bar on the forum, I only find a few minor comments mentioning it (as opposed to posts), and they are all at least 2 years old. I think I once saw a LessWrong post downplaying the platform, but I was wondering if people here have developed similar impressions.

More to the point, I was curious whether anyone had any initial thoughts on whether it would be worthwhile to write an article introducing Kialo and highlighting how it could be used to help hash out disagreements here/in the community. If so, do you have any initial objections/concerns that I should address? Do you know of any alternatives that would be better options (keeping in mind that one of the major benefits of Kialo is its accessibility)?

Comment by Harrison D on EA Debate Championship & Lecture Series · 2021-04-10T18:42:07.836Z · EA · GW

"The problem is that it's evidence that the system at large has very little defenses against goodharting and runaway competition effects." Although I acknowledge that there will always be some level of misalignment between truth-seeking and competition, I would push back on the idea that the system has little defense against drastic goodharting like is seen in both high school and collegiate policy debate: the experience of Stoa (the league in which I debated) and NCFCA are evidence of that. In my view and in the view of some others (see e.g., https://www.ethosdebate.com/community-judges-1-necessity-community-judges/), it seems that one of the important front-line defenses against gamification of debate is the use of community judges who recoil at nonsense and speed. Of course, that introduces tradeoffs that debaters (myself included) sometimes huff about, such as biased decisions, but it still seems worth it. Additionally, I feel fairly confident that there are other important factors that explain the stark cultural differences between Stoa/NCFCA and most public-school/collegiate leagues (e.g., the debaters' personalities/background, parental involvement, the Christian ethos, the observation of and opportunity for self-differentiation from public-school/collegiate practices).

To address your broader point about the truth-seeking vs. competition drive (goodharting): I and many others in my league have considered this question. (For a brief example article from someone I know, see https://www.ethosdebate.com/art-persuasion-vs-pursuit-truth/.) I could be wrong/exaggerating, but I get the sense that you think debate should be really strict about promoting truth-seeking above other things--perhaps even to the extent that debate should almost never sacrifice truth-seeking for other goals. Perhaps that is not what you are saying, but regardless, I would push back and emphasize that debate has a wide variety of purposes, crucially including skills education in general (as opposed to topical education). (I actually recently finished a blog series which I started by outlining some of the major purposes of debate: see https://www.ethosdebate.com/purposes-of-debate-pt-1-the-goals-and-anti-goals-of-debate/.) In short, I think the experience of Stoa/NCFCA shows that with reasonable safeguards (e.g., including community judges in the judging pool), debate can be at least neutral if not net positive in promoting truth-seeking, while at the same time being a great way to get youth excited about studying topics, scrutinizing their own views, and learning to persuade others. That last part applies to that NITOC final (regarding seatbelt policy), which focused on a case known for being somewhat pathos-heavy (as opposed to, for example, the case for cutting funding for air marshals, which I and many other debaters would likely never have come to see if it were not subject to the adversarial scrutiny of a competitive season of debate). Debate shouldn't be solely about truth-seeking; teaching persuasion skills is also really important, because if you have the truth but cannot persuade others, then your ability to act on it is sorely limited.

Also: "people repeatedly abusing terrible studies because you can basically never challenge the validity or methodology of a study" -- my experience in Stoa was fairly different: I repeatedly had to defend the methodology of some of the studies I relied on, and was able to challenge the methodology of sources.

Comment by Harrison D on EA Debate Championship & Lecture Series · 2021-04-10T02:13:45.773Z · EA · GW

Just to clarify my position:

  • I think that the culture of British Parliamentary has made it significantly less game-y and more civil than most if not all other formats of collegiate debate, including both prepared formats (e.g., Policy Debate, Lincoln-Douglas, Public Forum) and limited preparation formats (e.g., other forms of parli such as American Parliamentary).
  • I think that the limited topic prep nature of parliamentary debate makes those formats significantly less game-y and more civil than most other formats of collegiate debate.
  • My main issue with BP is really just two individual characteristics of the format that represent stark differences from the format I did in high school (American Parli, in the Stoa league): the 4-teams-of-2 structure (instead of 2v2) combined with the lack of access to published sources on the internet during prep. Really, most if not all of the main issues I highlighted in my comment relate to the first thing, which I think is more fundamental. So ultimately, I'm not trying to compare BP to policy debate, nor am I trying to compare it to the actual (culturally-driven) practice of American Parli in collegiate leagues (which I'm not as familiar with); I'm really just comparing it to what I think an ideal format would be, given a decent culture that isn't so accepting of gamification.
Comment by Harrison D on EA Debate Championship & Lecture Series · 2021-04-10T01:41:28.688Z · EA · GW

It's unfortunate to hear that you had such a negative experience with debate. As someone who has judged public-school high school policy and public forum debate, I will say that I am not that surprised. That being said, I do take issue with your characterization of debate in general/BP via the video plus the statement "I do think British Parliamentary Debate style is a bit less broken than this, but like, not that much."

I cannot speak to every BP league/competition in the world, but I have never seen nor heard of such drastic gamification of debate in BP--or anything that even comes close to it--in the four years that I did BP in college. In fact, I have often seen people hold up BP and collegiate policy debate as polar opposites, with BP being one of the least toxic/gamified formats (at least among the major formats) and policy debate being the most. (BP definitely has some problems with left-leaning judge bias, but it could be a lot worse, and that's not really unique to BP.) Ultimately, I don't want to be rude/abrasive, but I feel that the video gives a deeply misleading picture of BP, even if it is only a mild-moderate exaggeration of collegiate policy debate. It's unfortunate that many people who are unfamiliar with debate (let alone BP specifically) may come away with such a misleading picture of BP/debate in general based on this extreme example of a different format. I'm not sure how to say this in a non-confrontational way, but I personally think that some kind of revision/redaction (e.g., a disclaimer saying that the video is not of BP, an acknowledgement that BP is different) may be in order.

I will just add the following video to illustrate that the gamifying of debate (e.g., speed and spread tactics) is not so inherent to the sport or even to specific formats themselves, but rather is heavily determined by league culture (e.g., what kinds of judges are used, how debaters are taught to debate): https://www.youtube.com/watch?v=TvhNvumnZ1U&t=23s (Although, do excuse the pathos-heavy story at the beginning, and remember these are just high school students.)

Comment by Harrison D on EA Debate Championship & Lecture Series · 2021-04-10T01:04:08.157Z · EA · GW

Sounds good! Like I said, I do recognize that choosing BP probably has quite a few advantages on the (meta?) level in the sense that it seemingly has a more-global audience and topic scope, perhaps a better competitive culture, etc. (Update/clarification: I would say that all of the "flaws" with BP's format are minor in comparison with the advantages from the BP league culture, which crucially does not have the ridiculous speed and spread from policy debate, as exhibited in the video from Habryka.) If I ever get around to finishing that article about its downsides I might share a link to it here...

Comment by Harrison D on Getting a feel for changes of karma and controversy in the EA Forum over time · 2021-04-07T22:46:37.876Z · EA · GW

First of all, thanks for posting this; I think it's interesting to see some analysis on a topic I actually thought about just yesterday when I looked at an old post. I don't know if you already tried looking at this and/or whether it is even possible, but I think an interesting metric would be something like "number of people who upvoted (or downvoted?) divided by number of unique people who have viewed the article." I doubt that would perfectly fix the "old posts' votes are underrepresented" problem (if, for example, there are any kinds of chronological snobbery or "old news = boring news" biases).
Is it possible to see how many unique users have viewed an article?

Comment by Harrison D on Meta-EA Needs Models · 2021-04-06T21:57:46.993Z · EA · GW

Sometimes, questions are too difficult to answer directly. However, if you’re unable to answer a question, then a sign that you’ve understood the question is your ability to break it down into concrete subquestions that can be answered, each of which is easier to answer than the original top-level question. If you can’t do this, then you’re just thinking in circles.

I am actually working on a post that provides an adaptable framework for decision-making which tries to do this. That being said, I naturally make no guarantees that it will be a panacea (and in fact if there are any meta-EA-specific models being used, I would assume that the framework I'm presenting will be less well tailored to meta-EA specifically).

Comment by Harrison D on EA Debate Championship & Lecture Series · 2021-04-05T18:56:22.462Z · EA · GW

As someone who did debate in high school and throughout college, I am really excited to see this + I think it makes a lot of sense. As you noted, debate often involves evaluating choices in more-neutral ways, seeing both sides of arguments, etc. I'd love to hear more about how this project/idea develops.

The only thing I would note is my moderate dislike for the British Parliamentary (BP) format. Of course, I recognize that it may not be feasible to choose a different format and/or that there may be other justifications for using it (e.g., having more people per round, the league's culture is not as wacky/out-of-touch as some other leagues', a greater breadth of perspectives in each round). 
Still, in my experience/analysis, BP's 4-teams-of-2 format (instead of the traditional "one team vs. one team" format), wherein teams that are ostensibly supposed to be working together to support their side of the motion are actually partially pitted against each other to get a higher rank in the round, leads to numerous problems that undermine the educational value of the round:

  • Knifing* (where one of the "back half" teams undercuts something that the "opening" team on their own side said);
  • Abandoning (where one of the back-half teams lets the other side strawman or otherwise unfairly attack their opening team's arguments);
  • The fact that closing government (the back-half team for the motion) can really suffer if opening government sets up the round poorly (e.g., when opening government uses really bad definitions);
  • The fact that closing teams are often incentivized to focus on "new" arguments rather than the "good" arguments (since those will usually already have been taken by the opening teams); etc.
(Honestly, this is just a few of the highlights: for a few months off-and-on I've been outlining a blog article on why I dislike certain aspects of BP. Who knows, maybe I'll finish it sometime this month?)

*Although hard knifing is rarely an effective strategy (usually, judges aren't blind to what's going on and they'll punish the knifer if it was bad/uncalled for), it's maddening how effective soft abandonments are (e.g., only giving half responses then saying something like "we want to focus on new matter on back half").

Comment by Harrison D on Mundane trouble with EV / utility · 2021-04-05T18:19:32.943Z · EA · GW

Actually, I think it's worth being a bit more careful about treating low-likelihood outcomes as irrelevant simply because you aren't able to attempt to get that outcome more often: your intuition might be right, but you would likely be wrong to then conclude that "expected utility/value theory is bunk." Rather than throw out EV, you should figure out whether your intuition is recognizing something that your EV model is ignoring, and if so, figure out what that is. I listed a few example points above; to give another illustration:
Suppose you have a case where you have the chance to push button X or button Y once: if you push button X, there is a 1/10,000 chance that you will save 10,000,000 people from certain death (but a 9,999/10,000 chance that they will all still die); if you push button Y, there is a 100% chance that 1 person will be saved (but 9,999,999 people will die). There are definitely some selfish reasons to choose button Y (e.g., you won't feel guilty as you would if you pressed button X and everyone still died), and there may also be some aspect of non-linearity in the impact of how many people are dying (refer back to (1) in my original answer). However, if we assume away those other details (e.g., you won't feel guilty, the deaths-to-utility-loss relationship is roughly linear)--if we just assume the situation is "press button X for a 1/10,000 chance of 10,000,000 utils; press button Y for a 100% chance of 1 util"--the answer is perhaps counterintuitive but still reasonable: without a crystal ball that perfectly tells the future, the optimal strategy is to press button X.
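
To spell out the arithmetic behind that conclusion (a minimal sketch; the utility numbers are just the hypothetical ones above):

```python
# Hypothetical utilities from the button example above.
p_x, u_x = 1 / 10_000, 10_000_000   # button X: tiny chance of a huge payoff
p_y, u_y = 1.0, 1                   # button Y: certain but tiny payoff

ev_x = p_x * u_x   # 1,000 utils
ev_y = p_y * u_y   # 1 util
print(ev_x, ev_y)  # 1000.0 1.0 -> X is optimal despite a 99.99% chance of failure
```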

Comment by Harrison D on Spears & Budolfson, 'Repugnant conclusions' · 2021-04-04T21:08:35.369Z · EA · GW

It seems I'm not able to access it (without paying), if you're referring to https://link.springer.com/article/10.1007/s00355-021-01321-2

Comment by Harrison D on "Hinge of History" Refuted (April Fools' Day) · 2021-04-04T03:47:29.694Z · EA · GW

Disappointed to see that someone decided this was a joke and changed the article title :/

Comment by Harrison D on Mundane trouble with EV / utility · 2021-04-03T19:48:30.272Z · EA · GW

I won't try to answer your three numbered points since they are more than a bit outside my wheelhouse + other people have already started to address them, but I will mention a few things about your preface to that (e.g., Pascal's mugging).
I was a bit surprised to not see a mention of the so-called Petersburg Paradox, since that posed the most longstanding challenge to my understanding of expected value. The major takeaways I've had for dealing with both the Petersburg Paradox and Pascal's mugging (more specifically, "why is it that this supposedly accurate decision theory rule seems to lead me to make a clearly bad decision?") are somewhat-interrelated and are as follows:
1. Non-linear valuation/utility: money should not be assumed to translate linearly to utility; as your numerical winnings reach massive numbers, you typically see massive drops in marginal utility. This by itself should mostly address the issue with the lottery choice you mentioned: the "expected payoff/winnings" (in currency terms) is almost meaningless because it totally fails to reflect the expected value, which is probably minuscule/negative, since getting $100 trillion likely does not make you that much happier than getting $1 trillion (for numerical illustration, let's suppose 1000 utils vs. 995u), which itself is likely only slightly better than winning $100 billion (say, 950u), and so on; whereas it costs you 40 years if you don't win (let's suppose that's like -100u).
2. Bounded bankrolling: with things like the Petersburg Paradox, my understanding is that the longer you play, the higher your average payoff tends to be. However, that average might still be -$99 by the time you go bankrupt and literally starve to death, after which point you can no longer play (see the simulation sketch after this list).
3. Bounded payoff: in reality, you would expect payoffs to be limited to some reasonable, finite amount. If we suppose that they are for whatever reason not limited, then that essentially "breaks the barrier" for other outcomes, which brings us to the next point:
4. Countervailing cases: This is really crucial for bringing things together, yet I feel like it is consistently underappreciated. Take for example classic Pascal's mugging-type situations, like "A strange-looking man in a suit walks up to you and says that he will warp up to his spaceship and detonate a super-mega nuke that will eradicate all life on earth if and only if you do not give him $50 (which you have in your wallet), but he will give you $3^^^3 tomorrow if and only if you give him $50." You could technically/formally suppose the chance he is being honest is nonzero (e.g., 0.0000000001%) but still abide by rational expectation theory if you suppose that there are indistinguishably likely cases that produce the opposite expected value--for example, the possibility that he is telling you the exact opposite of what he will do if you give him the money (for comparison, see the philosopher-God response to Pascal's wager), or the possibility that the "true" mega-punisher/rewarder is actually just a block down the street, and if you give your money to this random lunatic you won't have the $50 to give to the true one (for comparison, see the "other religions" response to the narrow/Christianity-specific Pascal's wager). Ultimately, this is the concept of fighting (imaginary) fire with (imaginary) fire; it occasionally shows up in realms like competitive policy debate (where people make absurd arguments about how some random policy may lead to extinction), and it is a major reason why I have a probability-trimming heuristic for these kinds of situations/hypotheticals.
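
As a rough illustration of point 2 (a minimal simulation sketch; the game is the standard St. Petersburg setup, not anything from the original question):

```python
import random

def petersburg_payoff():
    """One play: the pot starts at $2 and doubles for each heads before the first tails."""
    pot = 2
    while random.random() < 0.5:   # heads -> double the pot and flip again
        pot *= 2
    return pot

# The theoretical expected value is infinite, but sample averages grow only
# roughly logarithmically in the number of plays -- so a player paying a fixed
# entry fee out of a finite bankroll usually goes broke long before "the long run."
for n in (100, 10_000, 1_000_000):
    avg = sum(petersburg_payoff() for _ in range(n)) / n
    print(f"average payoff over {n:>9,} plays: ${avg:.2f}")
```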

Comment by Harrison D on Any EAs familiar with Partha Dasgupta's work? · 2021-04-01T22:00:42.374Z · EA · GW

I think there may be a bit of a disconnect between what I meant and how it was received, perhaps magnified by the fact that I was only giving my skim-derived impressions. First, I fully agree with jackmalde's point that GDP isn't a perfect measure, and, partially reflecting a comment from your second paragraph, I presume that a lot of economists recognize that measures like GDP are not perfect (in fact, at least 2 if not all 3 of the econ professors I've had have explicitly said something along those lines).
Second, based on the first paragraph of the Cambridge article ("Nature is a “blind spot” in economics") it seemed like the implication was that 1) economists have massively ignored this, and 2) adding consideration of "nature" would be model-shattering. When the claim is simply "nature is a factor" (among multiple others), I think that's probably reasonable.
Third, I should clarify what I mean about my skepticism: I am not the slightest bit skeptical that economic models could be improved in general. However, by default I am skeptical towards any specific claim of widespread blindness among economists, because I think that most of these claims will be wrong -- i.e., I have a low outside view/base rate for each specific claim, especially with regards to the questions I mentioned in my original answer/comment. 
Building on that, I don't want to over-articulate my thought process since it was largely just my initial, informal thoughts, but: There may be good evidence to back up Partha's claim, it just seems like something that falls within a category of "Things that, if true, would be much more widely recognized [by economists] / would not have to be presented as some major 'blind spot.'" I don't claim that this heuristic is good for someone whose work/research relates to this (i.e., those kinds of people should do more research than initial impressions), but as someone who is not in economics I think it's more effective to have that kind of skepticism as opposed to treating every economic idea of the day/hour as equally legitimate.
Lastly, I'll admit that I may have been judging it a bit too hastily as a result of its similarity to some of the discourse I've seen from nature-as-an-inherent-value environmentalists. If he is trying to put forward a way to measure the (extrinsic) impact of ecosystems on human wellbeing in a way not captured by other standards of wellbeing (e.g., pollution's effects on health indicators, timber's and fish stocks' ability to provide consumption value, insect pollination's effects on agricultural productivity), that might be interesting; it's just that a lot of the initial examples presented felt like they could have been examples of double counting (see previous parenthetical). This is an important point that helps tie together some of the previous issues.

Comment by Harrison D on "Hinge of History" Refuted (April Fools' Day) · 2021-04-01T21:00:18.785Z · EA · GW

Sometimes I can understand arch-primitivists' argument for returning to the period of open doors supported by nothing but classic material physics and gravity, but every time I look at a modern door hinge I am reminded that it is one of the few things that decently rhymes with orange.

Comment by Harrison D on Any EAs familiar with Partha Dasgupta's work? · 2021-03-31T03:15:55.389Z · EA · GW

Very light, initial impression:
The EA community is at least somewhat intellectually diverse, and on this particular topic I think there are probably some people in the EA community who may be quite sympathetic to the idea. I'll add the important caveat, though, that I merely skimmed the abstracts/introductions for those links, so I don't know exactly what all he argues for. If he is simply saying "nature is an important factor in health, economic inputs/resources, leisure, etc." then that does not sound so model-shattering. Still, I am a bit skeptical of any kind of "Here's this one thing [especially something associated with lots of sentiment/political buzz, like "nature"] that economists have inexplicably left out of their models, and it changes everything"--e.g., skeptical of its significance in general, skeptical that economists have truly left it out of their models if it is significant, skeptical that there isn't a valid reason to leave it out of their models if they have been doing that and it is significant, and so on.

Comment by Harrison D on Julia Galef and Matt Yglesias on bioethics and "ethics expertise" · 2021-03-30T03:38:10.369Z · EA · GW

Dang, now I am really interested in listening to that podcast.

Comment by Harrison D on Is laziness immoral? · 2021-03-30T03:18:56.012Z · EA · GW

A couple of likely-imperfectly-organized thoughts:
I have also gone through relatively similar experiences where I felt that if I knew what was optimal, it was wrong for me not to do it. However, I want to really echo what some of the other people said in the answers: I came to recognize that as humans, we face a kind of is-ought fallacy with our psychology/physiology. We can think "I shouldn't get tired of doing good, I ought not to tire of self-sacrifice, etc.," but we can't perfectly control our bodies'/brains' chemical and electrical processes: at some point, if you push yourself too hard physically or berate yourself too much psychologically/mentally, you will likely break/suffer under stress (which will undermine your ability to do good, and/or the likelihood of your doing it, in the long run). I can't tell you what the perfect balance is, and I know I struggle with overcoming my akrasia / the sense that I should be doing more in certain ways, but I also think it's better to err on the side of not mentally snapping. You can try to push your mental limits, but you also have to be seriously and truly honest with yourself in answering questions (not just at a single point in time) like "Will I actually be able to keep up this ascetic-altruistic lifestyle for the next few years?" Personally, I can do better--and I try to--but I am confident that I can comfortably keep up the lifestyle and donation percentage I am planning for the coming years.

Comment by Harrison D on Is EA just about population growth? · 2021-01-17T18:43:57.874Z · EA · GW

Correct me if I'm wrong, but it seems you are ultimately arguing that "life" (whether measured in population, QALYs, or something else) is not the only "goal of life"? If by "goal of life" you are referring to a concept like morality/goodness/utility, then I think I would totally agree that population/QALYs are not the only relevant measures, and I imagine that a lot of other people would similarly agree.

Where people do disagree is what all else counts, and what things weigh more than others. Broadly speaking, people often refer to utility as a theoretical "all-encompassing" metric of goodness/wellbeing, oftentimes referring to the (slightly) less-theoretical concept of "happiness" (e.g., pleasure vs. pain). I must admit that I'm not deeply intellectually familiar/concerned with some of the arguments over different ways to approach/interpret utility (e.g., preferential utilitarianism vs. hedonistic utilitarianism), nor do I have a strong stance on average utilitarianism vs. aggregate utilitarianism (again due mainly to a lack of perceived importance for my decision-making to choose one over the other), but I want to highlight these as concepts/debates to further explore.

To address the specific example of "woman with a good career" vs. "having more children": first, I was a bit confused by the part that says to compare the woman having a career to "saving three lives from death"; it seems like you just meant "causing three lives to exist when they would not have," correct? (There's a big difference there, at least under average utilitarianism.) Second, one of the reasons that "maximize the population" is not intuitively/necessarily moral is that it does not account for problems from overpopulation, including increased suffering for others who already exist. Additionally, a woman with a career might be able to save more lives by donating income to effective charities, thus increasing life without directly having children.

Comment by Harrison D on Notes: Stubble Burning in India · 2021-01-15T03:38:58.922Z · EA · GW

Again, I'm not super knowledgeable about the situation and/or the proposals, but to draw on a bit of economist/libertarian thought by cross-applying concepts from other, similar situations (e.g., pollution externalities): I would be hesitant to describe many of the (likely impactful) solutions as truly "win-win." Proposals 1 and 2 clearly are subsidies that help farmers at the expense of everyone else (and proposals 3 and 5 sort of/potentially are). Yes, it may be the case that the "city folk" would benefit from less pollution, but they would have to bear a (likely heavy) portion of the tax burden to fund that--all to stop pollution which is imposing non-consensual, uncompensated harms on them in the first place. So, it might be true that if proposal 1 works at reducing pollution it's a "better-better" situation than doing nothing, but (to put it dramatically) that's vaguely akin to saying "paying off the mafia for protection is a win-win, since the mafia makes money and they don't smash stores."

Toning it down a bit: that's not to say they are necessarily bad proposals, or that such solutions (even when less than ideal) are not the best politically feasible options. But I am slightly curious to see more evidence about the market dynamics of the situation: if political feasibility were not a limitation, what would be the optimal response? Starting with the simple econ 101/102 approach: if stubble burning is really so bad for the farmers, it raises the question of why they don't just cease the practice on their own. It seems the obvious answer is "because those benefits are still less than the cost of not burning"; as a matter of 101/102-level (i.e., simplistic) economics, the response to that should be "raise prices and either compensate people for the damage you impose on them through pollution or stop polluting." This, I think, is where market failures probably step onto the stage to wrinkle things... but I only have a narrow slice of experience with ag policy, and it isn't this topic, so I'll just leave my rambling at that.

Comment by Harrison D on Notes: Stubble Burning in India · 2021-01-15T01:19:11.380Z · EA · GW

I know almost nothing about this (aside from having heard programs mention the problem, and now having read this post and a few sources on it), but I'll just semi-casually remark that a major/central problem here seems to be that a negative externality is not priced into the market; if it were priced in via taxation (perhaps to partially fund a rebate similar to the first proposal mentioned), the problem would be largely fixed--but political resistance prevents such a standard economic policy response, to the net detriment of the country/people. Thus, a lot of the solutions seem to be about finding ways around that political resistance, even if not so explicitly.
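
To make the tax-and-rebate logic concrete (a toy sketch; every number is made up purely for illustration):

```python
# Hypothetical per-farm figures, in arbitrary currency units.
benefit_to_farmer = 40    # what burning saves the farmer vs. clearing stubble otherwise
harm_to_others = 100      # pollution cost imposed on everyone downwind

# Untaxed: the farmer burns (40 > 0) even though society nets 40 - 100 = -60.
# A tax equal to the external harm makes the farmer face the full social cost:
net_to_farmer_with_tax = benefit_to_farmer - harm_to_others   # -60 -> burning no longer pays
print(net_to_farmer_with_tax)
```

The politically contentious half is the tax; the proposals discussed seem to mostly resemble the rebate half on its own.
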
But I could be wrong, and that's where I wanted to ask/clarify: are there also substantial problems with enforcement/verification? Are there other market failures (e.g., a bloated labor supply of poor people who would struggle to deal with frictional unemployment and/or who are not really skilled enough to get work elsewhere)?

Ultimately, I just wonder if the most efficient solution would be primarily just "tax and (partial) rebate"; if the barrier to that is "farmers are a major voting block and thus can impose costs on others", is it not possible to just find a political compromise (e.g., are there any similar situations where non-farmers impose externalities on farmers)?

(I'm absolutely not knowledgeable on Indian politics, I'm just curious/thinking)

Comment by Harrison D on Is foreign aid effective? · 2021-01-14T01:13:29.734Z · EA · GW

In my all-knowing, expert opinion [based on having taken 1 undergraduate class on international development], I would say this seems like a fairly good review. In all seriousness, I feel it does a good job of not just saying "well, there are arguments for and against; the evidence is mixed; maybe *shrug*". I might be a bit biased since I mostly agreed with the conclusion going into this, but I do like that you go a bit deeper by talking about the challenges/pitfalls of conducting and interpreting empirical research, and that you not only highlight that dichotomies (aid bad vs. aid good) are inaccurate but also provide examples of specific lessons that can be learned/applied.

Comment by Harrison D on The funnel or the individual: Two approaches to understanding EA engagement · 2021-01-13T18:28:23.929Z · EA · GW

I think this touches on some good points, such as the "willingness to coordinate" being influenced by motivation/perceived value in coordination. I am a bit confused/unclear about what you mean by "suitable opportunity structure" and/or how it relates to action alignment; does it refer to ideas/questions like "do the opportunities/platforms/networks that are necessary for coordination exist (such as Slack, narrow-topic groups, etc.)?" (It's probably clearer in the context of the larger post/writing, I just wasn't 100% sure here.)

More broadly, does this model employ a community-centered decision approach along the lines of "1) Does the community want to coordinate; 2) Is the community able to coordinate?" I mainly ask for clarification but also because it vaguely reminded me of a simplified rational-actor-centric decision model I know/like, which basically focuses on three main factors: beliefs, values/preferences/goals, and options/capabilities. Would I be correct in thinking that "beliefs" is similar to 1b, "values" is similar to 1a, and "options" is similar to 2?

The other question/comment I had was with regard to 1c. When trying to figure out "why don't people want to coordinate," I think that's a good point to include in a shortlist of questions to ask for troubleshooting. If I were to go a bit deeper, though, and look at it on a semi-rational-actor choice level (as I like to do), I think 1c touches on / could be expressed as an alternate motive for coordination: "to what extent do people enjoy coordination for the process/journey (e.g., socializing with others, performing/affirming their values) as opposed to just the outcome/destination (i.e., success)?"--the contrast being that 1a/1b are more focused on "what is the outcome: how likely is it and how much do I value it?" Meanwhile, I think one key factor/dampener for coordination (at least on the individual-choice level) is the set of drawbacks in terms of opportunity cost, stress, financial or other resources (perhaps), etc. Thus, I was wondering whether you were planning to include such "coordination costs" as part of the model?

Comment by Harrison D on The funnel or the individual: Two approaches to understanding EA engagement · 2021-01-13T02:15:27.344Z · EA · GW

Thanks for writing this; I found it interesting (especially with the diagrams)!

I can't seem to find a comment/message I wrote some time ago (perhaps on the EA Organizers Slack), where I talked about the two-pyramid model, which emphasizes the potential disconnect between someone's beliefs and actions (and that stronger beliefs/actions tend to be less common). I wanted to bring it up so as to expand on it and apply it to the discussion here.

Maybe this is overly simplistic, but it seems that two of the most important goals/targets of community leaders/organizers tend to be "persuading and supporting people to engage in more-effective actions" and "growing the community (mainly to indirectly support the first goal)." Given that, and partially building off of / in regard to the individual model, I was wondering if you have considered some kind of model that emphasizes the shifts in an individual's different characteristics in relation to EA/the EA community?

For example, one characteristic could be "belief in/alignment with EA principles": to what extent does a person believe the research/arguments regarding cause prioritization and/or the ability to reasonably estimate impact? This could importantly be different from a characteristic like "Action alignment with EA principles": to what extent does a person actually act on EA principles (e.g., donating to effective charities, pursuing high-impact careers)? This could also be different from something like "level of engagement/interaction with the EA community," such as "to what extent do they attend events, etc.?" I make that distinction particularly because I have organized a group where some people would attend somewhat regularly largely for the social-intellectual atmosphere (and, probably, the free pizza) but did not seem to really express noteworthy changes in belief or enthusiasm for action. Additionally, one could assess a characteristic like "action to support the community": to what extent do they help recruit others, support events in financial/facility/planning/execution/etc. terms, and so on. 

It seems that all of these characteristics can be present to varying extents: there are probably some meaningful correlations between the first two (EA belief and EA action), but they may not always align. Additionally, someone may have EA beliefs and take EA action (with regards to career and philanthropic choices), but not be very involved in terms of the latter two characteristics (community engagement and community support)--and further research may find this to be related to relevant trends like value drift, etc. You might also just have some people similar to what I had: low interest in EA beliefs or actions, but some engagement. And so on.

Ultimately, I definitely can see this as being more complex/noisy, but I think it could potentially be a helpful "background/advanced tool" to have in conjunction with an easier model like the one you describe. Of course, I'm not engaged in the community organization literature (so I hope this post isn't totally duplicative or missing the point), but I would be interested to hear your thoughts!

Comment by Harrison D on The Electoral Consequences of Pandemic Failure Project · 2021-01-09T03:12:05.950Z · EA · GW

This does sound somewhat interesting; I would hope that Congress conducts some kind of post-mortem, although I imagine it would probably have a lot of political bias problems. When I read over this, I generally agreed that such a thing would be nice, but two questions/concerns came to mind which perhaps you could address: 

  1. It seems like it may be rather difficult to objectively determine the extent/effect of certain factors, given that the ratio of N (countries) to relevant control variables (e.g., cultural norms, urbanization/density, levels of government, respect for human rights, economic performance and characteristics) seems really small. I'm not saying a decent analysis/report can't be done, I just think that it will be much harder to be more confident of the findings--and to make the findings persuasive, which plays into the second concern:
  2. I'm worried this kind of organization might do some preaching to the choir, but otherwise struggle to persuade the most important target audience (i.e., people who voted for bad politicians) to actually change their opinions/beliefs, let alone their voting habits (at least in the highly-polarized United States).

As a side note (and as part of the reason why I was particularly interested in reading this), I have long wanted/dreamed of some kind of decently impartial "performance/character evaluation" organization that would rate politicians along certain metrics (e.g., do they lie a lot, do they consult experts), perhaps similar to something like accreditation (or "GiveWell but for politics: VoteWell"). (I know of various scorecard organizations/projects, but I think all the ones I've seen are narrowly focused on a policy area and/or are heavily politically biased.) The underlying reasoning would be something like "it's far more efficient to sample test the organization's analysis and then rely on their credibility when voting than it is to individually evaluate every person you are thinking of voting for." Of course, such a grand project (of the type I'm describing) seems like a total pie in the sky. :/

Comment by Harrison D on The Folly of "EAs Should" · 2021-01-08T18:09:18.584Z · EA · GW

I think it’s helpful to just put aside the “EA Budget” thread for a moment; I think what Halstead was trying to get at is the idea/argument “If you are trying to maximize the amount of good you do (e.g., from a utilitarian perspective), that will (almost) never involve (substantive) donations to your local opera house, pet shelter, ...” I think this is a pretty defensible claim. The thing is, nobody is a perfect utilitarian; trying to actually maximize good is very demanding, so a lot of people do it within limits. This might relate to the concept of leisure, stress relief, personal enjoyment, etc. which is a complicated subject: perhaps someone could make an argument that having a few local/ineffective donations like you describe is optimal in the long term because it makes you happier with your lifestyle and thus more likely to continue focusing on EA causes... etc. But “the EA (utilitarian) choice” would very rarely actually be to donate to the local opera house, etc.

Comment by Harrison D on Harrison D's Shortform · 2020-12-06T19:01:52.085Z · EA · GW

Thanks for the insight/feedback! I definitely see what you are saying on a lot of points. I’ll be working on an improved post soon that incorporates your feedback.

Comment by Harrison D on Harrison D's Shortform · 2020-12-05T04:10:16.313Z · EA · GW

A few months ago I wrote a post on a decision-analysis framework (the stock issues framework) that I adapted from a framework which is very popular/prominent in competitive high school policy debate (which uses the same name). I was surprised to not receive any feedback/comments (I was at least expecting some criticism, confusion, etc.), but in retrospect I realized that it was probably a rather lengthy/inefficient post. I also realized that I probably should have written a shortform post to get a sense of interest, some preliminary thoughts on the validity and novelty/neglectedness of the concept, and how/where people might misinterpret or challenge the concept (or otherwise want to see more clarity/justification). So, I’ll try to offer a simplified summary here in hopes to get some more insight on some of those things I mentioned (e.g., the potential value, novelty/neglectedness, validity, areas of confusion/skepticism).

The framework remarkably echoes the “importance, neglectedness, tractability” (INT) heuristic for cause area prioritization, except that the stock issues framework is specific to individual decisions and avoids some of the problems of the INT heuristic (e.g., the overgeneralized assumption of diminishing marginal returns). Basically, the stock issues framework holds that every advantage and disadvantage (“pro and con”) of a decision rests on four mutually exclusive and exhaustive concepts: inherency (which is reminiscent of “neglectedness,” but is more just “the descriptive state of affairs”), significance, feasibility, and solvency. (I explain them in more detail in my post.) 

Over time, I have informally thought of and jotted down some of the potential justifications for promoting this framework (e.g., checking against confirmation and other biases, providing common language and concept awareness in discourse, constructing concept categories so as to improve learning and application of lessons from similar cases). However, before I write a post about such justifications, I figured I would write this shortform to get some preliminary feedback, as I mentioned: I’d love to hear where you are skeptical, confused, interested, etc.! (Also, if you think the original post I made should/could be improved--such as by reducing caveats/parentheticals/specificities, making some explanation more clear, etc.--feel free to let me know!)

Comment by Harrison D on Timeline Utilitarianism · 2020-10-10T07:02:39.412Z · EA · GW

(In light/practice of advice I've read to just go ahead and comment without always trying to write something super substantive/eloquent:) I'm definitely interested in this idea and in evaluating it further, especially since I'm not sure I had ever thought about it this explicitly before. I generally just think "average of each person/entity's aggregate [over time] vs. sum aggregate of all entities," without focusing much on the distinction between an entity's aggregate over time and that same entity's average over time. Such an approach might have particular relevance under models that take a less unitary/consistent view of human consciousness. I'll have to leave this open and come back to it with a fresh/rested mind, but for now I think it's worth an upvote for making me recognize a question I may not have considered before.
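
To make the distinction concrete, here is a toy illustration (the numbers are made up) of three aggregation rules that can come apart: summing everyone's lifetime totals, averaging lifetime totals across entities, and summing each entity's time-average:

```python
timelines = {
    "A": [2, 2, 2, 2],  # lives four periods
    "B": [5],           # lives one period
}

lifetime_totals = {k: sum(v) for k, v in timelines.items()}              # A: 8, B: 5
total_sum = sum(lifetime_totals.values())                                # 13
average_of_totals = total_sum / len(timelines)                           # 6.5
sum_of_time_averages = sum(sum(v) / len(v) for v in timelines.values())  # 7.0

# Three different verdicts from the same timelines.
print(total_sum, average_of_totals, sum_of_time_averages)
```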

Comment by Harrison D on Sortition Model of Moral Uncertainty · 2020-10-06T02:40:47.948Z · EA · GW

I think you highlight some potentially good pros for this approach, though I can't say I've analyzed it thoroughly. However, quite a few of those pros seem non-unique to this particular model of moral uncertainty vs. other frameworks that acknowledge uncertainty and try to weigh the significance of the scenarios against each other. For example, such frameworks already have the pros related to "It stops a moral theory from dominating...," "it makes you less fanatical," etc. (though there are some seemingly unique "pros," such as "It has no need for intertheoretic comparisons of value").

Still, I am highly skeptical of such a model even in comparison to simply "going with whatever theory you are most confident in," because of complexity (among other things). More importantly, I think this model has a serious problem: it fails to weight the significance of the situation at hand, and thus wouldn't perform well under basic expected value tests (which you might have been getting at with your point about choosing theories with low "stake"). Suppose your credences are 50% average utilitarian, 50% total utilitarian. You are presented with a situation where choice A mildly improves average utility, such as by severely restricting some population's growth rate (imagine it's for animals)--but this is drastically bad from a total utilitarian viewpoint in comparison to choice B (do nothing / allow the population to rise). To use simple numbers, choice A = +5, -100 (utility points under "average, total") vs. choice B = 0, 0. Under the sortition model, half the time the draw selects average utilitarianism and you take choice A, which is drastically bad in expectation. This is why (to my understanding), when your educated intuition says you have the time, knowledge, etc. to do some beneficial analysis, you should try to weight and compare the significance of the situation under the different moral frameworks.
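
A rough back-of-envelope sketch of this, using the numbers above (and treating the utilities as intertheoretically comparable, which is admittedly the very thing the sortition model tries to avoid):

```python
credences = {"average": 0.5, "total": 0.5}
# Utility of each choice under each theory's own accounting.
utilities = {"A": {"average": 5, "total": -100},
             "B": {"average": 0, "total": 0}}

def sortition_ev(theory_weights, utils, judge="total"):
    # Draw a theory with probability equal to its credence, act on that
    # theory's favorite option, and score the outcome from `judge`'s view.
    ev = 0.0
    for theory, p in theory_weights.items():
        best = max(utils, key=lambda c: utils[c][theory])
        ev += p * utils[best][judge]
    return ev

def credence_weighted_choice(theory_weights, utils):
    # Score each option by its credence-weighted utility across all theories.
    return max(utils, key=lambda c: sum(p * utils[c][t]
                                        for t, p in theory_weights.items()))

print(sortition_ev(credences, utilities))             # -50.0 (half the time you pick A)
print(credence_weighted_choice(credences, utilities)) # 'B'
```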

Comment by Harrison D on Denise_Melchin's Shortform · 2020-10-03T18:54:06.957Z · EA · GW

Perhaps comments/posts should have more than just one "like or dislike" metric? For example, they could allow upvoting or downvoting in categories such as "significant/interesting," "accurate," and "novel." This need not eliminate the simple voting metric if you prefer to keep that.
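
As a hypothetical sketch of what the data might look like (the category names and scoring here are just placeholders, not a proposal for the Forum's actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class CommentVotes:
    karma: int = 0  # the existing single up/down metric, kept as-is
    categories: dict = field(default_factory=lambda: {
        "significant/interesting": 0, "accurate": 0, "novel": 0})

    def vote(self, category: str, direction: int) -> None:
        self.categories[category] += direction  # direction is +1 or -1

votes = CommentVotes()
votes.vote("accurate", +1)
```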

(People may have already discussed this somewhere else, but I figured why not comment--especially on a post that asks if we should engage more?)

Comment by Harrison D on Factors other than ITN? · 2020-10-03T08:44:32.556Z · EA · GW

I'm not sure if it directly answers your question, but this question did finally lead me to write the post about the stock issues framework (which seems to be listed in the pingbacks). I hope that is relevant to your question!

Comment by Harrison D on A Toy Model of Hingeyness · 2020-09-12T21:23:07.137Z · EA · GW

I think those changes help clarify things! I just didn't quite understand your intent with the original wording/heading. I think it is a good idea to try to highlight the potential different definitions for the concept, as well as issues with those definitions.

Comment by Harrison D on A Toy Model of Hingeyness · 2020-09-10T18:36:38.112Z · EA · GW

(Edit 2/note: the OP's edits in response to this comment render it fairly irrelevant, except as a more detailed explanation of why defining hingeyness in terms of total possible range (see "2. Older decisions are hingier?") doesn't seem to make much sense or be very useful as a concept.)

Apologies in advance if I'm misunderstanding your point; I've never analyzed "hingeyness" much, so I'm not trying to advance a theory or necessarily contest your overall argument. However, one thing you said doesn't sit well with me--namely, the part where you argue that older decisions are necessarily hingier, which is part of why you think the definition tied to the "Hinge of History" is not very helpful. I can think of lots of situations, both real and hypothetical, where a decision at time X (say, "year 1980" or "turn 1") has much less effect on both direct utility and future choices than a decision or set of decisions at time Y (say, "year 1999" or "turn 5"), in part because decision X may have (almost) no effect on choices/options much later (e.g., it affects neither which options are available nor what effects those options have).

Take as a hypothetical example a game where you are in a room with four computers, each labeled by a number (1-4). At the start of the game (point 1), only computer 1 is usable, and you can choose option 1a or option 1b. The specifics don't matter much for my argument, but suppose 1a produces +5 utility and turns on computer 2, while 1b produces +3 utility and turns on computer 3. (Suppose computers 2 and 3 offer options with utilities in the range of +1 to +10.) However, regardless of what you do at point 1--whether you choose 1a or 1b--computer 4 also turns on. This is point 2 in the game. On computer 4, option 4a produces -976,000 utility and option 4b produces +865,000 utility. Then the game ends.
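
A quick sketch of this game, using the "swing" between a point's best and worst options as one crude proxy for hingeyness (my own simplification, not necessarily the OP's definition):

```python
point1 = {"1a": 5, "1b": 3}               # computer 1 (also unlocks computer 2 or 3)
point2 = {"4a": -976_000, "4b": 865_000}  # computer 4 turns on either way

def swing(options):
    # How far apart the best and worst outcomes at this decision point are.
    return max(options.values()) - min(options.values())

print(swing(point1))  # 2
print(swing(point2))  # 1841000 -- the later decision dwarfs the earlier one
```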

This paragraph is unnecessary if you understood the hypothetical above, but for a more real-world example, I would point to (the original) Quiplash: although not as drastic as the hypothetical, my family and I would often complain that the game was a bit unbalanced/frustrating because your success really hinged on the second phase of the game. The game has three phases, but points in phase 2 are worth double those in phase 1, and (if I remember correctly) phase 2 was similarly much more important than phase 3. Your performance in phase 1 would not really/necessarily affect how well you did in later phases (with unimportant exceptions such as recurring jokes or figuring out what the audience likes).

I recognize that "*technically*" you may be able to represent such situations game-tree-theoretically by including it as a timeline with every possible permutation, but I would argue that doing so loses much of the theoretical idea(s) that the conceptualization of hingeyness (if not also some game theory models) ought to address: that some decisions' availability and significance are relatively independent of other decisions. My choices at time "late lunch today" between eating a sandwich and a bowl of soup could technically be put on the same decision tree as my choices at time "(a few months from now)" between applying to grad school or applying to an internship, but I feel that the latter time should be recognized as more "Hingey."

Edit 1: I do think you begin to get at this issue/idea when you go into point 3, about decreases in range; I just still take issue with statements like "Older decisions are hingier." If you were only posing it as a claim to challenge/test (and concluded that it was incorrect / that we shouldn't define hingeyness in that way), then I may have just misinterpreted it as a claim or conceptualization of hingeyness that you were trying to argue for.