Posts

US Policy Careers Speaker Series - Summer 2021 2021-06-18T20:01:21.338Z
[Short Version] What Helped the Voiceless? Historical Case Studies 2020-12-15T03:40:36.628Z
What Helped the Voiceless? Historical Case Studies 2020-10-11T03:38:57.632Z

Comments

Comment by Mauricio on Re. Longtermism: A response to the EA forum (part 2) · 2021-04-04T23:23:11.247Z · EA · GW

Sorry for my delay, and thank you for posting this!

I have two main doubts:

  1. Longtermism, at least as I understand it, doesn't actually depend on the credence assumption.
  2. None of the proposed alternatives to expected value maximization fill its important role: providing a criterion of rightness for decision making under uncertainty.

 

  1. Longtermism, at least as I understand it, doesn't actually depend on the credence assumption.

This piece focuses on criticizing the credence assumption. You make compelling arguments against it. But I don't care that much for the credence assumption (at least not since some of our earlier conversations)--I'm much more concerned with the claim that our willingness to make bets should follow the laws of probability. In other words, I’m much more concerned with decision making than with the nature of belief. Hopefully this doesn't look too goalpost-shifty, since I've tried to emphasize my concern with decision theory/bets since our first discussions on this topic.

You make a good case that we can just reject the credence assumption, when it comes to beliefs. But I think you'll have a much harder time rejecting an analogous assumption that comes up in the context of bets:

  • Willingness to bet (i.e. the maximum ratio of potential gains to potential losses that one is willing to accept) should be a real-valued function.
    • More on this below, when we get to two-valued representations of uncertainty.

We can have expected value theory without Bayesian epistemology: we can see maximizing expected value as a feature of good decisions, without being committed to the claim that the probabilistic weights involved are our beliefs. (Admittedly, this makes "expected value" not a great name.) So refuting the psychological aspect of Bayesian epistemology doesn't refute expected value theory, which longtermism (as I understand it) does depend on.

 

2. None of the proposed alternatives to expected value maximization fill its important role: providing a criterion of rightness for decision making under uncertainty.

Maybe I should be more clear about what kind of alternative I'm looking for. Apologies for any past ambiguous/misleading communication from my end about this--my thinking has changed and gotten more precise. 

A distinction that seems very useful here is the distinction between criteria of rightness and decision procedures. In short, perhaps as a refresher:

  • A criterion of rightness is a standard that an action/decision must meet to be good.
  • A decision procedure is an algorithm (perhaps a fuzzily defined one) for making decisions.

Why are criteria of rightness useful? Because they are grounds from which we can evaluate (criticize!) decision procedures, and thus figure out which decision procedures are useful in what circumstances. 

A useful analogy might be to mountain-climbing (not that I know anything about mountains). A good criterion for success might be the altitude you've reached, but that doesn't mean that "seek higher altitudes" is a very useful climbing procedure. Constantly thinking about altitude (I'm guessing) would be distracting at best. Does that mean the climber should forget about altitude? No! Keeping the criterion of altitude in mind--occasionally glancing to the top of the mountain--would be very useful for choosing good climbing tactics, even (especially) if you're not thinking about it all the time.

I'm bringing this up because I think this is one way in which we've been talking past each other:

  • I claim that expected value maximization is correct and useful as a criterion of rightness. When you suggest rejecting it, that leaves me looking for alternative criteria of rightness--looking for alternative answers to the question "what makes a decision right/good, if it's made under uncertainty?"
  • You've been pointing out that expected value maximization is terrible as a decision procedure, and you've been proposing alternative decision procedures.

As far as I can tell, this post proposes alternate epistemologies and decision procedures, but it doesn't propose an alternative criterion of rightness. So the important role of a criterion of rightness remains unfilled, leaving us with no principled grounds from which to criticize decisions made under uncertainty or potential procedures for making such decisions.

 

Loose ends:

 

Loose end: problems vs paradoxes

Hence, paradoxes lurking outside bayesian epistemology are the reason one can never leave it, but paradoxes lurking inside are exciting research opportunities.

Nice, this one made me laugh

 

Loose end: paradoxes

Other paradoxes within bayesian epistemology include the Necktie paradox, the St. Petersburg paradox, Newcomb’s paradox, Ellsberg Paradox, Pascal’s Mugging, Bertrand’s paradox, The Mere addition paradox (aka “The Repugnant Conclusion”), The Allais Paradox, The Boy or Girl Paradox, The Paradox Of The Absent Minded Driver (aka the “Sleeping Beauty problem”), and Siegel’s paradox.

I'd argue things aren’t that bad.

  • At least the St. Petersburg paradox, Newcomb’s problem, the “repugnant” conclusion, the Boy or Girl Paradox, and the Sleeping Beauty problem arguably have neat solutions:
  • The Ellsberg and Allais “paradoxes” refute the claim that people are in fact perfect expected value maximizers, but they don’t refute a different claim: that people should be--and that we often roughly are--expected value maximizers (while being subject to cognitive biases like ambiguity aversion)

Also, to make sure we’re on the same page about this - many of these paradoxes (e.g. the Pasadena game) seem to be paradoxes with how probabilities are used rather than with how they’re assigned. That doesn’t invalidate your point, although maybe it makes the argument as a whole fit together less cleanly (since here you’re criticizing “Bayesian decision making,” if you will, while later you focus on criticizing Bayesian epistemology).

 

Loose end: supposed alternative decision theories

Despite [expected value theory’s] many imperfections, what explicit alternative is there?

Here are some alternatives.

Unless I'm missing something, none of these seem to be alternative decision theories, in the sense discussed above. To elaborate:

A two-valued representation of uncertainty, like the Dempster-Shafer theory, lets one express uncertainty about both A and -A

I have several doubts about this approach:

  • It’s not an alternative decision theory
  • It doesn’t seem to resolve e.g. problems with positive credences in infinite values
  • To the extent that the sum of A and -A doesn’t equal one, Dutch book arguments still apply.
    • You argue compellingly that one can just drop the credence assumption to avoid Dutch book arguments in the context of beliefs, but--as I’ve tried to argue--it’s harder (and more important?) to avoid Dutch books in the context of decisions/bets.
      • We don’t need to keep theorizing here. To resolve this, please tell me the odds at which you’re willing to buy/sell bets that this will happen, and the odds at which you’re willing to buy/sell bets that it won’t happen. Then we (and your wallet) get to find out if the supposed laws of rationality depend on our assumptions :)
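For concreteness, here's a minimal sketch (in Python, with made-up prices) of the exploitation I have in mind: if your buy/sell prices for bets on "X" and on "not-X" don't come from a single real-valued probability, the other side can lock in a guaranteed profit.

```python
# Toy Dutch book: all prices here are hypothetical. A "bet on X" pays $1 if
# X happens. If someone's prices for the X-bet and the not-X-bet sum to more
# than $1, a bookie can sell them both and profit no matter what happens.

def bookie_guaranteed_profit(price_x: float, price_not_x: float) -> float:
    """Bookie's profit from selling both $1-payoff bets at these prices.

    The bettor pays (price_x + price_not_x) up front, and exactly one of the
    two bets pays out $1 -- so the bookie's net is the same in every outcome.
    """
    return price_x + price_not_x - 1.0

print(round(bookie_guaranteed_profit(0.60, 0.50), 2))  # 0.1: incoherent prices, sure loss
print(round(bookie_guaranteed_profit(0.25, 0.75), 2))  # 0.0: coherent prices, no free money
```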

Alternative logics one might use to construct a belief theory include fuzzy logics, three-valued logic, and quantum logic, although none of these have been developed into full-fledged belief theories. 

Fascinating stuff, but many steps away from an alternative approach to decision making.

Making Do Without Expectations: Some people have tried to rehabilitate expected utility theory from the plague of expectation gaps.

You make a compelling case that these won’t help much.

 

Loose end: tools from critical rationalism

For important decisions, beyond simple ones made while playing games of chance, I use tools I’ve learned from critical rationalism, which are things like talking to people, brainstorming, and getting advice.

+1 for these as really useful things that expected value theory under-emphasizes (which is fine because EV is a general criterion of rightness, not a general decision procedure).

 

Loose end: authoritarian supercomputers

[...] So the question is: Would you always do what you’re told?

Do you really buy this argument? A pretty similar argument could get people riled up against the vile tyranny of calculators.

Apparent problems with this thought experiment:

  • It trivializes what’s at stake in our decision making--my feelings of autonomy and self-actualization are far less important than the people I could help with better decision making.
    • See this for a similar sentiment. (I’m not endorsing its arguments.)
  • I suspect much of my intuitive repulsion to this thought experiment comes from its (by design?) similarity to irrelevant real-world analogues, e.g. cult leaders who aren’t exactly ideal decision makers.
  • Just because critical reasoning is being practiced by some reasoner other than me (by the supercomputer, in this hypothetical) doesn’t mean it’s not being practiced.

 

Loose end: evolutionary decision making

Yes, it’s completely true that decisions made according to this framework are made up out of thin air (so it is with all theories) - we can view this as roughly analogous to the mutation stage in Darwinian evolution. Then, once generated, we subject the ideas to as much criticism from ourselves and others as possible - we can view this as roughly analogous to the selection stage.

As I’ve tried to argue, my point is that, without EV maximization, we lack plausible grounds for criticizing decisions amidst uncertainty. Criticism of decisions made under uncertainty must logically begin with premises about what kinds of decisions under uncertainty are good ones, and you’ve rejected popular premises of this sort without offering tenable alternatives.

Comment by Mauricio on Notes on EA-related research, writing, testing fit, learning, and the Forum · 2021-03-28T00:41:42.502Z · EA · GW

Thanks, Michael!

Another opportunity that just came out is the Stanford Existential Risks Initiative's summer research program - people can see info and apply here. This summer, we're collaborating with researchers at FHI, and all are welcome to apply.

Comment by Mauricio on Books on authoritarianism, Russia, China, NK, democratic backsliding, etc.? · 2021-03-16T06:23:30.566Z · EA · GW

Also, here's a reading list on democratic backsliding, recently posted as a comment by Haydn Belfield.

Comment by Mauricio on What is the argument against a Thanos-ing all humanity to save the lives of other sentient beings? · 2021-03-08T00:59:53.681Z · EA · GW

Some additional arguments:

  • This one, arguing that humanity's long-term future will be good
  • These, arguing that we should be nice/cooperative with others' value systems
  • In practice, violence is often extremely counterproductive; it frequently fails and then brings about massive cultural/political/military backlash against whatever its perpetrator stood for.
    • Examples of violence that appear to have been counterproductive:
    • (Of course, violence doesn't always fail, but it seems to backfire often enough that people should be very wary of it even if they have no other moral qualms with it. And violence seems especially likely to fail and backfire when it's aimed at "Thanos-ing all humanity," since humans are pretty resilient.)
Comment by Mauricio on How to run a high-energy reading group · 2021-03-06T06:14:33.218Z · EA · GW

Thanks for this! I especially appreciated the recommendations for doing 2-person reading groups, and for having presentations include criticisms.

On top of your recommendations, here are a few additional ideas that have worked well for reading groups I've participated in/helped organize, in case others find them useful. (Credit to Stanford's The Precipice and AI Safety reading groups for a bunch of these!)

  • Break up a large group into small groups of ~3-4 people for discussion
    • This avoids large-group discussions, which are often bad (especially over Zoom).
  • Have readings be copied into Google Docs, with a few bolded lines at the top encouraging people to add a few comments in the doc.
    • This prompts people to generate thoughts on the material, and it adds a few interesting ideas to the reading.
  • Have participants vote on which questions to discuss: digitally share a Google Doc with potential discussion questions, then give people ~5 minutes to write "+1" next to all questions they'd like to discuss.
    • Small groups can do this to decide what to have as the focus point of their conversation.
    • Alternatively, organizers can use this to break a large group into small groups based on people's interests, like this (adapted for Zoom times):
      • The organizer encourages people to add & vote on questions for ~5 min.
      • The organizer identifies the most popular questions--enough of them that each small group could discuss a different one if they wanted to.
      • The organizer communicates to the group which questions were most popular, and labels each of these questions with a number.
      • The organizer encourages people who are especially interested in some of the questions to indicate this (e.g. by messaging a number to the Zoom chat).
      • The organizer creates groups of 3-4 people, trying to put together people who indicated interest in the same question (a toy sketch of this grouping step follows the list).
  • When generating discussion questions, lean away from very vague or big-picture questions.
    • Very specific questions (which might be sub-questions of big-picture questions) seem to lead to much more fruitful discussion.
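Here's the toy sketch of that grouping step, purely illustrative: it assumes each participant has messaged exactly one question number, and it doesn't handle merging leftover small groups.

```python
# Hypothetical input: each participant messaged the number of the question
# they most want to discuss. Group people who picked the same question.
from itertools import islice

def make_groups(interests: dict[str, int], size: int = 4) -> list[list[str]]:
    by_question: dict[int, list[str]] = {}
    for person, question in interests.items():
        by_question.setdefault(question, []).append(person)
    groups = []
    for members in by_question.values():
        it = iter(members)
        while chunk := list(islice(it, size)):  # chunks of up to `size` people
            groups.append(chunk)
    return groups  # leftover chunks may be smaller than `size`

print(make_groups({"Ana": 1, "Bo": 2, "Cal": 1, "Di": 1, "Ed": 2}, size=3))
# [['Ana', 'Cal', 'Di'], ['Bo', 'Ed']]
```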
Comment by Mauricio on What Helped the Voiceless? Historical Case Studies · 2021-02-08T05:38:00.458Z · EA · GW

I see, thanks for clarifying these points.

I think this mostly (although not quite 100%) addresses the two concerns that you raise

Could you expand on this? I see how policy makers could realize very long-term value with ~30 yr planning horizons (through continual adaptation), but it's not very clear to me why they would do so, if they're mainly thinking about the long-term interests of future [edit: current] generations. For example, my intuition is that risks of totalitarianism or very long-term investment opportunities can be decently addressed with decades-long time horizons for making plans (for the reasons you give), but will only be prioritized if policy makers use even longer time horizons for evaluating plans. Am I missing something?

Comment by Mauricio on AMA: We Work in Operations at EA-aligned organizations. Ask Us Anything. · 2021-02-04T21:11:36.300Z · EA · GW

[2/2]

I'm also curious:

  • What makes collaborations with other kinds of organizations (non-EA orgs) successful at building connections/mutual support between orgs?
  • Other operations-related things you think might be useful for EA group organizers

Thanks!

Comment by Mauricio on AMA: We Work in Operations at EA-aligned organizations. Ask Us Anything. · 2021-02-04T21:10:46.427Z · EA · GW

[1/2]

Thanks for doing this! Do you have any advice for EA group organizers (especially university groups), based on your experience with operations at other kinds of organizations? Areas I'm curious about include:

  • How can EA groups grow their teams and activities while maintaining good team coordination and management?
  • What relatively low-cost things can leadership do, if any, that go far in improving new team members' (especially volunteers') morale/engagement/commitment/initiative?
  • How can experienced EA groups best provide organizational support for new/small ones?
Comment by Mauricio on Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement · 2021-02-03T20:05:29.595Z · EA · GW

Thanks, Thomas!

These generally seem like very relevant criteria, so I'm definitely surprised by the results.

The only criterion I can think of that might have contributed to lower predictiveness of engagement is "experience"--I'd guess there might have been people who had been very into EA since before the fellowship, and that this made them both score poorly on this metric and get very involved with the group later on. I wonder what the correlations would look like after controlling for experience (although they're probably not that different, since it was only one of seven criteria).
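(If anyone wants to check this, here's a rough sketch of the computation I mean, with entirely hypothetical data; the residual-based method below is one standard way to get a partial correlation.)

```python
# Partial correlation of application score with engagement, controlling for
# the "experience" criterion. All data below are made up for illustration.
import numpy as np

def partial_corr(x, y, control):
    """Correlate x and y after linearly regressing `control` out of both."""
    def residuals(v, c):
        design = np.column_stack([np.ones_like(c), c])  # intercept + control
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    x, y, control = (np.asarray(a, dtype=float) for a in (x, y, control))
    return np.corrcoef(residuals(x, control), residuals(y, control))[0, 1]

score      = [7, 5, 8, 4, 6, 9]   # hypothetical total application scores
engagement = [2, 4, 3, 5, 3, 2]   # hypothetical eventual engagement
experience = [6, 2, 7, 1, 4, 8]   # hypothetical "experience" sub-scores
print(partial_corr(score, engagement, experience))
```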

I'm also curious: I'd guess that familiarity with EA-related books/blogs/cause areas/philosophies is a strong (positive) predictor of receptiveness to EA. Do you think this mostly factored into the scores as a negative contributor to the experience score, or was it also a big consideration for some of the other scores?

Comment by Mauricio on Books on authoritarianism, Russia, China, NK, democratic backsliding, etc.? · 2021-02-03T01:07:17.061Z · EA · GW

Thanks! Also interested in this.

This syllabus from a class on authoritarian politics might be useful. I'm still going through it, but I found these parts especially interesting (some are papers rather than books, but hopefully close enough):

  • "What Do We Know About Democratization After Twenty Years?" (Geddes, 1999)
    • Discusses the relative longevity of different kinds of authoritarian regimes
  • "Civil Society and the Collapse of the Weimar Republic" (Berman, 1997)
    • On how the Nazi Party used civic associations to expand its power in the Weimar Republic
  • Parts of Totalitarian and Authoritarian Regimes (Linz, 1975), especially from ch. 2:
    • Pp. 65-71 on definitions of totalitarianism
    • Pp. 129-136 on criticisms of the concept of totalitarianism
    • P. 137 has a list of earlier scholarly work on democratic backsliding (pretty old though)
  • Development as Freedom (Sen, 1999), especially pp. 178-88
    • On the fact that “There has never been a famine in a functioning multiparty democracy”

Also:

  • Economic Origins of Dictatorship and Democracy (Acemoglu and Robinson, 2005)
    • Historical case studies and model of transitions to (and from) authoritarianism
    • I really liked ch. 2 as an overview
Comment by Mauricio on Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement · 2021-02-02T23:49:33.025Z · EA · GW

Thanks for posting this! If you're comfortable sharing this, could you say more about how you ranked applicants in the first place?

Comment by Mauricio on What Helped the Voiceless? Historical Case Studies · 2021-01-25T01:53:48.075Z · EA · GW

(Continuing my other comment)

bidding time does not look like doing nothing.  I think it is useful for advocates to be working on other aligned goals during this time in order to continue to build traction and connections

Yes! 

it  would be so practically difficult to pull of getting those future commitments to be binding or to happen at all that I am not sure it is that useful a tactic. Also [...] a few years (2-5) years might be the limit here

This mostly makes sense to me. Thinking back on the examples I raised to argue for this tactic, I might have over-estimated their strength as examples for the feasibility of such trades: 

  • Abolition often took decades to be fully implemented, but this was more often through legislation that established gradual change than through legislation that established sharp deadlines in the distant future
  • Distant (and not-so-distant) climate targets haven't been extremely successful

I would break down representation of future generations into a few deferent topics [...] Each of these topics could be championed on the world stage by a different nation.

Interesting--this idea is also new to me and seems right.

---

I'm also curious: how feasible/desirable do you consider one of the other suggestions I made? This one:

Any Future Generations institution should be explicitly mandated to consider long-term prosperity, in addition to existential risks arising from technological development and environmental sustainability [...] advocates of future generations can lastingly diminish the opposition of business interests—or turn it into support—by designing pro-future institutions so that they visibly contribute to areas where future generations and far-sighted businesses have common interests, such as long-term trends in infrastructure, research and development, education, and political/economic stability.

Comment by Mauricio on What Helped the Voiceless? Historical Case Studies · 2021-01-25T01:53:11.272Z · EA · GW

Thanks so much for your comment! It's great to hear your perspective, and I'm glad my post could be a helpful resource.

30 year + planning is impractically difficult in most cases [...] So, if political systems were making the best decisions over a 30+ year time horizon [...] I think this would cover roughly 95%+ of the policies that a strong longtermist would want to see in the world that are not already happening.

Very interesting, I hadn't thought much about this. One hesitation I have is that, as I understand it, strong longtermism can be acted on only when the very long-term effects of our actions are somewhat predictable. So I'm confused by the premise that 30+ year effects aren't predictable--don't we want longtermist policy making precisely when that premise doesn't hold?

Maybe I've misunderstood, and what you're saying is the following?

  • We can sometimes predict the 30+ year effects of an action, but only when these effects are so obvious that they're similar to the <30 year effects.
    • E.g. The things that make brutal totalitarianism terrible for the very long-term future also make it terrible for the next few decades
  • Therefore there's a lot of overlap between the policies that (as best as we can tell) have the best effects on the world in 30+ years, and those that have the best effects on the world in 0-30 years.
  • Therefore we can achieve most longtermist policy goals by getting policies that are best for the world over the next 0-30 years.

This seems mostly right to me, although I'd still be worried that shifting the focus entirely to current generations would have some important limitations:

  • Advocates of the long-term interests of current generations might not prioritize the most important very long-term issues
    • E.g. a 1% risk of extinction in a century might be really bad if you're thinking about future generations, but not so important (potentially: not worth the political capital, or the costs of mitigation) if you're thinking about current generations (a toy illustration follows this list)
  • Advocates of the long-term interests of current generations would pay no attention to benefits that take >30 years to realize, which might be very important
    • Future governments might have the option to fund long-term projects that wouldn't pay off for several decades (e.g. sending people to Mars or more distant planets?), and much of the potential future value of humanity might depend on governments' abilities to make such long-term investments
    • On the other hand, if improvements in nutrition and medicine continue to extend people's lifespans, then the time horizon of serving current generations' long-term interests might expand a lot by default
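To put (very) made-up numbers on the first limitation--this is the toy illustration referenced above, not a real estimate:

```python
# Why a 1% extinction risk looks very different depending on whose interests
# count. Both population figures are toy numbers, not estimates.
extinction_risk    = 0.01   # 1% chance this century
current_population = 8e9    # people alive today
future_people      = 1e14   # stand-in for potential future people

print(extinction_risk * current_population)  # 80 million expected deaths:
                                             # bad, but competes with other
                                             # near-term priorities
print(extinction_risk * future_people)       # 1 trillion expected lives at
                                             # stake: dominant on a
                                             # longtermist view
```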

That said, any political change that's feasible in the near term will have significant limitations, and I'm generally pretty optimistic about your strategic proposal of focusing more on long-term benefits to current generations. This seems like a really big takeaway that I totally missed in my initial writeup, so I'm adding a link to your comment on the main post.

Comment by Mauricio on The Folly of "EAs Should" · 2021-01-08T10:50:34.424Z · EA · GW

Hi, thanks for your comment!

Good points--many cultural establishments are valuable in ways that calculations of lives saved miss, and the situation you describe would be worse if people didn't donate to museums. I'm still worried, though: if we don't (for example) donate to the purchase of bednets that protect people from malaria, then more kids will die of preventable diseases, which would also make the situation worse. So I'm not sure I understand where you're coming from here--it seems to me that any good cause we don't donate to will be worse off for it, so noticing this about some cause won't go far in helping us find the best opportunities to help others.

Comment by Mauricio on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-28T09:35:57.974Z · EA · GW

Hey Vaden, thanks!

these two things become intertwined when a philosophy  makes people decide to stop creating knowledge

Yeah, fair. (Although less relevant to less naive applications of longtermism, which as Ben puts it draw some rather than all of our attention away from knowledge creation.)

Both approaches pass on the buck

I'm not sure I see where you're coming from here. EV does pass the buck on plenty of things (on how to generate options, utilities, probabilities), but as I put it, I thought it directly answered the question (rather than passing the buck) about what kinds of bets to make/how to act under uncertainty:

we should be willing to bet on X happening in proportion to our best guess about the strength of the evidence for the claim that X will happen.

Also, regarding this:

And one doesn't  necessarily need to answer your question, because  there's no requirement that  the criticism take EV form

I don't see how that gets you out of facing the question. If criticism uses premises about how we should act under uncertainty (which it must do, to have bearing on our choices), then a discussion will remain badly unfinished until it's scrutinized those premises. We could scrutinize them on a case-by-case basis, but that's wasting time if some kinds of premises can be refuted in general.

Comment by Mauricio on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-28T07:15:26.383Z · EA · GW

2. 

Another worry is that probabilities are so useful that we won't find a better alternative.

I think of probabilities as language for answering the earlier basic question of "What bets should I make?" For example, "There's a 25% chance (i.e. 1:3 odds) that X will happen" is (as I see it) shorthand for "My potential payoff better be at least 3 times bigger than my potential loss for betting on X to be worth it." So probabilities express thresholds in your answers to the question "What bets on event X should I take?" That is, from a pragmatic angle, subjective probabilities aren't supposed to be deep truths about the world; they're expressions of our best guesses about how willing we should be to bet on various events. (Other considerations also make probabilities particularly well-fitting tools for describing our preferences about bets.)

So rejecting the use of probabilities (as I understand them) under severe uncertainty seems to lead to an unacceptable, maybe even absurd, conclusion: the rejection of consistent thresholds for deciding whether to bet on uncertain events. This is a mistake--if we accept/reject bets on some event without a consistent threshold for what reward:loss ratios are worth taking, then we'll necessarily be doing silly things like refusing to take a bet, and then accepting a bet on the same event at a less favorable reward:loss ratio.
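To spell out the arithmetic of the last two paragraphs, here's a minimal sketch (toy numbers):

```python
# A credence of p in X means only accepting bets on X whose payoff:loss
# ratio is at least (1 - p) / p -- the break-even point where
# p * payoff = (1 - p) * loss. Numbers below are illustrative.

def min_payoff_ratio(p: float) -> float:
    return (1 - p) / p

print(min_payoff_ratio(0.25))  # 3.0: a 25% chance means demanding 3:1 payoff

def should_take(p: float, payoff: float, loss: float) -> bool:
    return payoff / loss >= min_payoff_ratio(p)

# Without one consistent threshold, you can refuse a bet and then accept a
# strictly worse bet on the same event -- the mistake described above:
print(should_take(0.25, 4.0, 1.0))  # True: 4:1 clears the 3:1 threshold
print(should_take(0.25, 2.0, 1.0))  # False: accepting this after refusing
                                    # the 4:1 bet would be inconsistent
```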

You might be thinking something like "ok, so you can always describe an action as endorsing some betting threshold, but that doesn't mean it's useful to think about this explicitly." I'd disagree, because not recognizing our betting threshold makes it harder to notice and avoid mistakes like the one above. It also takes away clarity and precision of thought that's helpful for criticizing our choices, e.g. it makes an extremely high betting threshold about the value of x-risk reduction look like agnosticism.

Thanks again for your thoughtful post!

Comment by Mauricio on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-28T06:49:02.635Z · EA · GW

Hey Ben, thanks a lot for posting this! And props for having the energy to respond to all these comments :)

I'll try to reframe points that others have made in the comments (and which I tried to make earlier, but less well): I suspect that part of why these conversations sometimes feel like we're talking past one another is that we're focusing on different things.

You and Vaden seem focused on creating knowledge. You (I'd say) correctly note that, as frameworks for creating knowledge, EV maximization and Bayesian epistemology aren't just useless--they're actively harmful, because they distract us from the empirical studies, data analysis, feedback loops, and argumentative criticism that actually create knowledge. 

Some others are focused on making decisions. From this angle,  EV maximization and Bayesian epistemology aren't supposed to be frameworks for creating knowledge--they're frameworks for turning knowledge into decisions, and your arguments don't seem to be enough for refuting them as such.

To back up a bit, I think probabilities aren't fundamental to decision making. But bets are. Every decision we make is effectively taking or refusing to take a bet (e.g. going outside is betting that I won't be hit in the head by a meteor if I go outside). So it's pretty useful to have a good answer to the question: "What bets should I take?"

In this context, your post isn't convincing me because I don't have a good alternative to my current answer to that question (roughly, "take bets that maximize EV"), and because I think that in an important way there can't be a good alternative.

1.

One of the questions your post leaves me with is: What kinds of bets do you think I should take, when I'm uncertain about what will happen? i.e. How do you think I should make decisions under uncertainty?

Maximizing EV under a Bayesian framework offers one answer, as you know, roughly that: we should be willing to bet on X happening in proportion to our best guess about the strength of the evidence for the claim that X will happen.

I think you're right in pointing out that this approach has significant weaknesses: it has counterintuitive results when used with some very low probabilities, it's very sensitive to arbitrary judgements and bias, and our best guesses about whether far-future events will happen might be totally uncorrelated with whether they actually happen. (I'm not as compelled by some of your other criticisms, largely for reasons others' comments discuss.)

Despite these downsides, it seems like a bad idea to drop my current best guess about "what kinds of bets should I take?" until I see a better answer. (Vaden offers a promising approach to making decisions, but it just passes the buck on this--we'll still need an answer to my question when we get to his step 2.) As your familiarity with catastrophic dictatorships suggests, dumping a flawed status quo is a mistake if we don't have a better alternative.

Comment by Mauricio on A case against strong longtermism · 2020-12-27T00:10:14.659Z · EA · GW

Thanks!

Offered a bet that pays $X if I pick a color and then see if a random ball matches that color, you'll pay more

I'm not sure I follow. If I were to take this bet, it seems that the prior according to which my utility would be lowest is one where the color picked gives me a 0% chance of winning. So if I'm ambiguity averse in this way, wouldn't I think this bet is worthless?

(The second point you bring up would make sense to me if this first point did, although then I'd also be confused about the papers' emphasis on commitment.)

Comment by Mauricio on A case against strong longtermism · 2020-12-26T19:22:42.569Z · EA · GW

Hi Zach, thanks for this!

I have two doubts about the Al-Najjar and Weinstein paper--I'd be curious to hear your (or others') thoughts on these.

First, I'm having trouble seeing where the information aversion comes in. A simpler example than the one used in the paper seems to be enough to communicate what I'm confused about: let's say an urn has 100 balls that are each red or yellow, and you don't know their distribution. Someone averse to ambiguity would (I think) be willing to pay up to $1 for a bet that pays off $1 if a randomly selected ball is red or yellow. But if they're offered that bet as two separate decisions (first betting on a ball being red, and then betting on the same ball being yellow), then they'd be willing to pay less than $0.50 for each bet. So it looks like preference inconsistency comes from the choice being spread out over time, rather than from information (which would mean there's no incentive to avoid information). What am I missing here?

(Maybe the following is how the authors were thinking about this? If you (as a hypothetical ambiguity-averse person) know that you'll get a chance to take both bets separately, then you'll take them both as long as you're not immediately informed of the outcome of the first bet, because you evaluate acts, not by their own uncertainty, but by the uncertainty of your sequence of acts as a whole (considering all acts whose outcomes you remain unaware of). This seems like an odd interpretation, so I don't think this is it.)

[edit: I now think the previous paragraph's interpretation was correct, because otherwise agents would have no way to make ambiguity averse choices that are spread out over time and consistent, in situations like the ones presented in the paper. The 'oddness' of the interpretation seems to reflect the oddness of ambiguity aversion: rather than only paying attention to what might happen differently if you choose one action or another, ambiguity aversion involves paying attention to possible outcomes that will not be affected by your action, since they might influence the uncertainty of your action.]
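(To make the urn numbers concrete, here's a minimal sketch under maxmin expected utility--one standard model of ambiguity aversion. The set of compositions the agent entertains, 20-80 red balls, is my assumption for illustration, not the paper's.)

```python
# Urn with 100 balls, each red or yellow, composition unknown. A maxmin
# agent values a bet at its worst-case expected payoff over the
# compositions they entertain (here, 20 to 80 red balls -- an assumption).
entertained_p_red = [r / 100 for r in range(20, 81)]

def maxmin_value(payoff_red: float, payoff_yellow: float) -> float:
    return min(p * payoff_red + (1 - p) * payoff_yellow
               for p in entertained_p_red)

print(maxmin_value(1, 1))  # 1.0: "red or yellow" pays for sure
print(maxmin_value(1, 0))  # 0.2: bet on red alone, priced at its worst case
print(maxmin_value(0, 1))  # 0.2: bet on yellow alone, likewise
# 0.2 + 0.2 < 1.0: offered separately, the two bets are together valued at
# less than the combined sure-thing bet -- the preference pattern above.
```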

Second, assuming that ambiguity aversion does lead to information aversion, what do you think of the response that "this phenomenon simply reflects a [rational] trade-off between the intrinsic value of information, which is positive even in the presence of ambiguity, and the value of commitment"?

Comment by Mauricio on A case against strong longtermism · 2020-12-20T20:51:15.299Z · EA · GW

Hi Vaden, thanks again for posting this! Great to see this discussion. I wanted to get further along C&R before replying, but:

no laws of physics are being violated with the scenario "someone shouts the natural number i".  This is why this establishes a one-to-one correspondence between the set of future possibilities and the natural numbers

If we're assuming that time is finite and quantized, then wouldn't these assumptions (or, alternatively, finite time + the speed of light) imply a finite upper bound on how many syllables someone can shout before the end of the universe (and therefore a finite upper bound on the size of the set of shoutable numbers)? I thought Isaac was making this point; not that it's physically impossible to shout all natural numbers sequentially, but that it's physically impossible to shout any of the natural numbers (except for a finite subset).

(Although this may not be crucial, since I think you can still validly make the point that Bayesians don't have the option of, say, totally ruling out faster-than-light number-pronunciation as absurd.)
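(Rough arithmetic for the bound I have in mind, with made-up physical numbers:)

```python
# Finite remaining time plus a minimum pronunciation time per syllable caps
# how many digits anyone could shout. Both constants are toy assumptions.
seconds_remaining    = 1e100  # stand-in for the time left to the universe
seconds_per_syllable = 0.1    # stand-in lower bound per spoken digit

max_digits = seconds_remaining / seconds_per_syllable
print("shoutable numbers are bounded above by 10 **", max_digits)
# 1e+101 digits: an unimaginably large but finite bound -- so all but
# finitely many natural numbers are physically unshoutable.
```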

Note also that EV style reasoning is only really popular in this community. No other community of researchers reasons in this way, and they're able to make decisions just fine.

Are they? I had the impression that most communities of researchers are more interested in finding interesting truths than in making decisions, while most communities of decision makers severely neglect large-scale problems (e.g. pre-2020 pandemic preparedness, farmed animal welfare). (Maybe there's better ways to account for scope than EV, but I'd hesitate to look for them in conventional decision making.)

Comment by Mauricio on [Short Version] What Helped the Voiceless? Historical Case Studies · 2020-12-18T07:42:52.618Z · EA · GW

Thanks for your comment!

hope I wasn't too annoying!

Nah :)

Are you counting cases where there are intra-elite battles for power [...] Not sure how broad "strategic alliances" are referring to.

What I have in mind is: cases when elite group A included group B, because group A thought that group B would use its new influence in ways beneficial for group A. I wouldn't count the example you mention, because then the benefit seems to come from the exploiters being weakened (not being able to charge such low prices), rather than from the new influence of the formerly excluded.

(I'm trying to distinguish between inclusion that comes from the influence of the excluded, and inclusion that doesn't, because only the latter could help groups like future generations.)

The dynamic you bring up does seem important. I'd currently put it in the miscellaneous bucket of "costs of inclusion" (as a negative cost--a benefit for elites). I wonder if there's some better way to think about it?

Comment by Mauricio on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-20T00:44:38.936Z · EA · GW

From Larissa MacFarquhar's Strangers Drowning:

"What do-gooders lack is not happiness but innocence. They lack that happy blindness that allows most people, most of the time, to shut their minds to what is unbearable. Do-gooders have forced themselves to know, and keep on knowing, that everything they do affects other people, and that sometimes (though not always) their joy is purchased with other people’s joy. And, remembering that, they open themselves to a sense of unlimited, crushing responsibility.”

"This is the difference between do-gooders and ordinary people: for do-gooders, it is always wartime. They always feel themselves responsible for strangers — they always feel that strangers, like compatriots in war, are their own people. They know that there are always those as urgently in need as the victims of battle, and they consider themselves conscripted by duty.”

“Do-gooders learn to codify their horror into a routine and a set of habits they can live with. They know they must do this in order to stay sane. But this partial blindness is chosen and forced and never quite convincing.”

Comment by Mauricio on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-20T00:42:36.898Z · EA · GW

"Have you ever experienced a moment of bliss? On the rapids of inspiration maybe, your mind tracing the shapes of truth and beauty? Or in the pulsing ecstasy of love? Or in a glorious triumph achieved with true friends? Or in a conversation on a vine-overhung terrace one star-appointed night? Or perhaps a melody smuggled itself into your heart, charming it and setting it alight with kaleidoscopic emotions? Or when you prayed, and felt heard?  

... you may have discovered inside it a certain idle but sincere thought: 'Heaven, yes! I didn’t realize it could be like this. This is so right, on whole different level of right; so real, on a whole different level of real. Why can’t it be like this always? Before I was sleeping; now I am awake.'  

...

Quick, stop that door from closing! Shove your foot in so it does not slam shut.

And let the faint draught of the beyond continue to whisper... the tender words of what could be!"  

- Nick Bostrom, Letter from Utopia

Comment by Mauricio on What Helped the Voiceless? Historical Case Studies · 2020-10-21T11:50:45.714Z · EA · GW

[edited for relative brevity]

Thanks a lot for your thoughtful critique!

I wasn’t quite sure how this followed from the historical evidence that you examine, but I thought it was a cool argument... If we care about, say, maximising the chances that factory farming ends... then we might be able to effectively trade immediacy for increased radicalism (or durability...).

I'd argue that the historical evidence I looked at provides some support for this, although it's not very decisive. Abolitionists sometimes (e.g. in New England colonies) succeeded in passing bills that would abolish slavery after a long time, e.g. bills that didn't free any slaves but did ban the enslavement of slaves' future children. That said, I tentatively buy the argument mostly on theoretical grounds.

————

I'd summarize your main concern in the following way--please let me know if I've misunderstood (edit: removed block quote format; didn't mean to imply this was a quote):

The report looks at different kinds of case studies: ally-based movements, self-advocacy movements, and movements that accidentally benefited excluded groups. However, for people interested in assessing the prospects of today's ally-based movements, case studies of ally-based movements are much more relevant than case studies of other kinds of movements. Democratization was not an ally-based movement, while genetic engineering governance and environmentalism were not movements of people who intended to benefit future generations. So those case studies say little about how successful ally-based movements tend to be.

I mostly agree with this. However, it's not clear to me how

this critique of the methodology... directly bears on one of the main arguments you advance in this research: "inclusive values" were not that important in driving change, which suggests that further MCE is not as likely as a simple extrapolation from the trend towards expanded moral circles in the past few centuries might imply.

If you're optimistic about today's ally-based movements because of historical successes of ally-based movements, then I agree that the argument I make shouldn't diminish your optimism by much. Such optimism seems like legitimate, relatively fine-grained extrapolation (especially if these historical successes happened in the face of major, economically motivated opposition). 

The kind of extrapolation I'm arguing against is (as you suggest) simpler extrapolation: assuming that policy change which has greatly benefited excluded groups has generally happened in ways that are very relevant for the future of totally voiceless groups. 

Your focus on ally-based movements makes me think that you weren't practicing this simple extrapolation. Still, before this research, I think I was doing that, and it seems that such reasoning is fairly common in (and out of) EA circles. 

Selecting case studies with the broad criteria of "global policy shifts that greatly benefited excluded groups" seems to make a lot of sense for this particular goal: figuring out how legitimate it is to simply extrapolate from such policy shifts. This also seems to make more sense given my focus on outcomes, than it would if I were focused on movements.

As a last point, one other thing we agree on seems to be that developments like democratization largely weren't ally-based movements. We might ask: why weren't they? The fact that they weren't--that it usually took revolutionary threats to bring about democracy--seems to be an argument against expecting much from human empathy and ethical reasoning when lots of money is at stake. In other words, ally-based movements' relative absence from several of these case studies tells us something important about ally-based movements: apathy and limited civil liberties have often kept them from even emerging. (On the other hand, maybe the presence of large ally-based movements for e.g. farmed animals suggests that we're in a very different situation.)

Curious to hear your thoughts! I'd also love to hear other constructive feedback/advice for doing better historical work in the future, if you have any off the top of your head.

Comment by Mauricio on What Helped the Voiceless? Historical Case Studies · 2020-10-17T05:49:14.581Z · EA · GW

(Following up on my other reply)

Any [. . .] hot takes on the quality of current EA epistemics?

It seems that many EAs have adopted Singer's expanding circle narrative for thinking about important questions, without much scrutiny, even though Singer's historical narrative is arguably highly incomplete (and his relevant book wasn't trying to make a thorough historical argument). This suggests that we're not giving enough scrutiny to other arguments from high-profile EAs, and that we pay too much attention to academic work that happens to come from EAs (even when it's about questions like "why have many societies become more inclusive?"--questions that aren't just of interest to EA-sympathetic researchers).

do you have thoughts on how much predictive (postdictive?) power your framework has on other randomly generated case studies?
Relatedly, do you think it's likely that you will change your mind a lot if you read five more analogous case studies in a similar level of detail? What probability will you assign to reversing one of the core conclusions were you to do so?

Thoughts:

  • This framework seems (retroactively) predictively powerful for abolition and democratization in many countries. From a distance, it seems roughly predictive of other cases (e.g. factory farming, genocide), although there are some cases that it seems to get wrong (e.g. it's not clear to me what the economic incentives for decolonization were). It also seems less predictively useful when incentives seem balanced enough that predictions are ambiguous.
  • I'd be surprised but not shocked if I changed my mind about any given core conclusion. Maybe 30%? (Overall probability of reversing one core conclusion would depend on how narrowly we're thinking of "core conclusion.")
    • The main way that it seems like I could be wrong would be something like "under the right circumstances, social values are more influential than strong economic incentives."
  • I'd be shocked if it turned out that social values are of dominant importance, and economic motives don't matter much, for bringing about political inclusion/exclusion. That would require explaining away lots of historical evidence. 8%?
  • My median expectation is that I'd roughly keep the core conclusions and framework, add additional factors that contribute to one outcome or the other (additional ways in which political actors can be economically incentivized to support inclusion/exclusion), and change lots of finer details.
Comment by Mauricio on What Helped the Voiceless? Historical Case Studies · 2020-10-17T03:32:44.343Z · EA · GW

Hi Linch, thanks so much! I'll reply to your first several bullet points here.

Good point about making the shorter version a separate post. I might do that.

At the high level, how has your opinions on political inclusion/exclusion changed as a result of doing this research? […] Any high-level takeaways [. . .] ?

I don't think I had very clear/precise opinions on political inclusion/exclusion before this research. But here's some high-level takeaways/ways in which I changed my mind:

  • Theory of change/heuristics about what kinds of things drive political progress:
    • I'm no longer particularly optimistic about value changes on their own when other big incentives are at play, but I'm now fairly optimistic about them when such incentives are absent.
    • I'm now optimistic about looking for clever political strategies, e.g. a policy you can advocate that divides the opposition, or a policy that would spread internationally through a positive feedback loop. (Before, I hadn't considered this option much.)
  • Methodology:
    • My original plan had been to try to predict future moral circle expansion (MCE) by graphing historical trends in MCE, and naively extrapolating them. I'm glad I ended up looking for causal explanations instead, since these helped me figure out when it would be useful, and when it would be misleading, to extrapolate past trends in MCE.
    • Before looking at these case studies, I spent a lot (~40%?) of my research time reading up on various more theoretical fields that seemed relevant (e.g. psych, IR). They ended up being a lot less helpful than I had expected. If I were to do a similar research project, I'd first look into case studies, and then decide which other sub-fields (if any) would be useful (since then, I'd have a better sense of what info and ideas would be helpful).
    • I found mentorship (which took the form of weekly memos for and chats with Prof. Weinstein, as well as initially creating a list of readings for each week) really helpful for time management, research design, and exposure to a different perspective.
      • [edit: Having a mentor who isn't involved in EA seems like it was especially helpful for getting "common knowledge" that isn't very common in EA, especially the idea that most civil rights movements have been driven by marginalized groups themselves. Still, I'd guess EA-involved mentors would be especially valuable for researchers who are relatively new to EA ideas or are researching topics that mostly just interest EAs.]
    • Over the course of this research, I drifted somewhat from my original research goals, maybe due to a mix of forgetting them, locally optimizing, and letting myself be too influenced by my mentor/mistaking my research proposal for my goals. This seems to have worked out fine, but in the future I'd write out my goals, and regularly (each week?) adjust what I'm doing to better meet them.
    • My research reinforced my thinking that, for learning about general trends and why things happened, reading from political scientists and economists is often more useful than reading from historians.
    • I was surprised by the predictive power (especially in Economic Origins of Dictatorship and Democracy) of assuming that organized interests mostly act rationally, with the goal of advancing their own economic interests. This change of mind made me adopt it as a core assumption of my model.
    • Looking at how similar things happened in many different countries seems to have been helpful for having a better-informed idea of what trends are general trends.
Comment by Mauricio on What Helped the Voiceless? Historical Case Studies · 2020-10-13T20:19:29.123Z · EA · GW

Hi Michael, thanks for your comment and for sharing this post!

I chose this topic a little before your post came out, so I probably would have researched this anyway. I did find your post encouraging :)