Posts

Giving and receiving feedback 2020-09-07T07:24:33.941Z
What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? 2020-08-13T09:15:39.622Z
Max_Daniel's Shortform 2019-12-13T11:17:10.883Z
When should EAs allocate funding randomly? An inconclusive literature review. 2018-11-17T14:53:38.803Z

Comments

Comment by max_daniel on Khorton's Shortform · 2021-01-26T10:01:37.706Z · EA · GW

I really like the idea of working on a women's issue in a global context.

Me too. I'm also wondering about the global burden of period pain, and the tractability of reducing it. Similar to menopause (and non-gender-specific issues such as ageing), one might expect this to be neglected because of an "it's natural and not a disease, and so we can't or shouldn't do anything about it" fallacy.

Comment by max_daniel on Training Bottlenecks in EA (professional skills) · 2021-01-19T12:51:38.539Z · EA · GW

I'd love to hear any advice from how that charity decided which courses would be best for people to do! Also whether there are any specific ones you recommend (if any are applicable in the UK). 

I'm afraid that I'm not aware of specific courses that are also offered in the UK. 

I think that generally the charity actually didn't do a great job at selecting the best courses among the available ones. However, my suspicion is that conditional on having selected an appropriate topic there often wasn't actually that much variance between courses, because most of the benefits come from some generic effect of "deliberately reflecting on and practicing X", with it not being that important how exactly this was done. (Perhaps similar to psychotherapy.)

For courses where all participants were activists from that same charity, I suspect a significant source of benefits was also just collaborative problem solving, and sharing experiences and getting peer advice from others who had faced similar problems.

Another observation is that these courses often involved in-person conversations in small groups, were quite long in total (2 hours to 2 days), and made significant use of physical media (e.g. people writing ideas on sheets of paper, which were then pinned to a wall). By contrast, in my "EA experience" similar things have been done by people spending at most one hour writing in a joint Google doc. I personally find the "non-virtual" variant much more engaging, but I don't know to what extent this is idiosyncratic.

Comment by max_daniel on Training Bottlenecks in EA (professional skills) · 2021-01-18T20:35:54.664Z · EA · GW

Some other hypotheses for what's going on:

  • Perhaps "learning by doing" is generally more effective than trying to improve skills via 'free-floating' workshops or other activities, and EA orgs are better at understanding this.
  • Perhaps low staff retention rates make some EA orgs reluctant to invest into the development of their staff because they worry they won't internalize the benefits.
  • Perhaps EA is culturally too arrogant, i.e. too indiscriminately convinced that it can do better than the rest of the world (which may in fact be true for, say, identifying high-impact donation targets - but this doesn't necessarily generalize).
  • Perhaps there is a cultural difference I'm not aware of. (The student org I mentioned was German, EA's culture is more influenced by the US/UK/international.)
  • Perhaps professional development is valuable as an organizational function primarily in contexts where staff aren't intrinsically motivated to self-improve, and perhaps EAs tend to have that intrinsic motivation anyway.

Comment by max_daniel on Training Bottlenecks in EA (professional skills) · 2021-01-18T20:28:50.219Z · EA · GW

Thank you for sharing your thoughts on this. I agree this is an important topic with potential room for significant improvements.

FWIW, my impression is that I've benefited significantly from both courses and reading books (though of course it's hard to attribute counterfactual impact), particularly on interpersonal skills, leadership, and self/time management.

One observation I find quite striking is that in previous communities and organizations I encountered such training opportunities significantly more often, and felt they were generally more appreciated, than in EA. 

Specifically, during my university years I was involved in a student-run nonprofit, and this nonprofit - while naturally  less 'professional' and less well run than the typical EA organization in various ways - spent significant resources on training and furthering the education of the student activists running it.

These efforts included two yearly events with workshops (with both internal and external facilitators/instructors) for the org's leadership and for all org members/activists, respectively; a member of the executive board whose key responsibilities included promoting the professional development of activists; and a fuzzier, cultural appreciation for such matters that led people to frequently sign up for external workshops, apply for grants or external mentorship schemes, etc.

Now, the actual mission of that organization was to promote higher education in post-conflict regions, and today for the broad purpose of improving lives in poor countries I'd donate to any GiveWell-recommended charity over that one in the blink of an eye. But for the purpose of improving my own skills, I think I'd seriously consider going back. In fact, I sadly think that in many ways that organization did a better job at promoting the professional development of its Europe-based student activists than at actually helping people in its target countries.

I was part of that organization for roughly as long as I've been into EA. At that org, I participated in countless workshops on things like time management, leadership styles, active listening, how to give and receive feedback, project management, impact assessment, risk management, preventing corruption, monitoring & evaluation, and many other things. They were hit-and-miss, and some were largely a waste of time in hindsight. But overall I feel like I've learned a lot, and am grateful for many opportunities to pick up and practice many skills I use every day in my current work in EA.

This changed dramatically when I started to work for EA organizations. With the exception of one CFAR workshop - which I found significantly less useful per unit of time - I don't recall participating in any workshop or training opportunity that tapped into external expertise, and only 1-2 'internal' ones. Nor do training opportunities get brought to my attention nearly as often.

(I'm also glossing over significant within-EA variance here. I've worked for two EA employers, and think that one had a culture significantly more conducive to staff development than the other, even in the absence of externally led workshops.)

One big caveat in this story is that the difference might be largely explained by experience/age. It is to be expected that, e.g., someone's first workshop on project management is more useful than later training (diminishing returns). Perhaps EA employers are correctly perceiving that most employees - even if they're recent graduates - have picked up the basics elsewhere, and that investing into further improvements is no longer worth it.

However, I'm skeptical that this is the full explanation. Overall, this aspect of my experience is one significant reason why I'm generally reluctant to enthusiastically recommend work "in EA" compared to work at institutions/orgs with an established track record of people learning/improving a lot there.

More broadly, my impression is that "professional development" or on-the-job training are explicit functions in most larger companies that have dedicated staff and resources. I haven't seen this in EA, though perhaps this is simply explained by most EA orgs being relatively small.

Comment by max_daniel on What is going on in the world? · 2021-01-18T15:37:53.337Z · EA · GW

Thank you, I found this pretty interesting. Of course no single one-sentence narrative will capture everything that goes on in the world, but in practice we need to reduce complexity and focus, and may implicitly adopt similar narratives anyway, so I found it interesting to reflect on them explicitly.

FWIW, the one that resonates most for me personally was:

  • There are risks to the future of humanity (‘existential risks’), and vastly more is at stake in these than in anything else going on (if we also include catastrophic trajectory changes). Meanwhile the world’s thinking and responsiveness to these risks is incredibly minor and they are taken unseriously.

A lot of the ones appealing to 'weird' issues (acausal trade, quantum worlds, simulations, ...) ring true and important to me, but seem less directly relevant to my actual actions.

My reaction to a lot of the 'generic' ones (externalities, wasted efforts, ...) is something like: "This sounds true, but I'm not sure why I should think I'll be able to do something about this."

Comment by max_daniel on My mistakes on the path to impact · 2021-01-07T11:14:49.266Z · EA · GW

Thanks! I'm not sure if there is a significant difference in how we'd actually make decisions (I mean, on priors there is probably some difference). But I agree that the single heuristic I mentioned above doesn't by itself do a great job of describing when and how much to defer, and I agree with your "counterexamples". (Though note that in principle it's not surprising if there are counterexamples to a "mere heuristic".)

I particularly appreciate you describing the "Role expectations" point. I agree that something along those lines is important. My guess is that if we would have debated specific decisions I would have implicitly incorporated this consideration, but I don't think it was clear to me before reading your comment that this is an important property that will often influence my judgment about how much to defer.

Comment by max_daniel on Some Scattered Thoughts on Operations · 2021-01-06T20:41:18.338Z · EA · GW

Yes, FWIW my guess is that at the current margin this would be good in many places (but of course there is considerable within-EA variance, so it won't be the right marginal change everywhere and in every situation).

Comment by max_daniel on AGB's Shortform · 2021-01-06T18:22:08.296Z · EA · GW

If I understand you correctly, the argument is not "autopoietic systems have persisted for billions of years" but more specifically "so far each new 'type' of such systems has persisted, so we should expect the most recent new type of 'information-based civilization' to persist as well".

This is an interesting argument I hadn't considered in this form.

(I think it's interesting because I think the case that it talks about a morally relevant long future is stronger than for the simple appeal to all autopoietic systems as a reference class. The latter include many things that are so weird - like eusocial insects, asexually reproducing organisms, and potentially even non-living systems like autocatalytic chemical reactions - that the argument seems quite vulnerable to the objection that knowing that "some kind of autopoietic system will be around for billions of years" isn't that relevant. We arguably care about something that, while more general than current values or humans as biological species, is more narrow than that. 

[Tbc, I think there are non-crazy views that care at least somewhat about basically all autopoietic systems, but my impression is that the standard justification for longtermism doesn't want to commit itself to such views.])

However, I have some worries about survivorship bias: If there was a "failed major transition in evolution", would we know about it? Like, could it be that 2 billion years ago organisms started doing sphexual selection (a hypothetical form of reproduction that's as different from previous asexual reproduction as sexual reproduction, but also different from the latter) but that this type of reproduction died out after 1,000 years - and similarly for sphexxual selection, sphexxxual selection, ... ? Such that with full knowledge we'd conclude the reverse of your conclusion above, i.e. "almost all new types of autopoietic systems died out soon, so we should expect information-based civilization to die out soon as well"?

(FWIW my guess is that the answer actually is "our understanding of the history of evolution is sufficiently good that together with broad priors we can rule out at least an extremely high number of such 'failed transitions'", but I'm not sure and so I wanted to mention the possible problem.)

Comment by max_daniel on AGB's Shortform · 2021-01-06T17:01:34.569Z · EA · GW

my coinflip example is more general than you seem to think. Probability theory has conjunctions even outside of simple fixed models, and it's the conjunction, not the fixed model, which is forcing you to have extreme credences. At best, we may be able to define a certain class of events where such credences are 'forbidden' (this could well be what the paper tries to do).

I agree with everything you say in your reply. I think I simply partly misunderstood the point you were trying to make and phrased part of my response poorly. In particular, I agree that extreme credences aren't 'forbidden' in general.

(Sorry, I think it would have been better if I had flagged that I had read your comment and written mine very quickly.)

I still think that the distinction between credence/probabilities within a model and credence that a model is correct is relevant here, for reasons such as:

  • I think it's often harder to justify an extreme credence that a particular model is right than it is to justify an extreme probability within a model.
    • Often when it seems we have extreme credence in a model this just holds "at a certain level of detail", and if we looked at a richer space of models that makes more fine-grained distinctions we'd say that our credence is distributed over a (potentially very large) family of models.
  • There is a difference between an extreme all-things-considered credence (i.e. in this simplified way of thinking about epistemics the 'expected credence' across models) and being highly confident in an extreme credence;
    • I think the latter is less often justified than the former. And again if it seems that the latter is justified, I think it'll often be because an extreme amount of credence is distributed among different models, but all of these models agree about some event we're considering. (E.g. ~all models agree that I won't spontaneously die in the next second, or that Santa Claus isn't going to appear in my bedroom.)
  • When different models agree that some event is the conjunction of many others, then each model will have an extreme credence for some event, but the models might disagree about for which events the credence is extreme.
    • Taken together (i.e. across events/decisions) your all-things-considered credences might therefore look "funny" or "inconsistent" (by the lights of any single model). E.g. you might have non-extreme all-things-considered credence in two events based on two different models that are inconsistent with each other, and each of which rules out one of the events with extreme probability but not the other. (See the sketch after this list for a toy numerical illustration.)
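
To make the "expected credence across models" idea above concrete, here is a minimal sketch (not from the original comment; the two models and all numbers are invented purely for illustration):

```python
# Toy illustration (all numbers invented): all-things-considered credence as a
# mixture of within-model credences, weighted by one's credence in each model.
models = {
    # Each model assigns an extreme probability, but to a *different* event.
    "model_A": {"weight": 0.5, "event_1": 1e-9, "event_2": 0.5},
    "model_B": {"weight": 0.5, "event_1": 0.5, "event_2": 1e-9},
}

def all_things_considered(event):
    """Expected credence across models: sum over m of P(m) * P(event | m)."""
    return sum(m["weight"] * m[event] for m in models.values())

for event in ("event_1", "event_2"):
    print(event, all_things_considered(event))
# Both events get ~0.25 all-things-considered credence, even though each model
# rules one of them out with extreme probability - so the combined credences
# look "inconsistent" by the lights of either single model.
```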

I acknowledge that I'm making somewhat vague claims here, and that in order to have anything close to a satisfying philosophical account of what's going on I would need to spell out what exactly I mean by "often" etc. (Because as I said I do agree that these claims don't always hold!)

Comment by max_daniel on AGB's Shortform · 2021-01-06T09:18:42.640Z · EA · GW

I won't respond to your second/third bullets; as you say it's not a defense of the claim itself, and while it's plausible to me that many conclusions go through on much shorter timelines, I still want to understand the basis for the actual arguments made as best I can. Not least because if I can't defend such arguments, then my personal pitches for longtermism (both to myself and to others) will not include them; they and I will focus on the next e.g. 10,000 years instead. 

To be clear, this makes a lot of sense to me, and I emphatically agree that understanding the arguments is valuable independently from whether this immediately changes a practical conclusion.

Comment by max_daniel on A case against strong longtermism · 2021-01-05T13:42:48.047Z · EA · GW

Just saw this, which sounds relevant to some of the comment discussion here:

We are excited to announce that

@anderssandberg

will give a talk at the OKPS about which kinds of historical predictions are possible and impossible, and where Popper's critique of 'historicism' overshoots its goals.

https://twitter.com/OxfordPopper/status/1343989971552776192?s=20

Comment by max_daniel on AGB's Shortform · 2021-01-05T11:07:11.090Z · EA · GW

I roughly think that there simply isn't very strong evidence for this. I.e. I think it would be mistaken to have a highly resilient large credence in extinction risk eventually falling to ~0.0000001%, humanity or its descendants surviving for a billion years, or anything like that.

[ETA: Upon rereading, I realized the above is ambiguous. With "large" I was here referring to something stronger than "non-extreme". E.g. I do think it's defensible to believe that, e.g. "I'm like 90% confident that over the next 10 years my credence in information-based civilization surviving for 1 billion years won't fall below 0.1%", and indeed that's a statement I would endorse. I think I'd start feeling skeptical if someone claimed there is no way they'd update to a credence below 40% or something like that.]

I think this is one of several reasons for why the "naive case" for focusing on extinction risk reduction fails. (Another example of such a reason is the fact that, for most known hazards, collapse short of extinction seems way more likely than immediate extinction, that as a consequence most interventions affect both the probability of extinction and the probability and trajectory of various collapse scenarios, and that the latter effect might dominate but has unclear sign.)

I think the most convincing response is a combination of the following. Note, however, that the last two points mostly argue that we should be longtermists despite the case for billion-year futures being shaky, rather than defending that case itself.

  • You are correct that within fixed models we can justifiably have extreme credences, e.g. for the probability of a specific result of 30 coin flips. However, I think the case for "modesty" - i.e. not ruling out very long futures - rests largely on model uncertainty, i.e. our inability to confidently identify the 'correct' model for reasoning about the length of the future.
    • For example, suppose I produce a coin from my pocket and ask you to estimate how likely it is that in my first 30 flips I get only heads. Your all-things-considered credence will be dominated by your uncertainty over whether my coin is strongly biased toward heads. Since 30 heads are vanishingly unlikely if the coin is fair, this is the case even if your prior says that most coins someone produces from their pocket are fair: "vanishingly unlikely" here is much stronger (in this case around 2^-30, i.e. roughly one in a billion) than your prior can justifiably be, i.e. "most coins" might defensibly refer to 90% or 99% or 99.99% but not 99.9999999%. (See the toy calculation after this list.)
    • This insight that extremely low credences all-things-considered are often "forbidden" by model uncertainty is basically the point from Ord, Hillerbrand, & Sandberg (2008).
    • Note that I think it's still true that there is a possible epistemic state (and probably even model we can write down now) that rules out very  long futures with extreme confidence. The point just is that we won't be able to get to that epistemic state in practice.
    • Overall, I think the lower bound on the all-things-considered credence we should have in some speculative scenario often comes down to understanding how "fundamental" our model uncertainty is. I.e. roughly: to get to models that have practically significant credence in the scenario in question, how fundamentally would I need to revise my best-guess model of the world?
      • E.g. if I'm asking whether the LHC will blow up the world, or whether it's worth looking for the philosopher's stone, then I would need to revise extremely fundamental aspects of my world model such as fundamental physics - we are justified in having pretty high credences in those.
      • By contrast, very long futures seem at least plausibly consistent with fundamental physics as well as plausible theories for how cultural evolution, technological progress, economics, etc. work.
        • It is here, and for this reason, that points like "but it's conceivable that superintelligent AI will reduce extinction risk to near-zero" are significant.
      • Therefore, model  uncertainty will push me toward a higher credence in a very long future than in the LHC blowing up the world (but even for the latter my credence is plausibly dominated by model uncertainty rather than my credence in this happening conditional on my model of physics being correct).
  • Longtermism goes through (i.e. it looks like we can have most impact by focusing on the long term) on much less extreme time scales than 1 billion years.
    • Some such less extreme time scales have "more defensible" reasons behind them, e.g. outside view considerations based on the survival of other species or the amount of time humanity or civilization have survived so far. The Lindy rule prior you describe is one example.
  • There is a wager for long futures: we can have much more impact if the future is long, so these scenarios might dominate our decision-making even if they are unlikely.
    • (NB I think this is a wager that is unproblematic only if we have independently established that the probability of the relevant futures isn't vanishingly small. This is because of the standard problems around Pascal's wager.)
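
As a toy version of the coin example above (the 99%/1% prior split and the 95% bias are assumptions made up purely for illustration):

```python
# Even a small prior credence that the coin is strongly heads-biased dominates
# the all-things-considered probability of seeing 30 heads in a row.
p_fair = 0.99                      # prior credence that the coin is fair
p_biased = 1 - p_fair              # prior credence that it is heads-biased
p_30_heads_if_fair = 0.5 ** 30     # ~9.3e-10 within the "fair coin" model
p_30_heads_if_biased = 0.95 ** 30  # ~0.21 within a "95% heads" model

p_30_heads = p_fair * p_30_heads_if_fair + p_biased * p_30_heads_if_biased
print(p_30_heads)  # ~0.002 - many orders of magnitude above 2**-30, because
                   # model uncertainty puts a floor under the credence
```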

That all being said, my views on this feel reasonably but not super resilient - like there's "only" a 10% chance I'll have changed my mind about this in major ways in 2 years. I also think there is room for more work on how to best think about such questions (the Ord et al. paper is a great example), e.g. checking that this kind of reasoning doesn't "prove too much" or lead to absurd conclusions when applied to other cases.

Comment by max_daniel on A case against strong longtermism · 2020-12-27T20:49:12.188Z · EA · GW

Honest question being new to EA...  is it not problematic to restrict our attention to possible futures or aspects of futures which are relevant to a single issue at a time?   Shouldn't we calculate Expected Utility over billion year futures for all  current interventions, and set our relative propensity for actions = exp{α * EU } / normalizer ?

Yes, I agree that it's problematic. We "should" do the full calculation if we could, but in fact we can't because of our limited capacity for computation/thinking.

But note that in principle this situation is familiar. E.g. a CEO might try to maximize the long-run profits of her company, or a member of government might try to design a healthcare policy that maximizes wellbeing. In none of these cases are we able to do the "full calculation", albeit by a less dramatic margin than for longtermism.

And we don't think that the CEO's or the politician's effort are meaningless or doomed or anything like that. We know that they'll use heuristics, simplified models, or other computational shortcuts; we might disagree with them which heuristics and models to use, and if repeatedly queried with "why?" both they and we would come to a place where we'd struggle to justify some judgment call or choice of prior or whatever. But that's life - a familiar situation and one we can't get out of.

Comment by max_daniel on A case against strong longtermism · 2020-12-27T20:35:41.626Z · EA · GW

Hi brekels, I think these are fair points. In particular, I think we may be able to agree on the following statement as well as more precise versions of it:

We may not ever be able to align our priors and sufficiently agree on the future, but for the purposes of planning and allocating resources, the discussion around climate change seems significantly more grounded [than the one about e.g. AI safety].

In my view, the key point is that, say, climate change and AI safety differ in degree but not in kind regarding whether we can make probabilistic predictions, should take action now, etc.

In particular, consider the following similarities:

  • I agree that for climate change we utilize extrapolations of current trends such as "if  current data projected forward with no notable intervention, the Earth would be uninhabitable in x years." - But in principle we can do the same for AI safety, e.g. "if Moore's Law continued, we could buy a brain-equivalent of compute for $X in Y years."
    • Yes, it's not straightforward to say what a "brain-equivalent of compute" is, or why this matters. But neither is it straightforward to e.g. determine when the Earth becomes "uninhabitable". (Again, I might concede that the latter notion is in some sense easier to define - my point it just that I don't see a qualitative difference.)
  • You say we haven't yet observed human-level AI. But neither have we observed (at least not directly and on a planetary scale), say, +6 degrees of global warming compared to pre-industrial times. Yes, we have observed anthropogenic climate change, but we've also observed AI systems developed by humans, including specific failure modes (e.g. misspecified rewards, biased training data, or lack of desired generalization in response to distributional shift).
    • In various ways it sounds right to me that we have "more data" on climate change, or that the problem of more severe climate change is "more similar" to current climate change than the problem of misaligned transformative AI is to current AI failure modes. But again, to me this seems like "merely" a difference in degree.

Separately, I think that if we try hard to find the most effective intervention to avoid some distant harm (say, one we think would occur in the year 2100, or even 2050), we will have to confront the "less well-defined" and "more uncertain" aspects of the future anyway, no matter whether the harm we're considering has some relatively well-understood core (such as climate change). 

This is because, whether we like it or not, these less well-defined issues such as the future of technology, governance, economic and political systems, etc., as well as interactions with other, less predictable, issues (e.g. migration, war, inequality, ...) will make a massive difference to how some apparently predictable harm will in fact affect different people, how we in fact might be able to prevent or respond to it etc.

E.g. it's not that much use if I can predict how much warming we'd get by 2100 conditional on a certain amount of emissions (though note that even in this seemingly "well-defined" case a lot hinges on which prior over climate sensitivity we use, since that has a large effect on the a posteriori probability of bad tail scenarios - and how to determine that prior isn't something we can just "read off" from any current observation) if I don't know, even for the year 2050, the state of nuclear fusion, carbon capture and storage, geoengineering, solar cell efficiency, batteries, US-China relations, or whether in the meantime a misaligned AI system has killed everyone.

It seems to me that the alternative, i.e. planning based on just those aspects of the future that seem "well-defined" or "predictable", leads to things like the Population Bomb or Limits to Growth, i.e. things that have a pretty bad track record.

Comment by max_daniel on [Feedback Request] The compound interest of saving lives · 2020-12-22T15:22:14.637Z · EA · GW

Hey Max, I think this is a valid and important line of thought. As you suspect, the basic idea has been discussed, though usually not with a focus on exactly the uncertainties you list.

I'm afraid I don't have time to respond to your questions directly, but here are a couple of links that might be interesting:

Comment by max_daniel on A case against strong longtermism · 2020-12-22T12:42:57.939Z · EA · GW

Popper's ideas seem to have interesting overlap with MIRI's work. 

Yeah, I was also vaguely reminded of e.g. logical induction when I read the summary of Popper's argument in the text Vaden linked elsewhere in this discussion.

Comment by max_daniel on A case against strong longtermism · 2020-12-21T11:43:04.700Z · EA · GW

Regarding Popper's claim that it's impossible to "predict historical developments to the extent to which they may be influenced by the growth of our knowledge":

I can see how there might be a certain technical sense in which this is true, though I'm not sufficiently familiar with Popper's formal arguments to comment in detail.

However, I don't think the claim can be true in the everyday sense (rather than just for a certain technical sense of "predicting") that arguably is relevant when making plans for the future.

For example, consider climate change. It seems clear that between now and, say, 2100 our knowledge will grow in various ways that are relevant: we'll better understand the climate system, but perhaps even more crucially we'll know more about the social and economic aspects (e.g. how people will adapt to a warmer climate, how much emission reduction countries will pursue, ...) and about how much progress we've made with developing various relevant technologies (e.g. renewable energy, batteries, carbon capture and storage, geoengineering, ...).

The latter two seem like paradigm examples of things that would be "impossible to predict" in Popper's sense. But does it follow that regarding climate change we should throw our hands up in the air and do nothing because it's "impossible to predict the future"? Or that climate change policy faces some deep technical challenge?

Maybe all we are doing when choosing between climate change policies in Popper's terms is "predicting that certain developments will take place under certain conditions" rather than "predicting historical developments" simpliciter. But as I said, then this to me just suggests that as longtermists we will be just fine using "predictions of certain developments under certain conditions".

I find it hard to see why there would be a qualitative difference between longtermism (as a practical project) and climate change mitigation which implies that the former is infeasible while the latter is a worthwhile endeavor.

Comment by max_daniel on A case against strong longtermism · 2020-12-21T11:32:03.417Z · EA · GW

The proof [for the impossibility of certain kinds of long-term prediction] is here: https://vmasrani.github.io/assets/pdf/poverty_historicism_quote.pdf

Note that in that text Popper says:

The argument does not, of course, refute the possibility of every kind of social prediction; on the contrary, it is perfectly compatible with the possibility of testing social theories - for example economic theories - by way of predicting that certain developments will take place under certain conditions. It only refutes the possibility of predicting historical developments to the extent to which they may be influenced by the growth of our knowledge.

And that he rejects only

the possibility of a theoretical history; that is to say, of a historical social science that would correspond to theoretical physics.

My guess is that everyone in this discussion (including MacAskill and Greaves) agrees with this, at least as a claim about what's currently possible in practice. On the other hand, it seems uncontroversial that some forms of long-run prediction are possible (e.g. above you've conceded they're possible for some astronomical systems).

Thus it seems to me that the key question is whether longtermism requires the kind of predictions that aren't feasible - or whether longtermism is viable with the sort of predictions we can currently make. And like Flodorner I don't think that mathematical or logical arguments will be much help with that question.

Why can't we be longtermists while being content to "predict that certain developments will take place under certain conditions"?

Comment by max_daniel on A case against strong longtermism · 2020-12-21T11:13:50.796Z · EA · GW

In this example [coin flip] you aren't predicting future knowledge, you're predicting that you'll have knowledge in the future - that is, in one minute, you will know the outcome of the coin flip.

If we're giving a specific probability distribution for the outcome of the coin flip, it seems like we're doing more than that: 

Consider that we would predict that we'll know the outcome of the coin flip in one minute no matter what we think the odds of heads are.

Therefore, if we do give specific odds (such as 50%), we're doing more than just saying we'll know the outcome in the future.

Comment by max_daniel on A case against strong longtermism · 2020-12-20T16:20:43.764Z · EA · GW

(I was also confused by this, and wrote a couple of comments in response. I actually think they don't add much to the overall discussion, especially now that Vaden has clarified below what kind of argument they were trying to make. But maybe you're interested given we've had similar initial confusions.)

Comment by max_daniel on A case against strong longtermism · 2020-12-19T13:37:21.464Z · EA · GW

As even more of an aside, type 1 arguments would also be vulnerable to a variant of Owen's objection that they "prove too little".

However, rather than the argument depending too much on contingent properties of the world (e.g. whether it's spatially infinite), the issue here is that they would depend on the axiomatization of mathematics.

The situation is roughly as follows: There are two different axiomatizations of mathematics with the following properties: 

  • In both of them all maths that any of us are likely to ever "use in practice" works basically the same way.
  • For parallel situations (i.e. assignments of measure to some subsets of some set, which we'd like to extend to a measure on all subsets) there are immeasurable subsets in exactly one of the axiomatizations.

Specifically, for example, for our intuitive notion of "length" there are immeasurable subsets of the real numbers in the standard axiomatization of mathematics (called ZFC here). However, if we omit a single axiom - the axiom of choice - and replace it with an axiom that loosely says that there are weirdly large sets then every subset of the real numbers is measurable. [ETA: Actually it's a bit more complicated, but I don't think in a way that matters here. It doesn't follow directly from these other axioms that everything is measurable, but using these axioms it's possible to construct a "model of mathematics" in which that holds. Even less importantly, we don't totally omit the axiom of choice but replace it with a weaker version.]

I think it would be pretty strange if the viability of longtermism depended on such considerations. E.g. imagine writing a letter to people in 1 million years explaining why you didn't choose to try to help more rather than fewer of them. Or imagine getting such a letter from the distant past. I think I'd be pretty annoyed if I read "we considered helping you, but then we couldn't decide between the axiom of choice and inaccessible cardinals ...".

Comment by max_daniel on A case against strong longtermism · 2020-12-19T13:18:26.119Z · EA · GW

Technical comments on type-2 arguments (i.e. those that aim to show there is no, or no non-arbitrary way for us to identify a particular probability measure.) [Refer to the parent comment for the distinction between type 1 and type 2 arguments.]

I think this is closer to the argument Vaden was aiming to make despite the somewhat nonstandard use of "measurable" (cf. my comment on type 1 arguments for what measurable vs. immeasurable usually refers to in maths), largely because of this part (emphasis mine) [ETA: Vaden also confirms this in this comment, which I hadn't seen before writing my comments]:

But don’t we apply probabilities to infinite sets all the time? Yes - to measurable sets. A measure provides a unique method of relating proportions of infinite sets to parts of itself, and this non-arbitrariness is what gives meaning to the notion of probability. While the interval between 0 and 1 has infinitely many real numbers, we know how these relate to each other, and to the real numbers between 1 and 2.

Some comments:

  • Yes, we need to be more careful when reasoning about infinite sets since some of our intuitions only apply to finite sets. Vaden's ball reshuffling example and the "Hilbert's hotel" thought experiment they mention are two good examples for this.
  • However, the ball example only shows that one way of specifying a measure no longer works for infinite sample spaces: we can no longer get a measure by counting how many instances a subset (think "event") consists of and dividing this by the number of all possible samples because doing so might amount to dividing infinity by infinity.
    • (We can still get a measure by simply setting the measure of any infinite subset to infinity, which is permitted for general measures, and treating something finite divided by infinity as 0. However, that way the full infinite sample space has measure infinity rather than 1, and thus we can't interpret this measure as probability.)
    • But this need not be problematic. There are a lot of other ways for specifying measures, for both finite and infinite sets. In particular, we don't have to rely on some 'mathematical structure' on the set we're considering (as in the examples of real numbers that Vaden is giving) or other a priori considerations; when using probabilities for practical purposes, our reasons for using a particular measure will often be tied to empirical information.
      • For example, suppose I have a coin in my pocket, and I have empirical reasons (perhaps based on past observations, or perhaps I've seen how the coin was made) to think that a flip of that coin results in heads with probability 60% and tails with probability 40%. When reasoning about this formally, I might write down {H, T} as sample space, the set of all subsets as σ-algebra, and the unique measure P with P({H}) = 0.6 and P({T}) = 0.4. (See the sketch after this list for a minimal version of this and of the radioactive-decay example below.)
        • But this is not because there is any general sense in which the set {H, T} is more "measurable" than the set of all sequences of black or white balls. Without additional (e.g. empirical) context, there is no non-arbitrary way to specify a measure on either set. And with suitable context, there will often be a 'natural' or 'unique' measure for either because the arbitrariness is defeated by the context.
      • This works just as well when I have no "objective" empirical data. I might simply have a gut feeling that the probability of heads is 60%, and be willing to e.g. accept bets corresponding to that belief. Someone might think that that's foolish if I don't have any objective data and thus bet against me. But it would be a pretty strange objection to say that me giving a probability of 60% is meaningless, or that I'm somehow not able or not allowed to enter such bets.
      • This works just as well for infinite sample spaces. For example, I might have a single radioactive atom in front of me, and ask myself when it will decay. For instance, I might want to know the probability that this atom will decay within the next 10 minutes. I won't be deterred by the observation that I can't get this probability by counting the number of "points in time" in the next 10 minutes and divide them by the total number of points in time. (Nor should I use 'length' as derived from the structure of the real numbers, and divide 10 by infinity to conclude that the probability is zero.) I will use an exponential distribution - a probability distribution on the real numbers which, in this context, is non-arbitrary: I have good reasons to use it and not some other distribution.
        • Note that even if we could get the probability by counting it would be the wrong one because the probability that the atom decays isn't uniform. Similarly, if I have reasons to think that my coin is biased, I shouldn't calculate probabilities by naive counting using the set {H, T}. Overall, I struggle to see how the availability of a counting measure is important to the question whether we can identify a "natural" or "unique" measure.
    • More generally, we manage to identify particular probability measures to use on both finite and infinite sample spaces all the time, basically any time we use statistics for real-world applications. And this is not because we're dealing with particularly "measurable" or otherwise mathematically special sample spaces, and despite the fact that there are lots of possible probability measures that we could use.
      • Again, I do think there may be interesting questions here: How do we manage to do this? But again, I think these are questions for psychology or philosophy that don't have to do with the cardinality or measurability of sets.
    • Similarly, I think that looking at statistical practice suggests that your challenge of "can you write down the measure space?" is a distraction rather than pointing to a substantial problem. In practice we often treat particular probability distributions as fundamental (e.g. we're assuming that something is normally distributed with certain parameters) without "looking under the hood" at the set-theoretic description of random variables. For any given application where we want to use a particular distribution, there are arbitrarily many ways to write down a measure space and a random variable having that distribution; but usually we only care about the distribution and not these more fundamental details, and so aren't worried by any "non-uniqueness" problem.
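
Here is a minimal sketch of the two examples from the list above - the 60/40 coin and the decaying atom (the half-life value is an assumption chosen only for illustration):

```python
import math

# 1) Finite sample space {H, T} with the empirically motivated measure.
P = {"H": 0.6, "T": 0.4}
assert abs(sum(P.values()) - 1.0) < 1e-12  # a probability measure on {H, T}

# 2) Continuous sample space: time until a radioactive atom decays, modelled
# with an exponential distribution rather than by "counting points in time".
half_life_minutes = 30.0                 # assumed half-life, for illustration
rate = math.log(2) / half_life_minutes   # decay rate (lambda)
p_decay_within_10_min = 1 - math.exp(-rate * 10)
print(p_decay_within_10_min)             # ~0.21 with these parameters
```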

The most viable anti-longtermist argument I could see in the vicinity would be roughly as follows:

  • Argue that there is some relevant contingent (rather than e.g. mathematical) difference between longtermist and garden-variety cases.
    • Probably one would try to appeal to something like the longtermist cases being more "complex" relative to our reasoning and computational capabilities.
    • One could also try an "argument from disagreement": perhaps our use of probabilities when e.g. forecasting the number of guests to my Christmas party is justified simply by the fact that ~everyone agrees how to do this. By contrast, in longtermist cases, maybe we can't get such agreement.
  • Argue that this difference makes a difference for whether we're justified to use subjective probabilities or expected values, or whatever the target of the criticism is supposed to be.

But crucially, I think mathematical features of the objects we're dealing with when talking about common practices in a formal language are not where we can hope to find support for such an argument. This is because the longtermist and garden-variety cases don't actually differ relevantly regarding these features.

Instead, I think the part we'd need to understand is not why there might be a challenge, but how and why in garden-variety cases we're able to overcome that challenge. Only then can we assess whether these - or other - "methods" are also available to the longtermist.

Comment by max_daniel on A case against strong longtermism · 2020-12-18T19:26:56.879Z · EA · GW

Technical comments on type-1 arguments (those aiming to show there can be no probability measure). [Refer to the parent comment for the distinction between type 1 and type 2 arguments.]

I basically don't see how such an argument could work. Apologies if that's totally clear to you and you were just trying to make a type-2 argument. However, I worry that some readers might come away with the impression that there is a viable argument of type 1 since Vaden and you mention issues of measurability and infinite cardinality. These relate to actual mathematical results showing that for certain sets, measures with certain properties can't exist at all.

However, I don't think this is relevant to the case you describe. And I also don't think it can be salvaged for an argument against longtermism. 

First, in what sense can sets be "immeasurable"? The issue can arise in the following situation. Suppose we have some set (in this context the "sample space" - think of the elements as all possible instances of things that can happen at the most fine-grained level), and some measure μ (in this context "probability" - but it could also refer to something we'd intuitively call length or volume) that we would like to assign to some subsets (the subsets in this context are "events" - e.g. the event that Santa Claus enters my room now is represented by the subset containing all instances with that property).

In this situation, it can happen that there is no way to extend this measure to all subsets. 

The classic example here is the real line as base set. We would like a measure that assigns measure |b - a| to each interval [a, b] (the set of real numbers from a to b), thus corresponding to our intuitive notion of length. E.g. the interval [0, 4] should have length 4.

However, it turns out that there is no measure that assigns each interval its length and 'works' for all subsets of the real numbers. I.e. each way of extending the assignment to all subsets of the real line would violate one of the properties we want measures to have (e.g. the measure of an at most countable disjoint union of sets should be the sum of the measures of the individual sets).

Thus we have to limit ourselves to assigning a measure to only some subsets. (In technical terms: we have to use a σ-algebra that's strictly smaller than the full set of all subsets.) In other words, there are some subsets the measure of which we have to leave undefined. Those are immeasurable sets.

Second, why don't I think this will be a problem in this context?

  • At the highest level, note that even if we are in a context with immeasurable sets this does not mean that we get no (probability) measure at all. It just means that the measure won't "work" for all subsets/events. So for this to be an objection to longtermism, we would need a further argument for why specific events we care about are immeasurable - or in other words, why we can't simply limit ourselves to the set of measurable events.
    • Note that immeasurable sets, to the extent that we can describe them concretely at all, are usually highly 'weird'. If you try to google for pictures of standard examples like Vitali sets you won't find a single one because we essentially can't visualize them. Indeed, by design every set that we can construct from intervals by countably many standard operations like intersections and unions is measurable. So at least in the case of the real numbers, we arguably won't encounter immeasurable sets "in practice".
    • Note also that the phenomenon of immeasurable sets enables a number of counterintuitive results, such as the Banach-Tarski theorem. Loosely speaking this theorem says we can cut up a ball into pieces, and then by moving around those pieces and reassembling them get a ball that has twice the volume of the original ball; so for example "a pea can be chopped up and reassembled into the Sun".
      • But usually the conclusion we draw from this is not that it's meaningless to use numbers to refer to the coordinates of objects in space, or that our notion of volume is meaningless and that "we cannot measure the volume of objects" (and to the extent there is a problem it doesn't exclusively apply to particularly large objects - just as any problem relevant to predicting the future wouldn't specifically apply to longtermism). At most, we might wonder whether our model of space as continuous in real-number coordinates "breaks down" in certain edge cases, but we don't think that this invalidates pragmatic uses of this model that never use its full power (in terms of logical implications).
  • Immeasurable subsets are a phenomenon intimately tied to uncountable sets - i.e. ones that are even "larger" than the natural numbers (for instance, the real numbers are uncountable, but the rational numbers are not). This is roughly because the relevant concepts like σ-algebras and measures are defined in terms of countably many operations like unions or sums; and if you "fix" the measure of some sets in a way that's consistent at all, then you can uniquely extend this to all sets you can get from those by taking complements and countable intersections and unions. In particular, if in a countable set you fix the measure of all singleton sets (sets containing just one element), then this defines a unique measure on the set of all subsets. (See the sketch at the end of this list for a minimal illustration.)
    • Your examples of possible futures where people shout different natural numbers involve only countable sets. So it's hard to see how we'd get any problem with immeasurable sets there.
      • You might be tempted to modify the example to argue that the set of possible futures is uncountably infinite because it contains people shouting all real numbers. However, (i) it's not clear if it's possible for people to shout any real number, (ii) even if it is then all my other remarks still apply, so I think this wouldn't be a problem, certainly none specific to longtermism.
        • Regarding (i), the problem is that there is no general way to refer to an arbitrary real number within a finite window of time. In particular, I cannot "shout" an infinite and non-periodic decimal expansion; nor can I "shout" a sequence of rational numbers that converges to the real number I want to refer to (except maybe in a few cases where the sequence is a closed-form function of n).
          • More generally, if utterances are individuated by the finite sequence of words I'm using, then (assuming a finite alphabet) there are only countably many possible utterances I can make. If that's right then I cannot refer to an arbitrary real number precisely because there are "too many" of them.
      • Similarly, the set of all sequences of black or white balls is uncountable, but it's unclear whether we should think that it's contained in the set of all possible futures.
    • More importantly: if there were serious problems due to immeasurable sets - whether with longtermism or elsewhere - we could retreat to reasoning about a countable subset. For instance, if I'm worried that predicting the development of transformative AI is problematic because "time from now" is measured in real numbers, I could simply limit myself to only reasoning about rational numbers of (e.g.) seconds from now.
      • There may be legitimate arguments for this response being 'ad hoc' or otherwise problematic. (E.g. perhaps I would want to use properties of rational numbers that can only be proven by using real numbers "within the proof".) But especially given the large practical utility of reasoning about e.g. volumes of space or probabilities of future events, I think it at least shows that immeasurability can't ground a decisive knock-down argument.
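
As a minimal sketch of the point about countable sets (the geometric weights are an arbitrary example): fixing the measure of every singleton determines the measure of every subset, so no immeasurability can arise.

```python
def p_singleton(n, q=0.5):
    """Example probability mass assigned to the singleton {n}, n = 0, 1, 2, ..."""
    return (1 - q) * q ** n

def measure(subset):
    """Measure of an arbitrary finite subset of the naturals, by additivity."""
    return sum(p_singleton(n) for n in subset)

print(measure({0, 2, 4}))        # every subset gets a well-defined measure
print(measure(set(range(100))))  # ~1.0: (almost) the whole sample space
```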

Comment by max_daniel on A case against strong longtermism · 2020-12-18T19:20:58.135Z · EA · GW

The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. Let me turn the question around on you: Suppose we knew that the time-horizon of the universe was finite, can you write out the sample space, $\sigma$-algebra, and measure which allows us to compute over possible futures?  

I can see two possible types of arguments here, which are importantly different.

  1. Arguments aiming to show that there can be no probability measure - or at least no "non-trivial" one - on some relevant set such as the set of all possible futures.
  2. Arguments aiming to show that, among the many probability measures that can be defined on some relevant set, there is no, or no non-arbitrary way to identify a particular one.

[ETA: In this comment, which I hadn't seen before writing mine, Vaden seems to confirm that they were trying to make an argument of the second rather than the first kind.]

In this comment I'll explain why I think both types of arguments would prove too much and thus are non-starters. In other comments I'll make some more technical points about type 1 and type 2 arguments, respectively.

(I split my points between comments so the discussion can be organized better and people can use up-/downvotes in a more fine-grained way)

I'm doing this largely because I'm worried that to some readers the technical language in Vaden's post and your comment will suggest that longtermism specifically faces some deep challenges that are rooted in advanced mathematics. But in fact I think that characterization would be seriously mistaken (at least regarding the issues you point to). Instead, I think that the challenges either have little to do with the technical results you mention or that the challenges are technical but not specific to longtermism. 

[After writing I realized that the below has a lot of overlap with what Owen and Elliot have written earlier. I'm still posting it because there are slight differences and there is no harm in doing so, but people who read the previous discussions may not want to read this.]

Both types of arguments prove too much because they (at least based on the justifications you've given in the post and discussion here) are not specific to longtermism at all. They would e.g. imply that I can't have a probability distribution over how many guests will come to my Christmas party tomorrow, which is absurd.

To see this, note that everything you say would apply in a world that ends in two weeks, or to deliberations that ignore any effects after that time. In particular, it is still true that the set of these possible 'short futures' is infinite (my housemate could enter the room any minute and shout any natural number), and that these possible futures contain things that, like your example of a sequence of black and white balls, have no unique 'natural' structure or measure (e.g. the collection of atoms in a certain part of my table, or the types of possible items on that table).

So these arguments seem to show that we can never meaningfully talk about the probability of any future event, whether it happens in a minute or in a trillion years. Clearly, this is absurd.

Now, there is a defence against this argument, but I think this defence is just as available to the longtermist as it is to (e.g.) me when thinking about the number of guests at my Christmas party next week. 

This defence is that for any instance of probabilistic reasoning about the future we can simply ignore most possible futures, and in fact only need to reason over specific properties of the future. For instance, when thinking about the number of guests to my Christmas party, I can ignore people shouting natural numbers or the collection of objects on my table - I don't need to reason about anything close to a complete or "low-level" (e.g. in terms of physics) description of the future. All I care about is a single natural number - the number of guests - and each number corresponds to a huge set of futures at the level of physics.

But this works for many if not all longtermist cases as well! The number of people in one trillion years is a natural number, as is the year in which transformative AI is being developed, etc. Whether or not identifying the relevant properties, or the probability measure we're adopting, is harder than for typical short-term cases - and maybe prohibitively hard - is an interesting and important question. But it's an empirical question, not one we should expect to answer by appealing to mathematical considerations around the cardinality or measurability of certain sets.
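
A minimal sketch of this defence (everything here - the guest list, the 70% attendance chance, the "shouted number" - is invented purely for illustration): we reason about one high-level property, a natural number, and each value of that property lumps together a huge set of fine-grained futures.

```python
import random

random.seed(0)
INVITEES = ["Ada", "Ben", "Cleo", "Dan", "Eve"]

def sample_detailed_future():
    """A 'fine-grained' future: who shows up, plus detail we will simply ignore."""
    guests = [p for p in INVITEES if random.random() < 0.7]
    shouted_number = random.randint(0, 10**6)  # irrelevant to the question asked
    return {"guests": guests, "shouted_number": shouted_number}

def guest_count(future):
    """The single high-level property we actually care about."""
    return len(future["guests"])

counts = [guest_count(sample_detailed_future()) for _ in range(10_000)]
for k in range(len(INVITEES) + 1):
    print(k, counts.count(k) / len(counts))  # approximate distribution over the property
```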

Separately, there may be an interesting question about how I'm able to identify the high-level properties I'm reasoning about - whether that high-level property is the number of people coming to my party or the number of people living in a trillion years. How do I know I "should pay attention" only to the number of party guests and not which natural numbers they may be shouting? And how am I able to "bridge" between more low-level descriptions of futures (e.g. a list of specific people coming to the party, or a video of the party, or even a set of initial conditions plus laws of motion for all relevant elementary particles)? There may be interesting questions here, but I think these are questions for philosophy or psychology which in my view aren't particularly illuminated by referring to concepts from measure theory. (And again, they aren't specific to longtermism.)

Comment by max_daniel on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-15T17:11:50.797Z · EA · GW

I thought about this for another minute, and realized one thing that hadn't been salient to me previously. (Though quite possibly it was clear to you, as the point is extremely basic. - It also doesn't directly answer the question about whether we should expect stock returns to exceed GDP growth indefinitely.)

When thinking about whether X can earn returns that exceed economic growth, a key question is what share of those returns is reinvested into X. For example, suppose I now buy stocks that have fantastic returns, but I spend all those returns to buy chocolate. Then those stocks won't make up an increasing share of my wealth. This would only happen if I used the returns to buy more stocks, and they kept earning higher returns than other stuff I own.

In particular, the simple argument that returns can't exceed GDP growth forever only follows if returns are reinvested and 'producing' more of X doesn't have too steeply diminishing returns.

For example, two basic 'accounting identities' from macroeconomics are:

1. $\beta = s / g$
2. $\alpha = r \cdot \beta$

Here, $s$ is the savings rate (i.e. fraction of total income that is saved, which in equilibrium equals investments into capital), $g$ is the rate of economic growth, and $r$ is the rate of return on capital. These equations are essentially definitions, but it's easy to see that (in a simple macroeconomic model with one final good, two factors of production, etc.) $\beta$ can be viewed as the capital-to-income ratio and $\alpha$ as capital's share of income.

Note that from equations 1 and 2 it follows that $r / g = \alpha / s$. Thus we see that r exceeds g in equilibrium/'forever' if and only if $\alpha > s$ - in other words, if and only if (on average across the whole economy) not all of the returns from capital are re-invested into capital.

(Why would that ever happen? Because individual actors maximize their own welfare, not aggregate growth. So e.g. they might prefer to spend some share of capital returns on consumption.)

Analog remarks apply to other situations where a basic model of this type is applicable.
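(To make the algebra concrete, here is a small numeric sketch; all numbers are made up purely for illustration.)

    # Toy check of the identities beta = s/g and alpha = r*beta.
    # All numbers are made up; the point is only that r > g exactly when
    # capital's share of income (alpha) exceeds the savings rate (s).

    def implied_return(s, g, alpha):
        """Return r implied by beta = s/g and alpha = r * beta."""
        beta = s / g             # capital-to-income ratio
        return alpha / beta      # r = alpha / beta = g * alpha / s

    s, g, alpha = 0.10, 0.02, 0.30   # savings rate, growth rate, capital share
    print(implied_return(s, g, alpha))      # 0.06: r = 6% > g = 2%, since alpha > s
    print(implied_return(alpha, g, alpha))  # 0.02: if s equalled alpha, r would equal g

(Of course the identities by themselves say nothing about what determines s, g, and alpha; they only make the accounting relationship explicit.)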

Comment by max_daniel on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-13T22:38:07.787Z · EA · GW

Hmm, my understanding is that the equity premium is the difference between equity returns and bond (treasury bill) returns.

Yes, that's my understanding as well.

Does that tell us about the difference between equity returns and GDP growth?

I don't know, my sense is not directly but I could be wrong. I think I was gesturing at this because I took it as evidence that we don't understand why equities have such high returns. (But then it is an additional contingent fact that these returns don't just exceed bond returns but also GDP growth.)

A priori, would you expect both equities and treasuries to have returns that match GDP growth?

I don't think I'd expect this, at least not with high confidence - but overall I just feel like I don't know how to think about this because I understand too little finance and economics. (In particular, it's plausible to me that there are strong a priori arguments about the relationships between GDP growth, bond returns, and equity returns - I just don't know what they are.)

Comment by max_daniel on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-13T16:31:06.268Z · EA · GW

[Low confidence as I don't really understand anything about finance.]

It sounds right to me that the stock market can't grow more quickly than GDP forever. However, it seems that it has been doing so for decades, and that there is no indication that this will stop very soon - say, within 10 years.

(My superficial impression is that this phenomenon is somewhat surprising a priori, but that there isn't really a consensus for what explains it.)

Therefore, in particular, for the window of time made available by moving spending from now to, say, one year from now, it seems you can earn returns on the stock market that exceed world economic growth.

If we know that this can't continue forever, it seems to me this would be more relevant for the part where I say "future longtermists would invest in the stock market rather than engaging in 'average activities' that earn average returns"  etc. 

More precisely, the key question we need to ask about any longtermist investment-like spending opportunity seems to be: After the finite window of above-average growth from that opportunity, will there still be other opportunities that, from a longtermist perspective, have returns that exceed average economic growth? If yes, then it is important whether the distant returns from investment-like longtermist spending end up with longtermists; if no, then it's not important.

Comment by max_daniel on Careers Questions Open Thread · 2020-12-10T11:33:42.910Z · EA · GW

Hey, great that you're thinking about this at this stage.

I hope that people with more experience in e.g. AI risk work will chime in, but here are a few quick thoughts from someone who did a bachelor's and master's in maths, has done research related to existential risk, and now does project management for an organization doing such research.

  • I think either of maths, physics, or computer science can in principle be very solid degree choices. I could easily see it being the case that the decisive factor for you could be which you feel most interested in right now, or which universities you can get into for these different disciplines.
  • Picking up the last point, I think the choice of university could easily be more important than the choice of subject. You say you want to stay near Zurich, but perhaps there are different universities you could reach from there (e.g. I think Zurich itself has at least two?). On the other hand, don't sweat it. I think that especially in quantitative subjects and at the undergraduate level, university prestige is less important, and at least in the German-speaking area there aren't actually huge differences in the quality of education that are correlated with university prestige.
    • However, this will still be a significant factor, especially in some careers.
  • Similarly, what you do within your degree can easily be more important than its subject. I.e. which courses do you take, which topic do you write your thesis in, etc. In particular, if you're interested in AI risk, there is a lot of advice available on what to prioritize within your degree (see e.g. here + links therein).
  • Finally, what you do outside of your degree can easily be more important than its subject. For example, deep learning - an area highly relevant to AI risk - has very low barriers to entry compared to most areas of maths or physics. It could be good to find out early if you're interested in and good at machine learning, for instance by taking online courses such as this one or working through OpenAI's "Spinning Up" materials. This stuff really doesn't require much prior knowledge; it could be accessible to you even now, or else after the first 1-2 years of study.
  • I don't think that research on AI risk requires a degree in computer science. There are many mathematicians and physicists doing technical AI safety research, and more broadly for reducing AI risk we'll need social scientists, law scholars, policy advisors, and generally a multitude of people with a variety of expertise.
    • Yes, much (though not all) research on technical AI safety involves machine learning. For this, you'll need programming and software engineering skills, and you learn those in a computer science degree. However, you can also learn them in other degrees, or even fairly easily pick them up on the side. In addition, even the programming aspect of machine learning is overall importantly different from traditional programming.
    • On the other hand, for basically all technical AI safety research you'll need maths. You may be able to learn this better in a maths or physics than a CS degree; generally it's easier to move from a more abstract and theoretical background to more applied work than the other way around, so maths may leave most options open.
    • Yes, in maths or physics you'll learn many things that aren't very relevant to the AI risk work you may end up doing, but that's true for computer science as well. E.g. there are probably at most a few niches for how to apply courses on computability theory or databases (both typical CS subjects) to AI. On the other hand, I struggled to think of areas of physics that are clearly totally irrelevant to all AI risk work!
  • Some of the above points apply less if you can access undergraduate degrees that focus specifically on e.g. machine learning. But even then, note that it's very possible to move later from maths or physics to machine learning, but harder the other way around.
  • I've talked a lot about AI risk because you mentioned it, but I wouldn't narrow down on AI risk too quickly. Quantitative degrees leave open a lot of options including, for instance, global priorities research or some of the less explored paths mentioned here and here. Examples of mathematicians who've later done great EA-relevant work that's neither in academic mathematics nor AI risk include Owen Cotton-Barratt and David Roodman.
  • Talking to students doing the degrees you're considering at the universities you're considering can be a good source of information about what the degree is actually like and similar things.
Comment by max_daniel on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-09T23:42:33.537Z · EA · GW

Yes, though to be fair financial investments (and generally everything that won't have most of its total effect soon) need to hit the same narrow target. But perhaps for those the mechanisms by which we might miss the target have been more prominent in recent discussions (value drift, expropriation, etc.).

Comment by max_daniel on "Patient vs urgent longtermism" has little direct bearing on giving now vs later · 2020-12-09T22:46:34.573Z · EA · GW

When reading this, I was initially confused by a potential objection you don't explicitly address. So I thought I'd quickly write up my thoughts in case others have a similar reaction. My guess is you in principle agree with all this (and I think you've in fact hinted at it in several places), and that it's one of the reasons why you say it's ultimately a messy empirical question.

I think my original objection is mostly flawed, but that it does point to some complications that mean that one needs to be a little more careful when deciding which spending on longtermism is 'actually investment'.

Here it is:

Objection. 

  • The longtermism community can enjoy above-average growth for only a finite window of time. (It can at most control all resources, after which its growth equals average growth.)
  • Thus, spending money on growing the longtermism community now rather than later merely moves a transient window of additional resource growth to an earlier point in time.
  • We should be indifferent about the timing of benefits, so this effect doesn't matter. On the other hand, by waiting one year we can earn positive expected returns by (e.g.) investing into the stock market.
  • To sum up, giving later rather than now has two effects: (1) moving a fixed window of additional growth around in time and (2) leaving us more time to earn positive financial returns. The first effect is neutral, the second is positive. Thus, overall, giving later is always better.

This objection is actually based on an objection to an analogous argument about shorttermist spending (which I was recently reminded of by old posts, one by Paul Christiano and one by you and Ben Todd): 

Just as "longtermism has outperformed the stock market", so have, for instance, recipients of GiveDirectly's unconditional cash transfers. They buy things like iron roofs, and this has much higher investment returns than the stock market.

More broadly, you might think that most shorttermist spending is 'actually investment' as well: as Christiano puts it, "When I support the world’s poorest people I’m not just alleviating their suffering, I’m increasing the productivity of their lives. The recipients of aid go on to contribute to the world, and their contributions compound in turn." 

Why doesn't this show that shorttermist spending is better done now rather than later? 

Because of an objection parallel to the above!

The spending will "diffuse throughout the economy" and its investment returns will over time approach the average growth rate. (Again, one way to see this is that, e.g., the poor Ugandan household receiving a cash transfer can't have above-average growth forever: else it would eventually control the whole world economy!)

So by giving now rather than later you only move a fixed window of transient above-average returns to an earlier point in time. After that window, your spending will earn average returns (i.e. ~growth of gross world product). But if you delay the start of this whole process, you gain time in which you can earn above-average returns by e.g. investing into the stock market.

Thus, unless some assumptions of this simple model are wrong or there are other considerations, it's better to give later!
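(Here is a toy numeric sketch of the objection's arithmetic; the growth rates, window length, and horizon are all made up for illustration.)

    # Toy model behind the objection: spending buys a transient window of
    # above-average returns, after which resources grow at the average rate.
    # All numbers are made up.

    g = 1.02   # average growth factor (gross world product) per year
    r = 1.07   # stock market growth factor per year
    R = 1.30   # growth factor during the transient above-average window
    T = 10     # length of the window in years
    N = 100    # total horizon in years

    # Option A: spend now -> R for T years, then g for the remaining years.
    spend_now = R**T * g**(N - T)

    # Option B: hold stocks for 1 year, then spend -> r for 1 year,
    # then R for T years, then g for the rest.
    wait_one_year = r * R**T * g**(N - T - 1)

    print(wait_one_year / spend_now)  # = r / g ≈ 1.049 > 1, so waiting wins

On this model, waiting always wins by a factor of r/g per year of delay - which is exactly the objection. The rebuttal below amounts to saying that, for the longtermist, the post-window growth rate is r rather than g (because the resulting resources remain controlled by your goals and can themselves be invested), in which case the two options come out roughly equal and the advantage of waiting disappears.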

---

Rebuttal of the objection

To be clear, I think the analog objection to the analog argument for typical examples of shorttermist spending such as donations to GiveDirectly does work. (The objection doesn't settle the debate because there are other considerations; but it does rebut the original argument.)

What is different about the longtermist case?

The key question is whether, after the finite window of above-average growth, the resources "descended" from your spending will still be "controlled by" your goals.

This is the case for a longtermist that now spends money that increases the number of longtermists in the future. In particular, once it's no longer possible to get outsized investment returns from growing longtermism, future longtermists would invest in the stock market rather than engaging in 'average activities' that earn average returns. (Or more likely they'd pursue some other 'investment-like' spending activity that has lower returns than growing longtermism now but higher returns than the stock market then.) So you effectively don't "lose" the opportunity to earn above-average returns on the stock market by first investing into longtermism.

By contrast, the shorttermist who gives to GiveDirectly should expect the resources "descended" from her giving to eventually end up with the "average market subject": the cash transfer recipient buys an iron roof, the iron roof vendor buys new tools, the tool manufacturer pays taxes, etc. Thus the resources will be held by average people who engage in average activities that earn average returns - rather than investing everything into the stock market, or pursuing some other activity selected for maximizing the shorttermist's values (e.g. perhaps after a finite window of above-average returns from the cash transfer it's still the case that, by the shorttermist's lights, you could get "investment returns" from buying bednets that exceed stock market returns - but the "average person" won't donate to AMF either). So unlike the longtermist, by donating to GiveDirectly now, the shorttermist does "lose" the opportunity to earn above-average returns on e.g. the stock market.

(On the other hand, the objection won't work for shorttermist spending that's mostly "meta". For instance, donations to animal welfare organizations also pay for "research or career development or book-writing or websites or community-building", or even just additional vegans that convince others of veganism without additional effort, etc.)

I think this shows that the availability of stock-market-beating longtermist spending opportunities is a significantly weaker argument than it might seem at first glance.

  • First, the size of the effect is less dramatic. You won't earn higher returns forever, you just optimize the relative timing of higher-return and lower-return periods by frontloading the higher-return period.
  • Second, the target you need to hit is arguably pretty narrow. The objection only applies conclusively to things that basically create cause-agnostic, transferable resources that are allocated at least as well as if allocated by your future self. If resources are tied to a particular cause area, are not transferable, or are more poorly allocated, they count less. For example, all of the following arguably don't count as 'actually investment' (at least not fully), and instead are much more similar to the shorttermist donating now to GiveDirectly:
    • Growing the number of AI safety researchers more quickly than stock market investments, if most of these AI safety researchers wouldn't be willing or able to switch their careers to, say, climate change mitigation if it turned out that was much higher impact.
    • Growing the number of longtermism-relevant research results more quickly than stock market investments, unless these research results generate transferable and cause-agnostic resources such as money held by longtermists.
    • Growing the number of sort-of-longtermists if they have a bias toward spending rather than investing - for instance if they wouldn't be willing to invest all their resources for potentially thousands of years or more into a super-long-term investment fund if that looked like the best option.
    • Growing the number of resources aligned with relatively common-sensical goals like "taking global challenges seriously" or "generally-sensible action", if you believe that the best longtermist spending will eventually be super weird (perhaps "tile the universe with hedonium").
    • Acquiring resources (longtermists, research results, etc.) that 'perish' more quickly than money held by yourself. For instance this would be the case if research gets "forgotten" too quickly or new longtermists have higher rates of value drift than yourself.

(There are also a number of other ways in which the objection can fail, which admittedly at first glance seem more likely for longtermist spending than for donations to GiveDirectly: e.g. if the timing of your spending influences the length of the period of above-average returns or the future average growth rate. 

E.g. suppose it was the case that if you start to grow longtermism now, eventually 10% of the world's population will be longtermist, but if you start to grow longtermism only in 50 years then growth will max out at 5% of the world population; this would push toward spending now. On the other hand, the effect could also turn out to work the other way around and favor spending later!)

Comment by max_daniel on My mistakes on the path to impact · 2020-12-09T17:35:55.549Z · EA · GW

I agree it's possible that because of social pressures or similar things the best policy change that's viable in practice could be an indiscriminate move toward more or less epistemic deference. Though I probably have less of a strong sense that that's in fact true.

(Note that when implemented well, the "best of both worlds" policy could actually make it easier to express disagreement because it clarifies that there are two types of beliefs/credences to be kept track of separately, and that one of them has to exclude all epistemic deference.

Similarly, to the extent that people think that avoiding 'bad, unilateral action' is a key reason in favor of epistemic deference, it could actually "destigmatize" iconoclastic views if it's common knowledge that an iconoclastic pre-deference view doesn't imply unusual primarily-other-affecting actions because primarily-other-affecting actions depend on post-deference rather than pre-deference views.)

I agree with everything you say about $5M grants and VCs. I'm not sure if you think my mistake was mainly to consider a $5M stake a "large-scale" decision or something else, but if it's the former I'm happy to concede that this wasn't the best example to give for a decision where deference should get a lot of weight (though I think we agree that in theory it should get some weight?).

Comment by max_daniel on My mistakes on the path to impact · 2020-12-09T17:24:01.927Z · EA · GW

One disagreement I have with Max is whether someone should defer is contingent upon the importance of a decision. I think this begs the question in that it pre-assumes that deference leads to the best outcomes.

Instead, I think you should act such that you all-things-considered-view is that you're making the best decision. I do think that for many decisions (with the possible exception of creative work), some level of deference leads to better outcomes than zero deference at all, but I don't think it's unusually true for important decisions except inasmuch as a) the benefits (and also costs!) of deference are scaled accordingly and b) more people are likely to have thought about important decisions.

I'm not sure if we have a principled disagreement here, it's possible that I just described my view badly above.

I agree that one should act such that one's all-things-considered view is that one is making the best decision (the way I understand that statement it's basically a tautology).

Then I think there are some heuristics for which features of a decision situation make it more or less likely that deferring more (or at all) leads to decisions with that property. I think on a high level I agree with you that it depends a lot "on the context of what this information is for", more so than on e.g. importance.

With my example, I was also trying to point less to importance per se and more to something like how the costs and benefits are distributed between yourself and others. This is because, very loosely speaking, I expect not deferring to often be better if the stakes are concentrated on oneself and more deference to be better if one's own direct stake is small. I used a decision with large effects on others largely because then it's not plausible that you yourself are affected by a similar amount; but it would also apply to a decision with zero effect on yourself and a small effect on others. Conversely, it would not apply to a decision that is very important to yourself (e.g. something affecting your whole career trajectory).

Comment by max_daniel on My mistakes on the path to impact · 2020-12-09T17:09:39.784Z · EA · GW

I think I perceive less of a difference between the examples we've been discussing, but after reading your reply I'm also less sure if and where we disagree significantly. 

I read your previous claim as essentially saying "it would always be bad to include the information that some person X is skeptical about MIRI when making the decision whether to give MIRI a $5M grant, unless you understand more details about why X has this view".

I still think this view basically commits you to refusing to see information of that type in the COVID policy thought experiment. This is essentially for the reasons (i)-(iii) I listed above: I think that in practice it will be too costly to understand the views of each such person X in more detail. 

(But usually it will be worth it to do this for some people, for instance for the reason spelled out in your toy model. As I said: I do think it will often be even more valuable to understand someone's specific reasons for having a belief.)

Instead, I suspect you will need to focus on the few highest-priority cases, and in the end you'll end up with people $X_1$ whose views you understand in great detail, people $X_2$ where your understanding stops at other fairly high-level/top-line views (e.g. maybe you know what they think about "will AGI be developed this century?" but not much about why), and people $X_3$ of whom you only know the top-line view of how much funding they'd want to give to MIRI.

(Note that I don't think this is hypothetical. My impression is that there are in fact long-standing disagreements about MIRI's work that can't be fully resolved or even broken down into very precise subclaims/cruxes, despite many people having spent probably hundreds of hours on this. For instance, in the writeups to their first grants to MIRI, Open Phil remark that "We found MIRI’s work especially difficult to evaluate", and the most recent grant amount was set by a committee that "average[s] individuals’ allocations". See also this post by Open Phil's Daniel Dewey and comments.)

At that point, I think you're basically in a similar situation. There is no gun pointed at your head, but you still want to make a decision right now, and so you can either throw away the information about the views of person $X_3$ or use it without understanding their arguments.

Furthermore, I don't think your situation with respect to person $X_2$ is that different: if you take their view on "AGI this century?" into account for the decision whether to fund MIRI but have a policy of never using "bare top-level views", this would commit you to ignoring the same information in a different situation, e.g. the decision whether to place a large bet on whether AGI will be developed this century (purely because what's a top-level view in one situation will be an argument or "specific" fact in another); this seems odd.

(This is also why I'm not sure I understand the relevance of your point on hierarchical organizations. I agree that usually sub-problems will be assigned to different employees. But e.g. if I assign "AGI this century?" to one employee and "is MIRI well run?" to another employee, why am I justified in believing their conclusions on these fairly high-level questions but not justified in believing anyone's view on whether MIRI is worth funding?)

Note that thus far I'm mainly arguing against a policy of taking no-one's top-level views into account. Your most recent claim involving "the people I think are smartest" suggests that maybe you mainly object to using a lot of discretion in deciding which particular people's top-level views to use.

I think my reaction to this is mixed: On one hand, I certainly agree that there is a danger involved here (e.g. in fact I think that many EAs defer too much to other EAs relative to non-EA experts), and that it's impossible to assess with perfect accuracy how much weight to give to each person. On the other hand, I think it is often possible to assess this with limited but still useful accuracy, both based on subjective and hard-to-justify assessments of how good someone's judgment seemed in the past (cf. how senior politicians often work with advisors they've had a long work relationship with) and on crude objective proxies (e.g. 'has a PhD in computer science').

On the latter, you said that specifically you object to allocating weight to someone's top-line opinion "separately from your evaluation of their finer-grained sub-claims". If that means their finer-grained sub-claims on the particular question under consideration, then I disagree for the reasons explained so far. If that means "separately from your evaluation of any finer-grained sub-claim they ever made on anything", then I agree more with this, though still think this is both common and justified in some cases (e.g. if I learn that I have rare disease A for which specialists universally recommend drug B as treatment, I'll probably happily take drug B without having ever heard of any specific sub-claim made by any disease-A specialist).

Similarly, I agree that information cascades and groupthink are dangers/downsides, but that they will sometimes be outweighed by the benefits.

Comment by max_daniel on My mistakes on the path to impact · 2020-12-08T10:28:41.649Z · EA · GW

I think we disagree. I'm not sure why you think that even for decisions with large effects one should only or mostly take into account specific facts or arguments, and am curious about your reasoning here.

I do think it will often be even more valuable to understand someone's specific reasons for having a belief. However, (i) in complex domains achieving a full understanding would be a lot of work, (ii) people usually have incomplete insight into the specific reasons for why they hold a certain belief themselves and instead might appeal to intuition, (iii) in practice you only have so much time and thus can't fully pursue all disagreements.

So yes, always stopping at "person X thinks that p" and never trying to understand why would be a poor policy. But never stopping at that seems infeasible to me, and I don't see the benefits from always throwing away the information that X believes p in situations where you don't fully understand why.

For instance, imagine I pointed a gun to your head and forced you to now choose between two COVID mitigation policies for the US for the next 6 months. I offer to give you additional information of the type "X thinks that p" with some basic facts on X but no explanation for why they hold this belief. Would you refuse to view that information? If someone else was in that situation, would you pay for me not giving them this information? How much?

There is a somewhat different failure mode where person X's view isn't particularly informative compared to the view of other people Y, Z, etc., and so by considering just X's view you give it undue weight. But I don't think you're talking about that?

I'm partly puzzled by your reaction because the basic phenomenon of deferring to the output of others' reasoning processes without understanding the underlying facts or arguments strikes me as not unusual at all. For example, I believe that the Earth orbits the Sun rather than the other way around. But I couldn't give you any very specific argument for this like "on the geocentric hypothesis, the path of this body across the sky would look like this". Instead, the reason for my belief is that the heliocentric worldview is scientific consensus, i.e. epistemic deference to others without understanding their reasoning.

This also happens when the view in question makes a difference in practice. For instance, as I'm sure you're aware, hierarchical organizations work (among other things) because managers don't have to recapitulate every specific argument behind the conclusions of their reports.

To sum up, a very large amount of division of epistemic labor seems like the norm rather than the exception to me, just as for the division of manual labor. The main thing that seems somewhat unusual is making that explicit.

Comment by max_daniel on My mistakes on the path to impact · 2020-12-07T22:38:19.179Z · EA · GW

I'm somewhat sympathetic to the frustration you express. However, I suspect the optimal response isn't to be more or less epistemically modest indiscriminately. Instead, I suspect the optimal policy is something like:

  • Always be clear and explicit to what extent a view you're communicating involves deference to others.
  • Depending on the purpose of a conversation, prioritize (possibly at different stages) either object-level discussions that ignore others' views or forming an overall judgment that includes epistemic deference.
    • E.g. when the purpose is to learn, or to form an independent assessment of something, epistemic deference will often be a distraction.
    • By contrast, if you make a decision with large-scale and irreversible effects on the world (e.g. "who should get this $5M grant?") I think it would usually be predictably worse for the world to ignore others' views.
      • On the other hand, I think literally everyone using the "average view" among the same set of people is suboptimal even for such purposes: it's probably better to have less correlated decisions in order to "cover the whole space" of reasonable options. (This might seem odd on some naive models of decision-making, but I think it can easily be justified by plausible toy models involving heavy-tailed ex-post returns that are hard to predict ex-ante plus barriers to coordination or undetectable but different biases. I.e. similar to why it would probably be bad if every VC based their investment decisions on the average view of all VCs.)

I think this way one can largely have the best of both worlds.

(I vaguely remember that there is a popular post making a similar recommendation, but couldn't quickly find it.)

Comment by max_daniel on Lotteries for everything? · 2020-12-05T00:53:01.803Z · EA · GW

I looked into using lotteries for funding a few years ago. I didn't really come to a strong conclusion either way, but some of the background considerations described in the post might still be interesting.

Comment by max_daniel on If someone identifies as a longtermist, should they donate to Founders Pledge's top climate charities than to GiveWell's top charities? · 2020-11-26T14:34:20.054Z · EA · GW

Also, I'm not sure how the donation lottery is a good opportunity from a long-term impact perspective. If I were a pure longtermist I would just trust the EA LTFF

I agree that asking whether oneself expects to make higher-impact grants than EA Funds is a key question here.

However, note that you retain the option to give to EA Funds if you win the donor lottery. So in this sense the donor lottery can't be worse than giving to EA Funds directly, unless you think that winning itself impairs your judgment or similar (or causes you to waste time searching for alternatives, or ...).

Also, I do think that at least some donors will be able to make better grants than EA Funds. Yes, EA Fund managers have more grantmaking experience. However, they are also quite time-constrained, and so a donor lottery winner may be able to invest more time per grant/money granted. 

In addition, donors may possess idiosyncratic knowledge that would be too costly to transfer to fund managers. For example, suppose there was a great opportunity to fund biosecurity policy work in the Philippines - it might be more likely that a member of EA Philippines hears about, and is able to evaluate, this opportunity than an EA Funds manager (e.g. because this requires a lot of background knowledge on the country). [This is a hypothetical example to illustrate the idea, I don't want to make a claim that this specifically is likely.]

These points are also explained in more detail in the post on donor lotteries I linked to.

Comment by max_daniel on If someone identifies as a longtermist, should they donate to Founders Pledge's top climate charities than to GiveWell's top charities? · 2020-11-26T14:27:06.210Z · EA · GW

Could you clarify what you mean by "narrow impact perspective"?

That was unclear, sorry. I again meant impact from just the funded charity's work. As opposed to effects on the motivation or ability to acquire resources of the donor, etc.

Comment by max_daniel on If someone identifies as a longtermist, should they donate to Founders Pledge's top climate charities than to GiveWell's top charities? · 2020-11-26T09:57:31.548Z · EA · GW

[Giving just my impression before updating on others people's views.]

Very briefly:

  • I think donating to GiveWell top charities (and more generally donating to charities that have been selected not primarily for their long-term effects) clearly doesn't maximize long-term impact, at least at first glance. I think this is shown by arguments such as the following:
  • In some cases, there may be reasons other than the long-term effect of the funded charity's work in favor of giving to GiveWell charities. For example, perhaps this better maintains someone's motivation and altruism, thus increasing the long-term impact of their non-donation activities. Or perhaps this will better allow them to share their excitement for effective altruism with others, thus allowing them to acquire more resources, including for long-term causes.
    • However, I'm skeptical that these reasons are often decisive, except maybe in some extremely idiosyncratic cases.
  • I don't have much of a view on FP's climate change charities in particular. My best guess is they are higher-impact than GiveWell charities from a long-term perspective. However, I'd also guess there are other options that are even better from just a narrow impact perspective. Examples include:
  • It's much more plausible to me that among options that have been selected for having 'reasonably high long-term impacts' "secondary" considerations such as the ones mentioned above can be decisive (i.e. effect on motivation or ability to promote EA, etc.).
Comment by max_daniel on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-11-23T14:57:54.801Z · EA · GW

Yes, I meant "less than expected". 

Among your three points, I believe something like 1 (for an appropriate reference class to determine "typical", probably something closer to 'early-stage fields' than 'all fields'). Though not by a lot, and I also haven't thought that much about how much to expect, and could relatively easily be convinced that I expected too much.

I don't think I believe 2 or 3. I don't have much specific information about assumptions made by people who advocated for or funded macrostrategy research, but a priori I'd find it surprising if they had made these mistakes to a strong extent.

Comment by max_daniel on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-11-23T12:13:23.243Z · EA · GW

Done, thanks for reminding me :)

Comment by max_daniel on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-11-23T12:12:37.639Z · EA · GW

My quick take:

  • I agree with other answers that in terms of "discrete" insights, there probably wasn't anything that qualifies as "major" and "novel" according to the above definitions.
  • I'd say the following were the three major broader developments, though unclear to what extent they were caused by macrostrategy research narrowly construed:
    • Patient philanthropy: significant development of the theoretical foundations and some practical steps (e.g. the Founders Pledge research report on potentially setting up a long-term fund).
      • Though the idea and some of the basic arguments probably aren't novel, see this comment thread below.
    • Reduced emphasis on a very small list of "top cause areas". (Visible e.g. here and here, though of course there must have been significant research and discussion prior to such conclusions.)
    • Diversification of AI risk concerns: less focus on "superintelligent AI kills everyone after rapid takeoff because of poorly specified values" and more research into other sources of AI risk.
      • I used to think there was less actual (as opposed to publicly visible) change, and that to the extent there was change, it was less due to new research. But it seems that a perception of significant change is more common.

In previous personal discussions, I think people have made fair points around my bar maybe being generally unreasonable. I.e. it's the default for any research field that major insights don't appear out of nowhere, and that it's almost always possible to find similar previous ideas: in other words, research progress is the cumulative effect of many small new ideas and refinements of them.

I think this is largely correct, but that it's still correct to update negatively on the value of research if past progress has been less good in terms of being major and novel. However, overall I'm now most interested in the sort of question asked here in order to better understand what kind of progress we're aiming for, rather than to assess the total value of a field.

FWIW, here are some suggestions for potential "major and novel" insights others have made in personal communication (not necessarily with a strong claim made by the source that they meet the bar, also in some discussions I might have phrased my questions a bit differently):

  • Nanotech / atomically precise manufacturing / grey goo isn't a major x-risk
    • [NB I'm not sure that I agree with APM not being a major x-risk, though 'grey goo' specifically may be a distraction. I do have the vague sense that some people in, say, the 90s or until the early 2010s were more concerned about APM than the typical longtermist is now.]
    • My comments were: 
      • "Hmm, maybe though not sure. Particularly uncertain whether this was because new /insights/ were found or just due to broadly social effects and things like AI becoming more prominent?"
      • "Also, to what extent did people ever believe this? Maybe this one FHI survey where nanotech was quite high up the x-risk list was just a fluke due to a weird sample?"
    • Brian Tomasik pointed out: "I think the nanotech-risk orgs from the 2000s were mainly focused on non-grey goo stuff: http://www.crnano.org/dangers.htm"
  • Climate change is an x-risk factor
    • My comment was: "Agree it's important, but is it sufficiently non-obvious and new? My prediction (60%) is that if I asked Brian [Tomasik] when he first realized that this claim is true (even if perhaps not using that terminology) he'd point to a year before 2014."
  • We should build an AI policy field
    • My comment was: "[snarky] This is just extremely obvious unless you have unreasonably high credence in certain rapid-takeoff views, or are otherwise blinded by obviously insane strawman rationalist memes ('politics is the mind-killer' [aware that this referred to a quite different dynamic originally], policy work can't be heavy-tailed [cf. the recent Ben Pace vs. Richard Ngo thing]). [/snarky]
    • I agree that this was an important development within the distribution of EA opinions, and has affected EA resource allocation quite dramatically. But it doesn't seem like an insight that was found by research narrowly construed, more like a strategic insight of the kind business CEOs will sometimes have, and like a reasonably obvious meme that has successfully propagated through the community."
  • Surrogate goals research is important
    • My comment was: "Okay, maaybe. But again 70% that if I asked Eliezer when he first realized that surrogate goals are a thing, he'd give a year prior to 2014."
  • Acausal trade, acausal threats, MSR, probable environment hacking
    • My comment was: "Aren't the basic ideas here much older than 5 years, and specifically have appeared in older writings by Paul Almond and have been part of 'LessWrong folklore' for a while?Possible that there's a more recent crisp insight around probable environment hacking -- don't really know what that is."
  • Importance of the offense-defense balance and security
    • My comment was: "Interesting candidate, thanks! Haven't sufficiently looked at this stuff to have a sense of whether it's really major/important. I am reasonably confident it's new."
      • [Actually I'm now a bit puzzled why I wrote the last thing. Seems new at most in terms of "popular/widely known within EA"?]
  • Internal optimizers
    • My comment was: "Also an interesting candidate. My impression is to put it more in the 'refinement' box, but that might be seriously wrong because I think I get very little about this stuff except probably a strawman of the basic concern."
  • Bargaining/coordination failures being important
    • My comment was: "This seems much older [...]? Or are you pointing to things that are very different from e.g. the Racing to the Precipice paper?"
  • Two-step approaches to AI alignment
    • My comment was: "This seems kind of plausible, thanks! It's also in some ways related to the thing that seems most like a counterexample to me so far, which is the idea of a 'Long Reflection'. (Where my main reservation is whether this actually makes sense / is desirable [...].)"
  • More 'elite focus'
    • My comment was: "Seems more like a business-CEO kind of insight, but maybe there's macrostrategy research it is based on which I'm not aware of?"
Comment by max_daniel on Nuclear war is unlikely to cause human extinction · 2020-11-08T00:05:34.509Z · EA · GW

The main reason I wanted to write this post is that a lot of people, including a number in the EA community, start with the conception that a nuclear war is relatively likely to kill everyone, either for nebulous reason or because of nuclear winter specifically.

This agrees with my impression, and I do think it's valuable to correct this misconception. (Sorry, I think it would have been better and clearer if I had said this in my first comment.) This is why I favor work with somewhat changed messaging/emphasis over no work.

It feels like I disagree with you on the likelihood that a collapse induced by nuclear war would lead to permanent loss of humanity's potential / eventual extinction.

I'm not sure we disagree. My current best guess is that most plausible kinds of civilizational collapse wouldn't be an existential risk, including collapse caused by nuclear war. (For basically the reasons you mention.) However, I feel way less confident about this than about the claim that nuclear war wouldn't immediately kill everyone. In any case, my point was not that I in fact think this is likely, but just that it's sufficiently non-obvious that it would be costly if people walked away with the impression that it's definitely not a problem.

I'm planning to follow this post with a discussion of existential risks from compounding risks like nuclear war, climate change, biotech accidents, bioweapons, & others.

This sounds like a very valuable topic, and I'm excited to see more work on it. 

FWIW, my guess is that you're already planning to do this, but I think it could be valuable to carefully consider information hazards before publishing on this [both because of messaging issues similar to the one we discussed here and potentially on the substance, e.g. unclear if it'd be good to describe in detail "here is how this combination of different hazards could kill everyone"]. So I think e.g. asking a bunch of people what they think prior to publication could be good. (I'd be happy to review a post prior to publication, though I'm not sure if I'm particularly qualified.)

Comment by max_daniel on Nuclear war is unlikely to cause human extinction · 2020-11-07T17:00:14.401Z · EA · GW

I agree that nuclear war - and even nuclear winter - would be very unlikely to directly cause human extinction. My loose impression is that other EAs who have looked into this agree as well.

However, I'm not sure if it's good to publicize work on existential risk from nuclear war under this headline, and with this scope. Here is why:

  • You only discuss whether nuclear war would somewhat directly cause human extinction - i.e. by either immediately killing everyone, or causing everyone to starve within, say, the next 20 years. However, you don't discuss whether nuclear war could cause a trajectory change of human civilization that makes it more vulnerable to future existential risk. For example, if nuclear war caused an irrecoverable loss of post-industrial levels of technology, that would arguably constitute an existential catastrophe itself (by basically removing the chance of close-to-optimal futures) and would also make humanity more vulnerable to natural extinction risk (e.g. it could no longer do asteroid deflection). FWIW, I think the example I just gave is fairly unlikely as well; my point here just is that your post doesn't tell us anything about such considerations. It would be entirely consistent with all evidence you present to think that nuclear war is a major indirect existential risk (in the sense just discussed).
  • For this reason, I in particular disagree "that the greatest existential threat from nuclear war appears to be from climate impacts" (as you say in the conclusion). I think that in fact possibly the greatest existential threat from nuclear war is a negative trajectory change precipitated by 'the collapse of civilization', though we don't really know how likely that is or whether this would in fact be negative on extremely long timescales.
  • (Note I'm not intending for this to just be a special case of the true but somewhat vacuous general claim that, for all we know, literally any event could cause a negative or positive trajectory change. The point is that the unprecedented damage caused by large-scale nuclear war seems unusually likely to cause a trajectory change in either direction.)
  • [Less important:] I'm somewhat less optimistic than you about point 3C, i.e. nuclear war planners being aware of nuclear winter. I agree they are aware of the risk. However, I'm not sure if they have incentives to care. They might not care if they view "large-scale nuclear war that causes all our major cities to be destroyed" or "nuclear war that leads to a total defeat by an adversary" as essentially the worst possible outcomes, which seems at least plausible to me. Certainly I think they won't care about the risk as much as a typical longtermist - from an impartial perspective, even a, say, 1% risk of nuclear winter would be very concerning, whereas it could plausibly be a minor consideration when planning nuclear war from a more parochial perspective. Perhaps even more importantly, even if they did care as much as a longtermist, it's not clear to me if the strategic dynamics allow them to adjust their policies. For example, a nuclear war planner may well think that only a 'countervalue' strategy of targeting adversaries' population centers has a sufficient deterrence effect.

So overall I think our epistemic situation is: We know that one type of existential risk from nuclear war is very small, but we don't really have a good idea of how large total existential risk from nuclear war is. It's of course fine, and often a good idea for tractability or presentation reasons, to focus on only one aspect of a problem. But given this epistemic situation, I think the costs of spreading a message that can easily be rounded off to "nuclear war isn't that dangerous [from a longtermist perspective]" are high, particularly since perceptions that nuclear war would be extremely bad may be partly causally responsible for the fact that we haven't yet seen one.

Note I'm not claiming that this post by itself has large negative consequences. No nuclear power is going to change their policies because of an EA Forum post. But I'd be concerned if there was a growing body of EA work with messaging like this. For future public work I'd feel better if the summary was more like "nuclear war wouldn't kill every last human within a few decades, but is still extremely concerning from both a longtermist and present-generation perspective" + some constructive implications (e.g. perhaps focus more on how to make post-collapse recovery more likely or to go well).

Comment by max_daniel on Nuclear war is unlikely to cause human extinction · 2020-11-07T16:21:45.723Z · EA · GW

There are four different spellings of 'Reisner' (which is correct) in this paragraph:

Alan Robock’s group published a paper in 2007 that found significant cooling effects even from a relatively limited regional war. A group from Los Alamos, Reisner et al, published a paper in 2018 that reexamined some of the assumptions that went into Robock et al’s model, and concluded that global cooling was unlikely in such a scenario. Robock et al. responded, and Riesner et al responded to the response. Both authors bring up good points, but I find Rieser’s position more compelling. This back and forth is worth reading for those who want to investigate deeper. Unfortunately Reiser’s group has not published an analysis on potential cooling effects from a modern full scale nuclear exchange, rather than a limited regional exchange. Even so, it’s not hard to extrapolate that Reiser’s model would result in far less cooling than Robock’s model in the equivalent situation.

Comment by max_daniel on Nuclear war is unlikely to cause human extinction · 2020-11-07T16:15:26.412Z · EA · GW

Two different analyses are required to calculate the chances of human extinction from nuclear winter. The first is the analysis of the climate change that could result from a nuclear war, and the second is the adaptive capacity of human groups to these climate changes. I have not seen an in depth analysis of the former, but I believe such an assessment would be worthwhile.  

Do you mean "I have not seen an in depth analysis of the latter"? I.e. humans' adaptive capacity?

Comment by max_daniel on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-02T19:13:10.457Z · EA · GW

My guess is that Buck means something like: "spend my time to identify and execute 'longtermist' interventions, i.e. ones explicitly designed to be best from the perspective of improving the long-term future - rather than spending the time to figure out whether donating to AMF is net good or net bad".

Comment by max_daniel on The end of the Bronze Age as an example of a sudden collapse of civilization · 2020-10-29T13:24:52.259Z · EA · GW

Thanks! Interesting to hear what kind of evidence we have that points toward droughts and volcanic eruptions.

Note that overall I'm very uncertain how much to discount the Hekla eruption as a key cause based on the uncertain dating. This is literally just based on one sentence in a Wikipedia article, and I didn't consult any of the references. It certainly seems conceivable to me that we could have sufficiently many and strong other sources of evidence that point to a volcanic eruption that we overall should have very high credence that the eruption of Hekla or another volcano was a crucial cause.

Comment by max_daniel on The end of the Bronze Age as an example of a sudden collapse of civilization · 2020-10-28T17:38:12.837Z · EA · GW

The Late Bronze Age collapse is an interesting case I'd love to see more work on. Thanks a lot for posting this.

I once spent 1h looking into this as part of a literature review training exercise. Like you, I got the impression that there likely was a complex set of interacting causes rather than a single one. I also got the sense, perhaps even more so than you, that the scope, coherence, dating, and causes are somewhat controversial and uncertain. In particular, I got the impression that it's not clear whether the eruption of the Hekla volcano played a causal role since some (but not all) papers estimate it occurred after the collapse.

I'll paste my notes below, but obviously take them with a huge grain of salt given that I spent only 1h looking into this and had no prior familiarity with the topic.

Late Bronze Age collapse, also known as 3.2 ka event

Wikipedia:

  • Eastern Mediterranean
  • Quick: 50 years, 1200-1150 BCE
  • Causes: “Several factors probably played a part, including climatic changes (such as those caused by volcanic eruptions), invasions by groups such as the Sea Peoples, the effects of the spread of iron-based metallurgy, developments in military weapons and tactics, and a variety of failures of political, social and economic systems.”

In a recent paper, Knapp & Manning (2016) conclude the collapse had several causes and more research is needed to fully understand them:

“There is no final solution: the human-induced Late Bronze Age ‘collapse’ presents multiple material, social, and cultural realities that demand continuing, and collaborative, archaeological, historical, and scientific attention and interpretation.”

“Among them all, we should not expect to find any agreed-upon, overarching explanation that could account for all the changes within and beyond the eastern Mediterranean, some of which occurred at different times over nearly a century and a half, from the mid to late 13th throughout the 12th centuries B.C.E. The ambiguity of all the relevant but highly complex evidence—material, textual, climatic, chronological—and the very different contexts and environments in which events and human actions occurred, make it difficult to sort out what was cause and what was result. Furthermore, we must expect a complicated and multifaceted rather than simple explanatory framework. Even if, for example, the evidence shows that there is (in part) a relevant significant climate trigger, it remains the case that the immediate causes of the destructions are primarily human, and so a range of linking processes must be articulated to form any satisfactory account.”

While I’ll mostly focus on causes, note that also the scope of the collapse and associated societal transformation is at least somewhat controversial. E.g. Small:

“Current opinions on the upheaval in Late Bronze Age Greece state that the change from the Late Bronze Age to the Geometric period 300 years later involved a transformation from a society based upon complex chiefdoms or early states to one based upon less complex forms of social and political structure, often akin to bigman societies. I will argue that such a transformation was improbable and that archaeologists have misinterpreted the accurate nature of this change because their current models of Late Bronze Age culture have missed its real internal structure. Although Greece did witness a population decline and a shift at this time, as well as a loss of some palatial centers, the underlying structure of power lay in small-scale lineages and continued to remain there for at least 400 years.”

By contrast, Dickinson:

“In the first flush of the enthusiasm aroused by the decipherment of the Linear B script as Greek, Wace, wishing to see continuity of development from Mycenaean Greeks to Classical Greeks, attempted to minimize the cultural changes involved in the transition from the period of the Mycenaean palaces to later times (1956, xxxiii-xxxiv). However, it has become abundantly clear from detailed analysis of the Linear B material and the steadily accumulating archaeological evidence that this view cannot be accepted in the form in which he proposed it. There was certainly continuity in many features of material culture, as in the Greek language itself, but the Aegean world of the period following the Collapse was very different from that of the period when Mycenaean civilization was at its height, here termed the Third Palace Period. Further, the differences represent not simply a change but also a significant deterioration in material culture, which was the prelude to the even more limited culture of the early stages of the Iron Age.” (emphases mine)

Causes that have been discussed in the literature

  • Environmental
    • [This PNAS paper argues against climate-based causes, but at first glance seems to be about a slightly later collapse in Northwest Europe.]
    • Hekla volcano eruption, maybe also other volcano eruptions
      • But dating controversial: “dates for the Hekla 3 eruption range from 1021 BCE (±130)[36] to 1135 BCE (±130)[37] and 929 BCE (±34).[38][39]” (Wikipedia)
      • Buckland et al. (1997) appear to argue against volcano hypotheses, except for a few specific cities
    • Drought
      • Knapp, Bernard; Manning, Sturt W. (2016). "Crisis in Context: The End of the Late Bronze Age in the Eastern Mediterranean". American Journal of Archaeology. 120: 99
      • Kaniewski et al. (2015) – review of drought-based theories
      • Matthews (2015)
      • Langguth et al. (2014)
      • Weiss, Harvey (June 1982). "The decline of Late Bronze Age civilization as a possible response to climatic change". Climatic Change. 4 (2): 173–198
      • Middleton, Guy D. (September 2012). "Nothing Lasts Forever: Environmental Discourses on the Collapse of Past Societies". Journal of Archaeological Research. 20 (3): 257–307
    • Earthquakes
    • Epidemics (mentioned by Knapp & Manning)
  • Outside invasion
    • By unidentified ‘Sea Peoples’
    • For Greece: by ‘Dorians’
    • By broader ‘great migrations’ of peoples from Northern and Central Europe into the East Mediterranean
    • Dickinson against invasion theories: “General loss of faith in ‘invasion theories’ as explanations of cultural change, doubts about the value of the Greek legends as sources for Bronze Age history, and closer dating of the sequence of archaeological phases have undermined the credibility of this reconstruction, and other explanations for the collapse have been proposed.”
  • Technology (as an explanation for why the chariot-based armies of the Late Bronze Age civilizations became uncompetitive)
    • Ironworking
      • Palmer, Leonard R. (1962). Mycenaeans and Minoans: Aegean Prehistory in the Light of the Linear B Tablets. New York: Alfred A. Knopf
    • Changes in warfare: large infantry armies with new (bronze) weapons
      • Drews, R. (1993). The End of the Bronze Age: Changes in Warfare and the Catastrophe ca. 1200 B.C. Princeton: Princeton University Press
  • Internal problems
    • “political struggles within the dominant polities” (mentioned by Knapp & Manning)
    • “inequalities between centers and peripheries” (mentioned by Knapp & Manning)
  • Synthesis: general systems collapse à la Tainter

Types of evidence

  • “material, textual, climatic, chronological” (Knapp & Manning 2016)
    • Textual evidence very scarce (Robbins)
    • Archeological evidence inconclusive, can be interpreted in different ways (Robbins)

Comment by max_daniel on When you shouldn't use EA jargon and how to avoid it · 2020-10-27T18:32:49.326Z · EA · GW

As one data point, I had to google what Stranger in a Strange Land refers to, and don't know what connotations the comment above yours [1] refers to. I always assumed 'grok' was just a generic synonym for '(deeply) understand', and didn't even particularly associate it with the EA community. (Maybe it's relevant here that I'm not a native speaker.)

[1] Replacing the jargon term 'grandparent' ;)