Posts

Existential Risk Conference 7-8 Oct 2021: videos and timestamps 2021-10-11T02:31:25.577Z
Is SARS-CoV-2 a modern Greek Tragedy? 2021-05-10T04:25:37.019Z
Are Humans 'Human Compatible'? 2019-12-06T05:49:12.311Z

Comments

Comment by Matt Boyd on Carl Shulman on the common-sense case for existential risk work and its practical implications · 2021-10-16T08:59:27.609Z · EA · GW

I really liked this episode, because of Carl's no-nonsense, moderate approach. Though I must say I'm a bit surprised that some in the EA community seem to see the 'commonsense argument' as some kind of revelation. See for example the 80,000 Hours email newsletter that comes via Benjamin Todd ("Why reducing existential risk should be a top priority, even if you don’t attach any value to future generations", 16 Oct, 2021). I think this argument is just obvious, and is easily demonstrated through relatively simple life-year or QALY calculations. I said as much in my 2018 paper on New Zealand and Existential Risks (see p.63 here). I thought I was pretty late to the party at that point, and Carl was probably years down the track.

However, if this argument is not widely understood (and that's a big 'if', because I think it really should be easy for anyone to deduce), then I wonder why? Maybe it's because the origins of the EA focus on x-risk hark back to papers like the 'Astronomical Waste' argument, which basically take longtermism as the starting point and then argue for the importance of existential risk reduction. Whereas if you take government cost-effectiveness analysis (CEA) as the starting point, especially in healthcare where cost-per-QALY is the currency, then existential risk just looks like a limiting case of these CEAs, and its priority simply emerges from the calculation (even when considering only THE PRESENT generation).
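To illustrate the kind of back-of-the-envelope calculation I mean, here is a minimal sketch that considers only people alive today; every number in it (population, average remaining QALYs, the risk reduction purchased, the programme cost) is a purely illustrative assumption, not a figure from my paper or from Carl's episode.

```python
# Present-generation-only cost-effectiveness of an x-risk reduction programme.
# All inputs are illustrative assumptions, chosen only to show the shape of the argument.

population = 8e9              # people alive today
avg_qalys_remaining = 40      # assumed average quality-adjusted life years left per person
risk_reduction = 0.001        # assumed absolute reduction in extinction probability bought
programme_cost = 1e9          # assumed total programme cost in USD

# Expected QALYs saved for the present generation alone.
expected_qalys_saved = population * avg_qalys_remaining * risk_reduction
cost_per_qaly = programme_cost / expected_qalys_saved

print(f"Expected QALYs saved: {expected_qalys_saved:.3g}")   # ~3.2e+08
print(f"Cost per QALY: ${cost_per_qaly:.2f}")                # ~$3 per QALY
```

Under these assumptions the intervention costs a few dollars per QALY, orders of magnitude below the thresholds health agencies routinely fund against, which is all the 'limiting case' point requires; no appeal to future generations is needed.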

The real question then becomes: WHY don't government risk assessments and CEAs plug in the probabilities and impacts for x-risk? Two likely explanations are unfamiliarity (ie a knowledge gap) and intractability (ie a lack of policy response options), yet substantial progress has now been made on both.

All this matters because, in the eyes of government policymakers and, more importantly, Ministers with the power to make decisions about resource allocation, longtermism (especially in its strong form) is seen as somewhat esoteric and disconnected from day-to-day business. Yet it seems the objectives of strong longtermism (if indeed it stands up to empirical challenges; how the Fermi paradox is resolved, for example, will have implications for the strength of strong longtermism) can be met through simple, ordinary CEA arguments, or at least such arguments can be used for leverage. To actually achieve the goals of longtermism, it seems like MUCH more work needs to happen in translational research that communicates academic x-risk work in policymakers' language for instrumental ends, not necessarily in strictly 'correct' ways.

Comment by Matt Boyd on Major UN report discusses existential risk and future generations (summary) · 2021-10-07T22:30:16.923Z · EA · GW

I am also surprised that there are few comments here. Given the long and detailed technical quibbles appended to many of the more esoteric EA posts, it surprises me that where there is an opportunity to shape tangible influence at a global scale there is silence. I feel that there are often gaps in the EA community precisely where research and insight would connect with policy and governance.

Sean is right, there has been accumulating interest in this space. Our paper on the UN and existential risks in 'Risk Analysis' (2020) was awarded 'best paper' by that journal, and I suspect that these kinds of sentiments, from the editors and many others in the risk community, have finally weighed on the UN sufficiently, marshalled by the Secretary-General's generally sympathetic disposition.

The UN calls for futures and foresight capabilities across countries, and there is much scope to press policymakers in every nation to act and establish such institutions. We have a forthcoming paper (November) in the New Zealand journal 'Policy Quarterly' that calls for a Parliamentary Commissioner for Extreme Risks, supported by a well-resourced office and working in conjunction with a Select Committee. The Commissioner could offer support to CEOs of public sector organisations as they complete the newly legislated 'long-term insights briefings' that are to be tabled in Parliament from 2022.

I advocate for more work of this kind, but projects that 'merely' translate technical philosophical and ethical academic products into policy advocacy pieces don't seem to attract funding. Yet they may have the greatest impact. It matters little whether a paper is cited 100 times; it matters very much whether the Minister with decision-making power is swayed by a well-argued summary of the literature.

Comment by Matt Boyd on A Sequence Against Strong Longtermism · 2021-07-23T09:19:50.577Z · EA · GW

Thanks for collating all of this in one place. I should have read the later posts before I replied to the first one. Thank you too for your bold challenge. I feel like Kant waking from his 'dogmatic slumber'. A few thoughts:

  1. Humanity is an 'interactive kind' (to use Hacking's term). Thinking about humanity can change humanity, and the human future.
  2. Therefore, Ord's 'Long Reflection' could lead to there being no future humans at all (if that were the conclusion the Long Reflection reached).
  3. This simple example shows that we cannot quantify over future humans, quadrillions or otherwise, or make long term assumptions about their value. 
  4. You're right about trends, and in this context the outcomes are tied up with 'human kinds', as humans can respond to predictions and thereby invalidate them. This makes me think of Godfrey-Smith's observation that natural selection has no inertia: change the selective environment and the observable 'trend' towards some adaptation vanishes.
  5. Cluelessness seems to be some version of the Socratic Paradox (I know only that I know nothing).
  6. RCTs don't just falsify hypotheses; they also provide evidence for causal inference (in spite of hypotheses!).

Comment by Matt Boyd on A case against strong longtermism · 2021-07-23T08:20:42.184Z · EA · GW

Hi Vaden, 

I'm a bit late to the party here, I know, but I really enjoyed this post and thought I'd add my two cents' worth. Although I have a long-term perspective on risk and mitigation, and longtermist sympathies, I don't consider myself a strong longtermist. That said, I wouldn't like to see anyone (eg from policy circles) walk away from this debate with the view that it is not worth investing resources in existential risk mitigation. I'm not saying that's necessarily what comes through, but I think there is important middle ground (and this middle ground may actually, instrumentally, lead to the outcomes that strong longtermists favour, without the need to accept the strong longtermist position).

I think it is just obvious that we should care about the welfare of people here and now. However, the worst thing that can happen to people existing now is for all of them to be killed. So it seems clear that funnelling some resources into x-risk mitigation, here and now, is important. And the primary focus should always be those x-risks that are most threatening in the near term (the target risks will no doubt change with time, eg I would say biotechnology in the next 5-10 years, then perhaps climate or nuclear, then AI, followed by rarer natural risks and emerging technological risks, all the while building cross-cutting defences such as institutions and resilience). As you note, every generation becomes the present generation and every x-risk will have its time. We can't ignore future x-risks, for this very reason. Each future risk 'era' will become present, and we had better be ready. So resources should be invested in future x-risks, or at least in understanding their timing.

The issue I have with strong longtermism lies in the utility calculations. The Greaves/MacAskill paper presents a table of future human lives based on the carrying capacity of the Earth, the solar system, etc. However, even today we do not advocate some imperative that humans must reproduce right up to the carrying capacity of the Earth; in fact many of us think this would be wrong for a number of reasons. To factor 'quadrillions', or any definite number at all, into the calculations is to miss the point that we (the moral agents) get to determine (morally speaking) the right number of future people, and we might not know how many this is yet. Uncertainty about moral progress means that we cannot know what the morally correct number is, because theory and argument may evolve over time (and yes, it's probably obvious, but I don't accept that non-actual and never-actual people can be harmed, and I don't accept that non-existence is a harm).

However, there seems to be value in SOME humans persisting in order that these projects might be continued and hopefully resolved. Therefore, I don't think we should be putting speculative utilities into our 'in expectation' calculations. There are arguments for preventing x-risk that are independent of strong longtermism, and the emotional response strong longtermism generates from many people, potentially including averse policymakers, makes it a risky strategy to push. Even if EA is to be motivated by strong longtermism, it may be useful to advocate an 'instrumental' theory of value in order to achieve the strong longtermist agenda. There is a possibility that some of EA's views can themselves be an information hazard. Being right is not always being effective, and therefore not always altruistic.
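To make the 'speculative utilities' worry concrete, here is a minimal sketch of how an 'in expectation' calculation is dominated by whatever definite number of future lives one chooses to assume; the probability shift and the future-population figures are purely illustrative assumptions, not numbers taken from the Greaves/MacAskill paper.

```python
# How the assumed number of future lives drives an 'in expectation' calculation.
# All numbers are illustrative assumptions.

delta_p = 1e-9          # assumed reduction in extinction probability from some intervention
present_lives = 8e9     # people alive today, for comparison

print(f"Present generation only: {delta_p * present_lives:.0e} expected lives saved")

# Candidate assumptions about how many future lives are at stake.
for n_future in (1e10, 1e15, 1e24):
    print(f"Assuming {n_future:.0e} future lives: {delta_p * n_future:.0e} expected lives saved")

# The conclusion swings across fourteen orders of magnitude purely with the
# assumed headcount, which is exactly the quantity argued above to be undetermined.
```

Nothing in the arithmetic tells us which assumed headcount to believe, which is why I'd rather leave such speculative figures out of the expectation altogether.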

Comment by Matt Boyd on Is SARS-CoV-2 a modern Greek Tragedy? · 2021-05-10T23:30:29.432Z · EA · GW

Thanks for this response. I guess the motivation for writing this yesterday was a comment from a member of NZ's public sector, who said basically 'the Atomic Scientists article falls afoul of the principle of parsimony'. So I wanted to give the other side, ie that there actually are some reasons to favour a lab leak over the parsimonious natural explanation. I completely take your point about balance, but the piece is intended as part of a dialogue rather than a comprehensive analysis; that could have been clearer. Cheers.

Comment by Matt Boyd on Is SARS-CoV-2 a modern Greek Tragedy? · 2021-05-10T21:28:38.809Z · EA · GW

Thanks for these. Super interesting credences here, ranging from 19% (that health organisations will conclude lab origin) to 83% (that gain-of-function research was in fact contributory). I guess the strikingly wide range suggests genuine uncertainty. I'll watch this space with interest.

Comment by Matt Boyd on Are Humans 'Human Compatible'? · 2019-12-06T20:20:43.292Z · EA · GW

Great additional detail, thanks!

Comment by Matt Boyd on Eight high-level uncertainties about global catastrophic and existential risk · 2019-12-05T08:52:00.146Z · EA · GW

Another one to consider, assuming you see it at the same level of analysis as the 8 above, is the spatial trajectory through which the catastrophe unfolds. E.g. a pandemic will spread from an origin (or origins) and is, I'm guessing, statistically likely to impact certain well-connected regions of the world first. Or a lethal command to a robot army will radiate outward from the facility where the army is stored. Or nuclear winter will impact certain regions sooner than others. Or ecological collapse due to an unstoppable biological novelty will devour certain kinds of environment more quickly (the same possibly for grey goo), etc. There may be systematic regularities in which spaces on Earth are affected and when. These are currently completely unknown. But knowledge of these patterns could help target certain kinds of resilience and mitigation measures to where they are likely to have time to succeed before themselves being impacted.