Posts

Islands, nuclear winter, and trade disruption as a human existential risk factor 2022-08-07T02:18:26.454Z
Be a Stoic and build better democracies: an Aussie-as take on x-risks (review essay) 2021-11-21T04:30:50.746Z
[Creative Writing Contest] The Sequence Matters 2021-10-25T05:16:10.074Z
Existential Risk Conference 7-8 Oct 2021: videos and timestamps 2021-10-11T02:31:25.577Z
Is SARS-CoV-2 a modern Greek Tragedy? 2021-05-10T04:25:37.019Z
Are Humans 'Human Compatible'? 2019-12-06T05:49:12.311Z

Comments

Comment by Matt Boyd on [deleted post] 2022-08-23T01:50:34.310Z

Hi Ross, here's the paper I mentioned in my comment above (this pre-print uses some data from Xia et al 2022 in its preprint form; their paper has just been published in Nature Food with some slightly updated numbers, so we'll update our own once the peer review comes back, but the conclusions won't change): https://www.researchsquare.com/article/rs-1927222/v1

We're now starting a 'NZ Catastrophe Resilience Project' to more fully work up the skeleton details listed in Supplementary Table S1 of our paper, engaging with the public sector, industry, academia, etc. Australia could do exactly the same.

Note that in the Xia paper, NZ's food availability is vastly underestimated due to quirks of the UNFAO dataset. For an estimate of NZ's export calories see our paper here: https://www.medrxiv.org/content/10.1101/2022.05.13.22275065v1 

And we've posted here on the Forum about all this here: https://forum.effectivealtruism.org/posts/7arEfmLBX2donjJyn/islands-nuclear-winter-and-trade-disruption-as-a-human 

Comment by Matt Boyd on Prioritizing x-risks may require caring about future people · 2022-08-17T23:45:57.679Z · EA · GW

I generally think that all these kinds of cost-effectiveness analyses around x-risk are wildly speculative and sensitive to small changes in assumptions. There is literally no evidence that the $250b would change bio-x-risk by 1% rather than, say, 0.1%, 10%, or even 50%, depending on how it was targeted and what developments it led to. On the other hand, if you do successfully reduce the x-risk by, say, 1%, then you most likely also reduce the risk and consequences of all kinds of other non-existential bio-risks, again depending on the actual investment/discoveries/developments, so the benefit in all the 'ordinary' cases must be factored in too. I think the most compelling argument for investing in x-risk prevention without consideration of future generations is simply to calculate the deaths in expectation (eg using Ord's probabilities, if you are comfortable with them) and to rank risks accordingly. It turns out that at 10% this century, AI risks 8 million lives per annum in expectation (obviously less than that early in the century, perhaps more late in the century), and bio-risk is 2.7 million lives per annum in expectation (ie 8 billion x 0.0333 x 0.01). This can be compared to ALL natural disasters, which Our World in Data reports kill ~60,000 people per annum. So there is an argument that we should focus on x-risk to at least some degree purely on expected consequences. I think it's basically impossible to get robust cost-effectiveness estimates for this kind of work, and most of the estimates I've seen appear implausibly cost-effective. Things never go as well as you thought they would in risk mitigation activities.
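Making the arithmetic explicit (a back-of-the-envelope sketch, assuming Ord's century-level probabilities are spread evenly over 100 years and a constant population of 8 billion):

```latex
% Expected annual deaths = population x P(catastrophe this century) / 100
E[\text{deaths/yr}]_{\mathrm{AI}}  = 8\times10^{9} \times \frac{0.10}{100} = 8\times10^{6}
E[\text{deaths/yr}]_{\mathrm{bio}} = 8\times10^{9} \times \frac{1/30}{100} \approx 2.7\times10^{6}
```

Both figures dwarf the ~60,000 annual deaths from all natural disasters combined.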

Comment by Matt Boyd on Islands, nuclear winter, and trade disruption as a human existential risk factor · 2022-08-07T21:30:28.354Z · EA · GW

Hi Christian, thanks for your thoughts. You're right to note that islands like Iceland, Indonesia, NZ, etc are also where there's a lot of volcanic activity. Mike Cassidy and Lara Mani briefly summarize potential ash damage in their post on supervolcanoes here (see the table on effects). Basically there could be severe impacts on agriculture and infrastructure. I think the main lesson is that at least two prepared islands, in different hemispheres, would be good. That first line of redundancy is probably the most important (also in case one is a target in nuclear war; eg NZ is probably susceptible to an EMP directed at Australia).

Comment by Matt Boyd on Let's stop saying 'funding overhang' · 2022-07-15T22:22:27.628Z · EA · GW

That's true in theory. But in practice there is only a (small) finite number of items on the list (those that have been formally investigated with a cost-effectiveness analysis). So once those are all funded, it would make sense to fund more cost-effectiveness analyses to grow the table. We don't know how 'worthwhile' it is to fund most things, so they are not on the table.

Comment by Matt Boyd on Let's stop saying 'funding overhang' · 2022-07-14T22:13:09.805Z · EA · GW

Yes, absolutely, and in almost all cases in health the list of desirable things extends past the funding bar. The 'league table' of interventions is longer than the fraction of them that can be funded. So in health there is basically never an overhang. The same will be true for EA/GCR/x-risk projects, so I agree there is likely no 'overhang' there either. But it may be that not all the potentially worthwhile projects are yet listed on the 'league table' (whether explicitly or implicitly).

Comment by Matt Boyd on Let's stop saying 'funding overhang' · 2022-07-13T23:02:12.281Z · EA · GW

Commonly in health economics and prioritisation (eg New Zealand's Pharmaceutical Management Agency) you calculate the cost-effectiveness (eg cost per QALY) of each candidate medication, and then rank the desired medications from most to least cost-effective. You then take the budget and distribute the funds from the top until they run out. This is where you draw the line (the bar). Nothing below gets funded unless more budget is allocated. If there are items below the bar worth doing, there is a funding constraint; if everything has been funded and there are leftover funds, there is a funding overhang. So whether there is a shortfall, the right amount, or an overhang depends on how long the list of cost-effective desirable projects is, and that depends on people thinking up projects and adding them to the list. An 'overhang' probably stimulates more creativity and thought on potential projects.
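As a minimal sketch of that procedure (all names and numbers below are made up for illustration):

```python
# 'League table' prioritisation: rank interventions by cost per QALY,
# then fund down the list until the budget runs out; the 'bar' falls
# where the money stops. All figures are hypothetical.

interventions = [
    # (name, total cost in $, QALYs gained)
    ("Intervention A", 2_000_000, 400),   # $5,000 per QALY
    ("Intervention B", 5_000_000, 500),   # $10,000 per QALY
    ("Intervention C", 1_000_000, 50),    # $20,000 per QALY
    ("Intervention D", 3_000_000, 600),   # $5,000 per QALY
]

budget = 7_000_000

# Most to least cost-effective (lowest cost per QALY first).
ranked = sorted(interventions, key=lambda iv: iv[1] / iv[2])

funded, remaining = [], budget
for name, cost, qalys in ranked:
    if cost > remaining:
        break          # the bar: nothing below this point gets funded
    funded.append(name)
    remaining -= cost

print(funded, remaining)
# Worthwhile items left below the bar -> funding constraint.
# Whole list funded with money left   -> funding 'overhang'.
```

Note the strict break at the bar: Intervention C would fit within the leftover budget, but in a pure league-table allocation nothing below the bar is funded until more budget is allocated.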

Comment by Matt Boyd on My Most Likely Reason to Die Young is AI X-Risk · 2022-07-05T10:07:28.190Z · EA · GW

Yes, that's true for an individual. Sorry, I meant more that the 'today' infographic would be for a person born in, say, 2002, and the 2050 one for someone born in, eg, 2030. Some confusion arose because I was replying about a 'medical infographic for x-risks' generally rather than about your specific point on personal risk.

Comment by Matt Boyd on My Most Likely Reason to Die Young is AI X-Risk · 2022-07-05T01:57:14.506Z · EA · GW

Book review EA Forum post here 

Comment by Matt Boyd on My Most Likely Reason to Die Young is AI X-Risk · 2022-07-05T00:19:08.339Z · EA · GW

The infographic could perhaps have a 'today' and an 'in 2050' version, with the bubbles representing the risks being very small for AI 'today' compared to, eg, suicide, cancer, or heart disease, but becoming much bigger in the 2050 version, illustrating the trajectory. Perhaps the standard medical cause-of-death bubbles shrink by 2050, illustrating medical progress.

Comment by Matt Boyd on My Most Likely Reason to Die Young is AI X-Risk · 2022-07-04T23:12:54.660Z · EA · GW

We can quibble over the numbers, but I think the point here is basically right, and if not right for AI then probably right for biorisk or some other risks. That point being: even if you only look at probabilities in the next few years and only care about people alive today, these issues appear to be the most salient policy areas. I've noted in a recent draft that the velocity of the increase in risk (eg from some 0.0001% risk this year to, eg, 10% per year in 50 years) means such probability trajectories are invisible to the 2-year national risk assessments used at present, even though the area under the curve is greater in aggregate than for every other risk, and in a sense the risk is potentially 'inevitable' (for the demonstration risk profiles I dreamed up) over a human lifetime. This raises the question of how to monitor the trajectory (surely this is one role of national risk assessment, to invest in 'fire alarms', but that requires these risks to be included in the assessment so the monitoring can be prioritized). Persuading policymakers is definitely going to be easier by leveraging decade-long actuarial tables than by having esoteric discussions about total utilitarianism.
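To illustrate with a toy version of the kind of demonstration risk profile I mean (the numbers are invented; the shape is the point):

```python
# Illustrative sketch only: a hypothetical annual risk that grows
# geometrically from 0.0001% this year to 10%/yr at year 50 (and stays
# there). A 2-year assessment window barely registers it, while the
# cumulative lifetime risk is enormous.

p0, p_max, t_max = 1e-6, 0.10, 50
growth = (p_max / p0) ** (1 / t_max)   # ~1.26x per year

def annual_p(t):
    return min(p0 * growth ** t, p_max)

def cumulative(horizon_years):
    survival = 1.0
    for t in range(horizon_years):
        survival *= 1 - annual_p(t)
    return 1 - survival

print(f"Risk visible to a 2-year assessment: {cumulative(2):.6%}")   # ~0.0002%
print(f"Risk over an 80-year lifetime:       {cumulative(80):.0%}")  # ~97%
```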

Additionally, in the recent FLI 'World Building Contest', the winning entry from Mako Yass made quite a point of the fact that, in the world he built, the impetus for AI safety and global cooperation on the issue came from very clear and very specific scenario development of how exactly AI could come to kill everyone. This is analogous to Carl Sagan and Richard Turco's work on nuclear winter in the early 1980s: a specific picture changed minds. We need this for AI.

Comment by Matt Boyd on Some clarifications on the Future Fund's approach to grantmaking · 2022-05-10T06:50:25.900Z · EA · GW

Thanks Nick, interesting thoughts, great to see this discussion, and appreciated. Is there a timeline for when the initial (21 March deadline) applications will all be decided? As you say, it takes as long as it takes, but it has implications for prioritising tasks (eg deciding whether to commit to less impactful, less scalable work being offered, and the opportunity costs of this). Is there a list of successful applications?

Comment by Matt Boyd on [deleted post] 2022-04-29T01:21:04.037Z

Rumtin, I think Jack is absolutely right, and our research, currently being written up, will argue that Australia is the most likely successful persisting hub of complexity in a range of nuclear war scenarios. We include a detailed case study of New Zealand (because of familiarity with the issues), but a detailed case study of Australia is begging to be done. There are key issues that could be improved ahead of time, with co-benefits for climate impact, health, and resilience to other catastrophes. These mostly centre on trade, energy forms, societal cohesion, infectious disease resilience, and awareness of the main risks (not 'radiation', as much of the public thinks, and for Australia not climate or food impacts either, which is where most nuclear impact research has focused). Australia is indeed uniquely positioned here (for a number of reasons that go beyond 'survival' and into 'resilience' and 'reboot' capacity, etc), and policy should include interconnections with NZ policy (sustaining regional trade, the security alliance, etc; we've identified other potentially surviving/thriving regional partners too). Happy to collaborate on this. I can send you a draft of our paper in maybe 2 weeks.

Comment by Matt Boyd on [deleted post] 2022-04-21T11:08:44.030Z

Updates would be fantastic. 

Comment by Matt Boyd on [deleted post] 2022-04-20T23:54:19.455Z

Thanks Rumtin for this, it's a fantastic resource. One thing I note, though, is that some of the author listings are out of order (this is actually a problem in Terra's CSVs too, which I think may be where some of the content in your database is imported from). For example, item 70 by 'Tang' (who is indeed an author) is actually first-authored by 'Wagman', as per the link. I had this problem using Terra, where I kept thinking I was finding papers I'd previously missed, only to discover they were the same paper with the authors in a different order. Maybe at some point a verification/QC process could be implemented (in both these databases, Terra too) to clean them up a little. Great work!

Comment by Matt Boyd on Help us make civilizational refuges happen · 2022-04-14T12:19:22.450Z · EA · GW

A bunker on an island is probably a robust set-up; at least two would be needed, given the volcanic nature of, eg, Iceland and New Zealand: https://adaptresearchwriting.com/island-refuges/ Synergies/complementarities between island and bunker work should be explored. We're currently exploring the islands/nuclear winter strand (EA LTFF), and have put in for FTX too.

Comment by Matt Boyd on Best Countries during Nuclear War · 2022-03-31T22:22:31.140Z · EA · GW

In a previous project we used the UN FAO Food Pocketbook, although I think the way they compile the data changed after 2012. We used the 'kcal production per capita' metric, from here: https://www.fao.org/publications/card/en/c/a9f447e8-6798-5e82-82b0-a78724bfff03/

You can see what we did in the following two papers:

https://pubmed.ncbi.nlm.nih.gov/33886124/

https://onlinelibrary.wiley.com/doi/abs/10.1111/risa.13398

There are FAO CSVs for more recent years available to download here: https://www.fao.org/faostat/en/#data/FBS 

That's one suggestion. 
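If it's useful, here's a rough pandas sketch of pulling a per-capita calorie metric out of one of those FAOSTAT CSVs. The file name, element label, and column names are assumptions that would need checking against the actual download (and note FBS elements report food supply rather than production per se):

```python
import pandas as pd

# Hypothetical sketch: extract a per-capita calorie metric from a FAOSTAT
# food balance sheet export. File name, element label, and column names
# are assumptions; check them against the actual CSV.
df = pd.read_csv("FoodBalanceSheets_E_All_Data.csv")

kcal = df[(df["Element"] == "Food supply (kcal/capita/day)")
          & (df["Item"] == "Grand Total")]

# Average across whatever years are present, then rank countries.
by_country = kcal.groupby("Area")["Value"].mean().sort_values(ascending=False)
print(by_country.head(20))
```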

Comment by Matt Boyd on Modelling the odds of recovery from civilizational collapse · 2022-03-31T03:05:38.732Z · EA · GW

Did you ever start/do this project, as per your linked G-doc?

Comment by Matt Boyd on Best Countries during Nuclear War · 2022-03-31T01:42:30.055Z · EA · GW

Hi, I have quite a lot to say about this, but I'm currently writing a research paper on exactly this issue, and will write a full forum post/link-post once it's completed (ETA June-ish). However, a few key observations:

  1. Cost of living is likely to be irrelevant in a nuclear aftermath, as global finance and economics will be in tatters (the value of assets will jump around unpredictably; eg mansions become less important than electric vehicles if the global oil trade ceases) and prices will change dramatically according to scarcity, eg food prices.
  2. Energy independence and food security are probably the most important (>50% combined index value) because without energy food production is slashed to pre-industrial yields, and without food security the risk of unrest is very high. 
  3. Latitude and mean temperature matter less than the impact on specific countries; it is the temperature change that is important, not the mean temperature, and tropical crops like rice will die in a single frost. Europe could suffer a -20°C or -30°C temperature change according to climate models, which would make agriculture impossible. Yet Iceland, with its vast fish resources, could potentially increase food production.
  4. Rainfall could have a massive impact. The tropical monsoons could be very disrupted and are essential for agriculture in many areas. 
  5. There could very well be almost no trade taking place in a severe nuclear aftermath, as nations struggle internally or face fuel shortages (many countries depend on oil for agriculture at scale). Without trade, many countries are fragile in the areas of energy and manufacturing. Many component parts of power generation facilities, electricity and food distribution, and communications infrastructure are manufactured in only a few places, and within a few months without imports/exports such infrastructure may fail (eg lubricants, spark plugs, transformers, fibre optics, etc). Expect most things to grind to a halt without trade.

There is a lot more that could be said, but you're right that the large South American food producers (Argentina etc) look relatively more promising, as well as the usual suspects NZ and Australia. Though each will have severe problems in an actual nuclear winter, and organisational tasks such as food/fuel rationing and distribution from rural to urban areas will be immensely problematic. Not to mention the need for public communication processes to ensure people know there is a plan and survival is possible, again to avoid societal mayhem. Social cohesion and stability indicators are probably very important.

One problem with composite indices is that very low scores on one dimension can be masked by reasonable scores on others. Countries should be ruled out if they fail on a critical dimension. 
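A toy illustration of the masking problem (the scores and weights are invented):

```python
# A weighted composite can rank a country respectably even though it
# fails outright on a critical dimension like food security; a rule-out
# check on critical dimensions catches this. Scores are hypothetical,
# on a 0-1 scale.

scores  = {"food": 0.05, "energy": 0.90, "stability": 0.90, "trade": 0.90}
weights = {"food": 0.25, "energy": 0.25, "stability": 0.25, "trade": 0.25}

composite = sum(scores[k] * weights[k] for k in scores)          # 0.69
passes_critical = all(scores[k] >= 0.20 for k in ("food", "energy"))

print(f"Composite score: {composite:.2f}")                # looks acceptable
print(f"Passes critical dimensions: {passes_critical}")   # False: rule it out
```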

Finally, the act of 'escaping to' the 'most promising' location is not generalisable, so the ethics of it are questionable. As Kant notes, the test is 'what if everyone did the same as me, would that undermine the institution in question?', and in this case it seems the answer is yes: 8 billion people fleeing to Argentina would defeat the purpose. Acting ahead of war to maximise the chances of each particular country avoids this. Carrying capacity calculations are important here too. I haven't even considered HEMP (high-altitude electromagnetic pulse) yet, which could very much complicate matters.

The following case study is particularly illuminating of the problems even 'good' locations like NZ might suffer: https://www.jstor.org/stable/4313623?refreqid=excelsior%3A166e17f569637767a9caded49a1ced42  contact me if you want the full text. 

Comment by Matt Boyd on Mitigating x-risk through modularity · 2022-01-20T23:07:21.233Z · EA · GW

'Partitioning' is another concept that might be useful. 

Islands as refuges (basically the same idea as the city idea above): this paper specifically mentions pandemics as a threat and islands as a solution (ie a risk-first approach), and also considers nuclear (and other) winter scenarios (see the Supplementary material): https://pubmed.ncbi.nlm.nih.gov/33886124/

I note Alexey's comment here too, and broadly agree with his islands/refuge thinking.

The literature on group selection and species selection in biology might prove useful. You seem to be on to it tangentially with the butterfly example. 

Comment by Matt Boyd on The Unweaving of a Beautiful Thing · 2022-01-07T07:39:38.784Z · EA · GW

I enjoyed this. It would seem to work well as an argument for preventing existential risk from the point of view of Scheffler's 'human project', ie the continuation of transgenerational undertakings that we each contribute a tiny piece to, as opposed to the maximising-total-utility approach. Persistence of the whole seems to have emergent merit beyond the lives of the individuals.

On the other hand it also made me think of the line Chigurh says in 'No Country for Old Men': "If the rule that you followed brought you to this, of what use was the rule?" Rule = eg not eating meat, being compassionate, etc. [note, I believe there IS use in the rules, but the line still haunts me]

Comment by Matt Boyd on Democratising Risk - or how EA deals with critics · 2022-01-06T08:54:39.630Z · EA · GW

Thanks Carla and Luke for a great paper. This is exactly the sort of antagonism that those not so deeply immersed in the xrisk literature can benefit from, because it surveys so much and highlights the dangers of a single core framework. Alternatives to the often esoteric and quasi-religious far-future speculations that seem to drive a lot of xrisk work are not always obvious to decision-makers, and that gap means the field can be dismissed as 'far-fetched'. Democratisation is a critical component (along with apoliticisation).

I must say that it was a bit of a surprise to me that TUA is seen as the paradigm approach to ERS. I've worked in this space for about 5-6 years and never really felt that I was drawn to strong-longtermism or transhumanism, or technological progress. ERS seems like the limiting case of ordinary risk studies to me. I've worked in healthcare quality and safety (risk to one person at a time), public health (risk to members of populations) and extinction risk just seems like the important and interesting limit of this. I concur with the calls for grounding in the literature of risk analysis, democracy, and pluralism. In fact in peer reviewed work I've previously called for citizen juries and public deliberation and experimental philosophy in this space (here), and for apolitical, aggregative processes (here), as well as calling for better publicly facing national risk (and xrisk) communication and prioritisation tools (under review with Risk Analysis). 

Some key points I appreciated or reflected on in your paper were: 

  1. The fact that empirical and normative assumptions are often masked by tools and frameworks
  2. The distinction between extinction risk and existential risk. 
  3. The questioning of total utilitarianism (I often prefer a maximin approach, also with consideration of important [not necessarily maximising] value obtained from honouring treaties, equity, etc)
  4. I've never found that the 'astronomical waste' claims hold up particularly well under certain resolutions of Fermi's paradox (basically I doubt the moral and empirical claims of TUA and strong longtermism, and yet I am fully committed to ERS)
  5. The point about equivocating over near-term nuclear war and billion year stagnation
  6. Clarity around Ord's 1 in 6 (extinction/existential) - I'm guilty of conflating this
  7. I note that failing to mitigate 'mere' GCRs could also derail certain xrisk mitigation efforts. 

Again, great work. This is a useful and important broad survey/stimulus; not every paper needs to take a single point and dive to its bottom. Well done.

Comment by Matt Boyd on [Creative Writing Contest] The Sequence Matters · 2021-11-21T23:46:35.692Z · EA · GW

Thanks for these comments Noumero, much appreciated!

Comment by Matt Boyd on [Creative Writing Contest] The Sequence Matters · 2021-10-28T11:43:40.720Z · EA · GW

Many thanks Jackson :) 

Comment by Matt Boyd on Carl Shulman on the common-sense case for existential risk work and its practical implications · 2021-10-16T08:59:27.609Z · EA · GW

I really liked this episode, because of Carl's no-nonsense, moderate approach. Though I must say I'm a bit surprised that some in the EA community appear to see the 'commonsense argument' as some kind of revelation. See for example the 80,000 Hours email newsletter that comes via Benjamin Todd ("Why reducing existential risk should be a top priority, even if you don't attach any value to future generations", 16 Oct 2021). I think this argument is just obvious, and is easily demonstrated through relatively simple life-year or QALY calculations. I said as much in my 2018 paper on New Zealand and Existential Risks (see p.63 here). I thought I was pretty late to the party at that point, and Carl was probably years down the track.
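For example, a back-of-the-envelope version of that calculation, where the programme cost, the 1 percentage point risk reduction, and the ~40 remaining life-years per person are all made-up round numbers:

```latex
% Hypothetical: a $250b programme buying a 1 percentage point reduction
% in extinction risk, ~40 remaining life-years per person alive today.
\text{Life-years saved in expectation} = 8\times10^{9} \times 0.01 \times 40 = 3.2\times10^{9}
\text{Cost per life-year} = \frac{\$250\times10^{9}}{3.2\times10^{9}} \approx \$78
```

Even if the assumed risk reduction were 100x smaller, the implied cost per life-year (~$7,800) would still be competitive with routinely funded health interventions.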

However, if this argument is not widely understood (and that's a big 'if', because I think it really should be pretty easy for anyone to deduce), then I wonder why? Maybe it is because the origins of the EA focus on x-risk hark back to papers like the 'Astronomical Waste' argument, which basically take longtermism as the starting point and then argue for the importance of existential risk reduction. Whereas if you take government cost-effectiveness analysis (CEA) as the starting point, especially in the domain of healthcare where cost-per-QALY is the currency, then existential risk just looks like a limiting case of these CEAs, and the priority it holds simply emerges in the calculation (even when considering only THE PRESENT generation).

The real question then becomes: WHY don't government risk assessments and CEAs plug in the probabilities and impacts for x-risk? Two key explanations are unfamiliarity (ie a knowledge gap) and perceived intractability (ie a lack of policy response options), yet on both fronts there has been substantial progress.

The reason all this is important is because in the eyes of government policymakers and more importantly Ministers with power to make decisions about resource allocation, longtermism (especially in its strong form) is seen as somewhat esoteric and disconnected from day to day business. Whereas it seems the objectives of strong longtermism (if indeed it stands up to empirical challenges, eg how Fermi's paradox is resolved will have implications for the strength of strong longtermism) can be met through simple ordinary CEA arguments. Or at least such arguments can be used for leverage. To actually achieve the goals of longtermism it seems like MUCH more work needs to be happening in translational research to communicate academic x-risk work into policymakers' language for instrumental ends, not necessarily in strictly 'correct' ways. 

Comment by Matt Boyd on Major UN report discusses existential risk and future generations (summary) · 2021-10-07T22:30:16.923Z · EA · GW

I am also surprised that there are few comments here. Given the long and detailed technical quibbles that often append many of the rather esoteric EA posts, it surprises me that where there is an opportunity to shape tangible influence at a global scale there is silence. I feel that there are often gaps in the EA community in exactly the places that would connect research and insight with policy and governance.

Sean is right, there has been accumulating interest in this space. Our paper on the UN and existential risks in 'Risk Analysis' (2020) was awarded 'best paper' by that journal, and I suspect these kinds of sentiments, from the editors and many others in the risk community, have finally weighed on the UN with sufficient force, marshalled by the SG's generally sympathetic disposition.

The UN calls for futures and foresight capabilities across countries, and there is much scope for pressure on policymakers in every nation to act and establish such institutions. We have a forthcoming paper (November) in the New Zealand journal 'Policy Quarterly' that calls for a Parliamentary Commissioner for Extreme Risks, supported by a well-resourced office and working in conjunction with a Select Committee. The Commissioner could offer support to CEOs of public sector organisations as they complete the newly legislated 'long-term insights briefings' that are to be tabled in Parliament from 2022.

I advocate for more work of this kind, but projects that 'merely' translate technical philosophical and ethical academic products into policy advocacy pieces don't seem to attract funding. Yet they may have the greatest impact. It matters little whether a paper is cited 100 times; it matters very much whether the Minister with decision-making capability is swayed by a well-argued summary of the literature.

Comment by Matt Boyd on A Sequence Against Strong Longtermism · 2021-07-23T09:19:50.577Z · EA · GW

Thanks for collating all of this here in one place. I should have read the later posts before I replied to the first one. Thank you too for your bold challenge. I feel like Kant waking from his 'dogmatic slumber'. A few thoughts:

  1. Humanity is an 'interactive kind' (to use Hacking's term). Thinking about humanity can change humanity, and the human future.
  2. Therefore, Ord's 'Long Reflection' could lead to there being no future humans at all (if that was the course that the Long Reflection concluded). 
  3. This simple example shows that we cannot quantify over future humans, quadrillions or otherwise, or make long term assumptions about their value. 
  4. You're right about trends, and in this context the outcomes are tied up with 'human kinds', as humans can respond to predictions and thereby invalidate them. Makes me think of Godfrey-Smith's observation that natural selection has no inertia: change the selective environment and the observable 'trend' towards some adaptation vanishes.
  5. Cluelessness seems to be some version of the Socratic Paradox (I know only that I know nothing).
  6. RCTs don't just falsify hypotheses, but also provide evidence for causal inference (in spite of hypotheses!)

Comment by Matt Boyd on A case against strong longtermism · 2021-07-23T08:20:42.184Z · EA · GW

Hi Vaden, 

I'm a bit late to the party here, I know. But I really enjoyed this post, and thought I'd add my two cents' worth. Although I have a long-term perspective on risk and mitigation, and long-term sympathies, I don't consider myself a strong longtermist. That said, I wouldn't like to see anyone (eg from policy circles) walk away from this debate with the view that it is not worth investing resources in existential risk mitigation. I'm not saying that's what necessarily comes through, but I think there is important middle ground (and this middle ground may actually, instrumentally, lead to the outcomes that strong longtermists favour, without the need to accept the strong longtermist position).

I think it is just obvious that we should care about the welfare of people here and now. However, the worst thing that can happen to people existing now is for all of them to be killed. So it seems clear that funnelling some resources into x-risk mitigation, here and now, is important. And the primary focus should always be those x-risks that are most threatening in the near term (the target risks will no doubt change with time; eg I would say it is biotechnology in the next 5-10 years, then perhaps climate or nuclear, and then AI, followed by rarer natural risks or emerging technological risks, all the while building cross-cutting defences such as institutions and resilience). As you note, every generation becomes the present generation, and every x-risk will have its time. We can't ignore future x-risks, for this very reason. Each future risk 'era' will become present, and we had better be ready. So resources should be invested in future x-risks, or at least in understanding their timing.

The issue I have with strong-longtermism lies in the utility calculations. The Greaves/MacAskill paper presents a table of future human lives that is based on the carrying capacity of the Earth, solar system, etc. However, even here today we do not advocate some imperative that humans must reproduce right up to the carrying capacity of the Earth. In fact many of us think this would be wrong for a number of reasons. To factor 'quadrillions' or any definite number at all into the calculations is to miss the point that we (the moral agents) get to determine (morally speaking) the right number of future people, and we might not know how many this is yet. Uncertainty about moral progress means that we cannot know what the morally correct number is, because theory and argument might evolve across time (and yes, it's probably obvious but I don't accept that non-actual, and never-actual people can be harmed, and I don't accept that non-existence is a harm). 

However, there seems to be value in SOME humans persisting in order that these projects might be continued and hopefully resolved. Therefore, I don't think we should be putting speculative utilities into our 'in expectation' calculations. There are arguments for preventing x-risk independent of strong longtermism, and the emotional response it generates from many, potentially including wary policymakers, makes it a risky strategy to push. Even if EA is motivated by strong longtermism, it may be useful to advocate an 'instrumental' theory of value in order to achieve the strong-longtermist agenda. There is a possibility that some of EA's views can themselves be an information hazard. Being right is not always being effective, and therefore not always altruistic.

Comment by Matt Boyd on Is SARS-CoV-2 a modern Greek Tragedy? · 2021-05-10T23:30:29.432Z · EA · GW

Thanks for this response. The motivation for me writing this yesterday was a comment from a member of NZ's public sector, who said basically 'the Atomic Scientists article falls afoul of the principle of parsimony'. So I wanted to give the other side, ie that there actually are some reasons to favour a lab leak over the parsimonious natural explanation. I completely take your point about balance, but the piece was intended as part of a dialogue rather than a comprehensive analysis; that could have been clearer. Cheers.

Comment by Matt Boyd on Is SARS-CoV-2 a modern Greek Tragedy? · 2021-05-10T21:28:38.809Z · EA · GW

Thanks for these. Super interesting credences here, ranging from 19% (that health organisations will conclude lab origin) to 83% (that gain-of-function research was in fact contributory). I guess the strikingly wide range suggests genuine uncertainty. Watch this space with interest.

Comment by Matt Boyd on Are Humans 'Human Compatible'? · 2019-12-06T20:20:43.292Z · EA · GW

Great additional detail, thanks!

Comment by Matt Boyd on Eight high-level uncertainties about global catastrophic and existential risk · 2019-12-05T08:52:00.146Z · EA · GW

Another one to consider, assuming you see it at the same level of analysis as the eight above, is the spatial trajectory through which the catastrophe unfolds. Eg a pandemic will spread from its origin(s), and I'm guessing is statistically likely to impact certain well-connected regions of the world first. Or a lethal command to a robot army will radiate outward from the army's storage facility. Or nuclear winter will impact certain regions sooner than others. Or ecological collapse due to an unstoppable biological novelty will devour certain kinds of environment more quickly (the same possibly for grey goo), etc. There may be systematic regularities to which spaces on Earth are affected and when. These are currently completely unknown, but knowledge of such patterns could help target certain kinds of resilience and mitigation measures to where they are likely to have time to succeed before themselves being impacted.
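A toy simulation makes the idea concrete (the network and parameters are invented; the point is only that arrival times are systematically earlier for well-connected nodes):

```python
import random

# Toy sketch: seed a spreading catastrophe at a random node of a small
# hub-and-spoke network and record the step at which each node is hit.
# Hubs (high-degree nodes) tend to be reached earlier, so resilience
# measures sited at spokes have more time to act.
edges = {
    "Hub1": ["Hub2", "A", "B", "C"],
    "Hub2": ["Hub1", "D", "E", "F"],
    "A": ["Hub1"], "B": ["Hub1"], "C": ["Hub1"],
    "D": ["Hub2"], "E": ["Hub2"], "F": ["Hub2"],
}

def spread(origin, p=0.5, steps=20):
    hit = {origin: 0}                     # node -> arrival step
    for t in range(1, steps):
        for node in list(hit):            # snapshot: spread from hit nodes
            for nbr in edges[node]:
                if nbr not in hit and random.random() < p:
                    hit[nbr] = t
    return hit

arrival = spread(random.choice(list(edges)))
print(sorted(arrival.items(), key=lambda kv: kv[1]))  # hubs appear early
```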