[Linkpost] Michael Huemer on the case for Bayesian statistics 2023-02-07T17:52:17.253Z
A Letter to the Bulletin of Atomic Scientists 2022-11-23T20:06:00.367Z
Climate Change & Longtermism: new book-length report 2022-08-26T09:13:12.129Z
Should we buy coal mines? 2022-05-04T07:28:33.057Z
A review of Our Final Warning: Six Degrees of Climate Emergency by Mark Lynas 2022-04-15T13:43:19.026Z
Are we going to run out of phosphorous? 2021-12-07T14:09:54.454Z
Good news on climate change 2021-10-28T14:04:00.848Z
Economic policy in poor countries 2021-08-07T15:16:59.850Z
How well did EA-funded biorisk organisations do on Covid? 2021-06-02T17:25:41.175Z
Deference for Bayesians 2021-02-13T12:33:05.556Z
[Link post] Are we approaching the singularity? 2021-02-13T11:04:02.579Z
How modest should you be? 2020-12-28T17:47:10.799Z
Instructions on potential insomnia cure 2020-10-12T13:56:53.111Z
High stakes instrumentalism and billionaire philanthropy 2020-07-19T19:19:41.206Z
What is a good donor advised fund for small UK donors? 2020-04-29T09:56:50.097Z
How hot will it get? 2020-04-18T20:29:59.579Z
Pangea: The Worst of Times 2020-04-05T15:13:23.612Z
Covid-19 Response Fund 2020-03-31T17:22:05.999Z
Growth and the case against randomista development 2020-01-16T10:11:51.136Z
Is mindfulness good for you? 2019-12-29T20:01:28.762Z
The ITN framework, cost-effectiveness, and cause prioritisation 2019-10-06T05:26:24.879Z
What should Founders Pledge research? 2019-09-09T17:41:04.073Z
[Link] New Founders Pledge report on existential risk 2019-03-28T11:46:17.623Z
The case for delaying solar geoengineering research 2019-03-23T15:26:13.119Z
Insomnia: a promising cure 2018-11-16T18:33:28.060Z
Concerns with ACE research 2018-09-07T14:56:25.737Z
New research on effective climate charities 2018-07-11T13:51:23.354Z
The counterfactual impact of agents acting in concert 2018-05-27T10:54:03.677Z
Climate change, geoengineering, and existential risk 2018-03-20T10:48:01.316Z
Economics, prioritisation, and pro-rich bias 2018-01-02T22:33:36.355Z
We're hiring! Founders Pledge is seeking a new researcher 2017-12-18T12:30:02.429Z
Capitalism and Selfishness 2017-09-15T08:30:54.508Z
How should we assess very uncertain and non-testable stuff? 2017-08-17T13:24:44.537Z
Where should anti-paternalists donate? 2017-05-04T09:36:53.654Z
The asymmetry and the far future 2017-03-09T22:05:26.700Z


Comment by John G. Halstead (Halstead) on EA's weirdness makes it unusually susceptible to bad behavior · 2023-02-06T07:36:33.924Z · EA · GW

I strongly agree with a lot of your points here. To pick up on one strand you highlight, I think the fact that EA is very nerdy, and lacking 'street smarts' has been at the root of some (but not all) of the problems we've been seeing. I think it might be this rather than an intellectual commitment to assume good faith and tolerate weirdness that is the main issue, though maybe the first causes the second. Specifically, EAs seem to have been pretty naive in dealing with bad actors over the last few years and that persists to this day. 

If the problem is a lack of street smarts, then we don't need to get into debates about being less weird: it's unclear what 'weirdness' even means, and hard to judge which margin of weirdness you'd want to move, which makes general debates about weirdness difficult. But it's pretty clear that we need to be more street smart. 

Comment by John G. Halstead (Halstead) on Possible changes to EA, a big upvoted list · 2023-02-03T13:52:46.692Z · EA · GW

Do the votes mean that it would be undemocratic to impose democratic rule?

Comment by John G. Halstead (Halstead) on Doing EA Better · 2023-01-23T19:43:18.527Z · EA · GW

Thanks for the detailed response. 

I agree that we don't want EA to be distinctive just for the sake of it. My view is that many of the elements of EA that make it distinctive have good reasons behind them. I agree that some changes in governance of EA orgs, moving more in the direction of standard organisational governance, would be good, though I think they would probably be quite different to what you propose and certainly wouldn't be 'democratic' in any meaningful sense. 

  1. I don't have much to add to my first point and to the discussion below my comment by Michael PJ. Boiled down, I think the point that Cowen makes stripped of the rhetoric is just that EAs did a bad job on the governance and management of risks involved in working with SBF and FTX, which is very obvious and everyone already agrees with. It simply has no bearing on whether EAs are assessing existential risk correctly, and enormous equivocation on the word 'existential risk' doesn't change that fact. 
  2. Since you don't want diversity essentially along all dimensions, what sort of diversity would you like? You don't want Trump supporters; do you want more Marxists? You apparently don't want more right wingers even though most EAs already lean left. Am I right in thinking that you want diversity only insofar as it makes EA more left wing? What forms of right wing representation would you like to increase?
  3. The problem you highlight here is not value alignment as such but value alignment on what you think are the wrong focus areas. Your argument implies that value alignment on non-TUA things would be good. Correspondingly, if what you call 'TUA' (which I think is a bit of a silly label - how is it techno-utopian to think we're all going to be killed by technology?) is actually good, then value alignment on it seems good. 
  4. You argued in your post that people often have to publish pseudonymously for fear of censure or loss of funding, and the examples you have given are (1) your own post, and (2) a forum post on conflicts of interest. It's somewhat self-fulfilling to publish something pseudonymously and then use that as an argument that people have to publish things pseudonymously. I don't think it was rational for you to publish the post pseudonymously - I don't think you will face censure if you present rational arguments, and you will have to tell people what you actually think about the world eventually anyway. (btw I'm not a researcher at a core EA org any more.)
    1. I don't think the seniority argument works here. A couple of examples spring to mind. Leopold Aschenbrenner wrote a critique of EA views on economic growth, for which he was richly rewarded despite being a teenager (or whatever). The recent post about AI timelines and interest rates got a lot of support, even though it criticises a lot of EA research on timelines. I hadn't heard of any of the authors of the interest rate piece before. 
    2. The main example you give is the reception to the Cremer and Kemp piece, but I haven't seen any evidence that they did actually get the reception they claimed. 
  5. I'm not sure whether intelligence can be boiled down to a single number if this claim is interpreted in the most extreme way. But at least the single number of the g factor conveys a lot of information about how intelligent people are and explains about 40-50% of the variation in individual performance on any given cognitive task, a large correlation for psychological science! This widely cited recent review states "There is new research on the psychometric structure of intelligence. The g factor from different test batteries ranks people in the same way. There is still debate about the number of levels at which the variations in intelligence is best described. There is still little empirical support for an account of intelligence differences that does not include g."
    1.  "In fact, this could be argued to represent the sort of ideologically-agreeable overconfidence we warn of with respect to EAs discussing subjects in which they have no expertise." I don't think this gambit is open to you - your post is so wide ranging that I think it unlikely that you all have expertise in all the topics covered in the post, ten authors notwithstanding. 
    2. Of course, there is more to life, and to performance at work, than intelligence. 
  6. As I mentioned in my first comment, it's not true that the things that EAs are interested in are especially popular among tech types, nor are they aligned with the interests of tech types. The vast majority of tech philanthropists are not EA, and EA cause areas just don't help tech people, at least relative to everyone else in the world. In fact, I suspect the majority view among EAs is that progress in virology and AI should be slowed down if not stopped. This is actively bad for the interests of people invested in AI companies and biotech. "the fact that e.g. preventing wars does not disproportionately appeal to the ultra-wealthy is orthogonal." One of the headings in your article is "We align suspiciously well with the interests of tech billionaires (and ourselves)". I don't see how anything you have said here is a good defence against my criticism of that claim.
  7. There are a few things to separate here. One worry is that EAs/me are neglecting the expert consensus on the aggregate costs of climate change: this is emphatically not true. The only models that actually try to quantify the costs of climate change all suggest that income per person will be higher in 2100 despite climate change. From memory, the most pessimistic study, which is a massive outlier (Burke et al), projects a median case of a ~750% increase in income per person by 2100, with a lower 5% probability of a ~400% increase, on a 5ºC scenario. 
    1. A lot of what you say in your response and in your article seems inconsistent - you make a point of saying that EAs ignore the experts but then dismiss the experts when that happens to be inconsistent with your preferred opinions. Examples:
      1. Defending postcolonialism in global development 
      2. Your explanation of why Walmart makes money vs mainstream economics.
      3. Your dismissal of all climate economics and the IPCC
      4. 'Standpoint theory' vs analytical philosophy
      5. Your dismissal of Bayesianism, which doesn't seem to be aware of any of the main arguments for Bayesianism. 
      6. Your dismissal of the g factor, which doesn't seem to be aware of the literature in psychology. 
      7. The claim that we need to take on board Kuhnian philosophy of science (Kuhn believed that there has been zero improvement in scientific knowledge over the last 500 years)
      8. Your defence of critical realism 
      9. Similarly, Cremer (life science and psychology) and Kemp (international relations) take Ord, MacAskill and Bostrom to task for straying out of their epistemic lane and having poor epistemics, but then go on in the same paper to offer casual ~1 page refutations of (amongst other things) total utilitarianism, longtermism and expected utility theory.
    2. Your discussion of why climate change is a serious catastrophic risk kind of illustrates the point. "For instance, recent work on catastrophic climate risk highlights the key role of cascading effects like societal collapses and resource conflicts. With as many as half of climate tipping points in play at 2.7°C - 3.4°C of warming and several at as low as 1.5°C, large areas of the Earth are likely to face prolonged lethal heat conditions, with innumerable knock-on effects. These could include increased interstate conflict, a far greater number of omnicidal actors, food-system strain or failure triggering societal collapses, and long-term degradation of the biosphere carrying unforeseen long-term damage e.g. through keystone species loss." 
      1. Bressler et al (2021) model the effects of ~3ºC on mortality and find that it increases the global mortality rate by 1%, on some very pessimistic assumptions about socioeconomic development and adaptation. It's kind of true but a bit misleading to say that this 'could' lead to interstate conflict or omnicidal actors. Maybe so, but how big a driver is it? I would have thought that more omnicidal actors will be created by the increasing popularity of environmentalism. The only people who I have heard say things like "humanity is a virus" are environmentalists.
      2. Can you point me to the studies involving formal models that suggest that there will be global food system collapse at 3-4ºC of warming? I know that people like Lenton and Rockstrom say this will happen but they don't actually produce any quantitative evidence, and it's completely implausible on its face if you just think about what a 3ºC world would be like. Economic models include effects on agriculture and they find a ~5% counterfactual reduction in GDP by 2100 for warming of 5ºC. There's nothing missing in not modelling the tails here. 
  8. ok
  9. What is the rationale for democratising? Is it for the sake of the intrinsic value of democracy or for producing better spending decisions? I agree it would be more democratic to have all EAs make the decision than the current system, but it's still not very democratic - as you have pointed out, it would be a load of socially awkward anglophone white male nerds deciding on a lot of money. Why not go the whole hog and have everyone in the world decide on the money, which you could perhaps roughly approximate by giving it to the UN or something? 
    1. We could experiment with setting up one of the EA funds to be run democratically by all EAs (however we choose to assign EA status) and see whether people want to donate to it. Then we would get some sort of signal about how it performs and whether people think this is a good idea. I know I wouldn't give it money, and I doubt Moskovitz would either. I'm not sure what your proposal is for what we're supposed to do after this happens. 
  10. I actually think corporations are involved in collaborative mission-driven work, and your Mondragon example seems to grant this, though perhaps you are understanding 'mission' differently to me. The vast majority of organisations trying to achieve a particular goal are corporations, which are not run democratically. Most charities are also not run democratically. There is a reason for this. You explicitly said "Worker self-management has been shown to be effective, durable, and naturally better suited to collaborative, mission-oriented work than traditional top-down rule". The problems of worker self-management are well-documented, with one of the key downsides being that it creates a disincentive to expand, which would also be true if EA democratised: expanding would only dilute each person's influence over funding decisions. Another obvious downside is the loss of division of labour and specialisation, i.e. you would empower people without the time, inclination or ability to lead or make key decisions. 

"Finally, we are not sure why you are so keen to repeatedly apply the term “left wing environmentalism”. Few of us identify with this label, and the vast majority of our claims are unrelated to it." Evidently from the comments I'm not the only one who picked up on this vibe. How many of the authors identify as right wing? In the post, you endorse a range of ideas associated with the left including: an emphasis on  identity diversity; climate change and biodiversity loss as the primary risk to humanity; postcolonial theory; Marxist philosophy and its offshoots; postmodernist philosophy and related ideas; funding decisions should be democratised; and finally the need for EA to have more left wing people, which I take it was the implication of your response to my comment. 

If you had spent the post talking about free markets, economic growth and admonishing the woke, I think people would have taken away a different message, but you didn't do that because I doubt you believe it. I think it is important to be clear and transparent about what your main aims are. As I have explained, I don't think you actually endorse some of the meta-level epistemic positions that you defend in the article. Even though the median EA is left wing, you don't want more right wing people. At bottom, I think what you are arguing for is for EA to take on a substantive left wing environmentalist position. One of the things that I like about EA is that it is focused on doing the most good without political bias. I worry that your proposals would destroy much of what makes EA good. 

Comment by John G. Halstead (Halstead) on FLI FAQ on the rejected grant proposal controversy · 2023-01-19T21:53:02.218Z · EA · GW

I see. I wasn't being provocative with my question, I just didn't get it.

Comment by John G. Halstead (Halstead) on Doing EA Better · 2023-01-19T19:20:20.394Z · EA · GW

You should probably take out the claim that FLI offered 100k to a neo-Nazi group, as it doesn't seem to be true.

Comment by John G. Halstead (Halstead) on FLI FAQ on the rejected grant proposal controversy · 2023-01-19T18:02:44.695Z · EA · GW

I'm somewhat confused as to why this is controversial. Why is it news that FLI didn't make a grant to a far right org?

Comment by John G. Halstead (Halstead) on Doing EA Better · 2023-01-18T18:52:04.189Z · EA · GW

I appreciate you taking the effort to write this. However, like other commentators I feel that if these proposals were implemented, EA would just become the same as many other left wing social movements, and, as far as I can tell, would basically become the same as standard forms of left wing environmentalism, which are already a live option for people with this type of outlook and get far more resources than EA ever has. I also think many of the proposals here have been rejected for good reason, and that some of the key arguments are weak. 

  1. You begin by citing the Cowen quote that "EAs couldn't see the existential risk to FTX even though they focus on existential risk". I think this is one of the more daft points made by a serious person on the FTX crash. Although the words 'existential risk' are the same here, they have completely different meanings, one being about the extinction of all humanity or things roughly as bad as that, and the other being about risks to a particular organisation. The problem with FTX is that there wasn't enough attention to existential risks to FTX and the implications this would have for EA. In contrast, EAs have put umpteen person-hours into assessing existential risks to humanity, and the epistemic standards used to do that are completely different to those used to assess FTX. 
  2. You cite research purporting to show that diversity of some form is good for collective epistemics and general performance. I haven't read the book that you cite, but I have looked into some of this literature, and as one might expect for a topic that is so politically charged, a lot of the literature is not good, and some of the literature actually points in the opposite direction, even though it is career suicide to criticise diversity, and there are likely personal costs even for me discussing counter-arguments here. For example, this paper suggests that group performance is mainly determined by the individual intelligence of the group members, not by things like gender diversity. This paper lists various costs of team diversity that are bad for collective dynamics. You say that diversity "essentially along all dimensions" is good for epistemics. This is the sort of claim that sounds good, but also seems to be clearly false. I seldom see people who make this argument suggest that we need more Trump supporters, religious fundamentalists, homophobes or people without formal education in order to improve our performance as a community. These are all huge chunks of the national/global community but also massively underrepresented in EA. There are lots of communities that are much more diverse than EA but which also seem to have far worse epistemics than EA. Examples include Catholicism, Trumpism, environmentalism, support of Bolsonaro/Modi etc.
  3. Relatedly, I think value alignment is very important. I have worked in organisations with a mix of EA and non EA people and it definitely made things much harder than if everyone were aligned, holding other things equal. On one level, it is not surprising that a movement trying to achieve something would agree not just at a very abstract level, but also about many concrete things about the world. If I think that stopping AI progress is good and you think it is bad, it is going to be much harder (though not impossible, per moral trade) for us to achieve things in the world. Same for speeding up progress in virus synthesis. The 80,000 Hours articles on goal directed groups are very good on this. 
  4. I don't agree that EA is hostile to criticism. In fact it seems unusually open to criticism, and rational discussion of ideas rather than dismissing them on the basis of vibe/mood affiliation/political amenability. Aside from the controversial Cremer and Kemp case (who didn't publish pseudonymously), what are the major critiques that have been presented pseudonymously or have caused serious personal consequences for the critics? By your definition, I think my critique of GiveWell counts as deep, but I have been rewarded for this because people thought the arguments were good. To stress, mine and Hauke's claim was that most of the money EA has spent has been highly suboptimal. 
  5. You say "For instance, (intellectual) ability is implicitly assumed within much of EA to be a single variable[32], which is simply higher or lower for different people." This isn't just an assumption of EA, but a central finding of psychological science that things that are usually classed as intellectual abilities are strongly correlated - the g factor. eg maths ability is correlated with social science ability, and english literature ability etc. 
  6. I just don't think it is true that we align well with the interests of tech billionaires. We've managed to persuade two billionaires of EA and one believed in EA before he became a billionaire. The vast majority of billionaires evidently don't buy it and go off and do their own thing, mainly donating to things that sound good in their country, to climate change, or not donating at all. Longtermist EAs would like lots more money to be spent on AI alignment, slowing down AI progress, on slowing down progress in virology or increasing spending on counter-measures, and on preventing major wars. I don't see how any of these things promise to benefit tech founders as a particular constituency in any meaningful way. That being said, I agree that there is a problem with rich people becoming spokespeople for the community or overly determining what gets done, and we need far better systems to protect against that in future. eg FTX suddenly deciding to do all this political stuff was a big break from previous wisdom and wasn't questioned enough. 
  7. On a personal note, I get that I am a non-expert in climate, and so am wide open to criticism as an interloper (though I have published a paper on climate change). But then it is also true that getting climate people to think in EA terms is very difficult. Also, the view I recently outlined is basically in line with all climate economics. In that sense the view I hold, and which I think is widely held in longtermist EA, is in line with one expert consensus. Indeed, it is striking that this is the one group that actually tries to quantify the aggregate costs of climate change. I also don't think there are any areas where I disagree with the line taken by the IPCC, which is supposed to express the expert consensus on climate. The view that 4ºC is going to kill everyone is one held by some activists and a small number of scientists. Either way, we need to explain why we are ignoring all the climate economists and listening to Rockstrom/Lenton instead. On planetary boundaries, as far as I know, I am the only EA to have criticised planetary boundaries, and I don't dismiss them in passing, but at considerable length. The reviewer I had for that section is a prof and strongly agreed with me.
  8. Differential tech progress has been subject to peer review. The Bostrom articles on it are peer reviewed. 
  9. The implications of  democratising EA are mindboggling. Suppose that Open Phil's spending decisions are made democratically by EAs. This would put EAs in charge of ~$10bn. We'd then need to decide who counts as an EA. Because so much money would be on the table, lots of people who we wouldn't class as EAs would want a say, and it would be undemocratic to exclude them (I assume). So, the 'EA franchise' would expand to anyone who wants a say (?) I don't know where the money would end up after all this, but it's fair to say that money spent on reducing engineered pandemics, AI and farm animal welfare would fall from the current pitiful sum to close to zero. 
  10. You say that worker self-management has been proven to be better for mission-oriented work than top-down rule. This is clearly false. There is a tiny pocket of worker cooperatives (eg in the Basque region) who have been fairly successful. But almost all companies are run oligarchically in a top-down fashion by boards or leadership groups. 

Overall, we need to learn hard lessons from the FTX debacle. But thus far, the collapse has mainly been used to argue for things that are completely unrelated to FTX, and mainly to advance an agenda that has been disfavoured in EA so far, and with good reason. For Cowen, this was neoliberal progress; here, it is left wing environmentalism. 

Comment by John G. Halstead (Halstead) on AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years · 2023-01-13T09:38:04.838Z · EA · GW

What do you make of the 'impatient philanthropy' argument? Do you think EAs should be borrowing to spend on AI safety?

Comment by John G. Halstead (Halstead) on AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years · 2023-01-13T09:32:21.935Z · EA · GW

The claim in the post (which I think is very good) is that we should have a pretty strong prior against anything which requires positing massive market inefficiency on any randomly selected proposition where there is lots of money on the table. This suggests that you should update away from very short timelines. There's no assumption that markets are a "mystical source of information", just that if you bet against them you almost always lose. 

There's also a nice "put your money where your mouth is" takeaway from the post, which AFAIK few short-timelines people are doing. 
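For intuition on why expected transformative growth should show up in interest rates, here is a minimal sketch using the standard Ramsey rule, r ≈ δ + η·g (pure time preference plus the elasticity of marginal utility times expected consumption growth). The parameter values below are illustrative assumptions of mine, not numbers taken from the linked post:

```python
# Ramsey rule: equilibrium real interest rate r = delta + eta * g,
# where delta is pure time preference, eta the elasticity of marginal
# utility, and g expected consumption growth. Illustrative values only.

def ramsey_rate(delta: float, eta: float, g: float) -> float:
    """Real rate implied by expected consumption growth g."""
    return delta + eta * g

normal = ramsey_rate(delta=0.01, eta=1.5, g=0.02)          # business-as-usual growth
transformative = ramsey_rate(delta=0.01, eta=1.5, g=0.30)  # explosive AI-driven growth

# If markets put much weight on explosive growth, real rates should sit
# far above historically observed levels.
print(f"normal growth: r = {normal:.1%}")                  # 4.0%
print(f"transformative growth: r = {transformative:.1%}")  # 46.0%
```

The point is not that these parameters are right, but that any serious probability of transformative growth pushes the implied rate well outside the observed range, which is the sense in which betting against markets is costly.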

Comment by John G. Halstead (Halstead) on Beyond Simple Existential Risk: Survival in a Complex Interconnected World · 2023-01-06T20:30:44.247Z · EA · GW
  • I'm not sure they're middle of the road on civilisational vulnerability. It would be pretty surprising if extreme weather events made a big difference to the overall picture. For the kinds of extreme weather events one sees in the literature, it's just not a big influence on global GDP. How bad would a hurricane or flood have to be to push things from 'counterfactual GDP reduction of 5%' to civilisational collapse? 
  • I don't think they fully discount/ignore the possibility of catastrophe at 3/4ºC. In part this is just an outcome of the models and of the scientific literature. There are no impacts that come close to catastrophe in the scientific literature for 3/4ºC. I agree they miss some tipping points, but looking at the scientific literature on that, it's hard to see how it would make a big difference to the overall picture. 
  • I haven't read those papers and don't have time to do so now unfortunately. My argument there doesn't rely on one study but on a range of studies in the literature for different warm periods. The Permian was a very extreme and unusual case because it involved such massive land-based extinctions, which were caused by the release of halogens, a mechanism that is not relevant to future climate change. Also, both the Permian and PETM were extremely hot relative to what we now seem to be in for (17ºC vs 2.5ºC). 
  • I'm not sure I see how I am not engaging with you on planetary boundaries. I thought we were disagreeing about whether to put weight on planetary boundaries, and I was arguing that the boundaries just seem made up. Using EV may have its own problems but that doesn't make planetary boundaries valid. 
  • I don't really see how the world now is more vulnerable to any form of weather events in any respect than it has been at any other point in human history. Society routinely absorbs large bad weather events; they don't even cause local civilisational collapse any more (in middle and high income countries). Deaths from weather disasters have declined dramatically over the last 100 or so years, which is pretty strong evidence that societal resilience is increasing, not decreasing. In the pre-industrial period, all countries suffered turmoil and hunger due to cold and droughts. This doesn't happen any more in countries that are sufficiently wealthy. Many countries now suffer drought, almost entirely due to implicit subsidies for agricultural water consumption. It is very hard to see how this could lead eg to collapse in California or Spain. 
  • Can you set out an example of a cascading causal process that would lead to a catastrophe? 
  • I'm not sure that there is some meta-level epistemic disagreement; I think we just disagree about what the evidence says about the impacts of climate change. In 2016, I was much more worried than the average FHI person about climate change, but after looking at the impacts literature and recent changes in likely emissions, I updated towards climate change being a relatively minor risk. Comparing to bio, for instance: after reading about trends in gene synthesis technologies and costs, it takes about 30 minutes to see how it poses a major global catastrophic risk in the coming decades. I've been researching climate change for six years and struggle to see it. I am not being facetious here; this is my honest take.

Comment by John G. Halstead (Halstead) on Beyond Simple Existential Risk: Survival in a Complex Interconnected World · 2022-12-13T16:35:13.156Z · EA · GW
  • I agree that climate-economy models aren't good at some types of extremes, but I think there are different versions of this argument, some of which have become weaker over the years. One of Weitzman's points was that there was a decidedly non-negligible chance of more than 6ºC and our economic models weren't good at capturing how bad this would be and so tended to underestimate climate risk. I think this was basically right at the time he was writing. But since 5ºC now looks less and less likely, this critique has less and less bite. Because there is such a huge literature on the impact of 5ºC, the models now in principle have a much firmer foundation for damage estimates. eg the Takakura 2019 paper that I go on about in the report uses up-to-date literature on a wide range of impact channels, but still only gets like a 5% counterfactual reduction in welfare-equivalent of GDP by 2100, and so probably higher average living standards than today. 
    • Another version of this is that the models aren't good at capturing tipping points. I agree with this, but I also find it difficult to see how this would make a dramatic difference to the damage estimates if you actually drill down into the literature on the impact of different tipping points. Tipping points that might cause different levels of warming are not relevant to damage estimates, so the main ones that seem relevant are ice sheet collapse, regional precipitation and temperature changes, such as changes in monsoons, which might be caused eg by collapse of the AMOC. For the impacts discussed in the literature, it is difficult to see how you get anywhere close to an existential catastrophe if any of these things happen. 
    • Aside from that, it is noteworthy that some economic models actually try to capture the literature on the impact of warming of 5ºC on things like agriculture, sea level rise, temperature-related deaths, lost productivity from heat etc. There is a group of scientists who say that 3ºC/4ºC is catastrophic on the basis of what the scientific literature says about these impacts. The models strongly suggest that they are wrong, and it is not clear what their response is.
    • All this being said, I am sympathetic to some critiques of the economic models, eg a lot of the Nordhaus stuff. When I was writing the report, I had thought about putting no weight on them at all, but after digging a bit I changed my mind. I think some of the models make a decent stab at quantifying aggregate costs. 
  • I agree that climate changes have contributed at least to some civilisational trauma throughout history. The literature on this suggests that climate change has been correlated with local civilisational trauma. But: (a) local collapse is a far cry from global collapse; (b)  most of the time this was due to cooling rather than warming; (c) the mechanism was usually damage to agricultural output, but there is now far more slack in the system, and we have massively better technology to deal with any disruption; (d) we in general have far more advanced technology, and whereas in the past >90% of the workforce would have been employed in agriculture, now <20% is (or whatever); (e) the relationship between climate change and civilisational turmoil breaks down by the industrial revolution, which provides some support for point (c). 
  • The paleoclimate point doesn't rely on one datapoint: it's data from 160 million years of climatic and evolutionary history. Massive climate change over that period didn't cause species extinctions, as some might have expected it to. 
    • As you say, with climate change, the extinctions usually happened among marine life, due to ocean anoxia and ocean acidification, and it's hard to see the mechanism by which CO2 pollution would cause land-based extinctions, unless something else weird happens at the same time, such as a volcanic eruption puncturing through salt deposits, as happened at the Permian. 
    • For the 2-4ºC of warming that now looks likely, it's really hard to see why it would cause damage on anything like the scale of eg the Permian, given that the effect is an order of magnitude smaller. 
  • I don't think they are quasi-arbitrary; they are totally arbitrary. eg they propose a planetary boundary for biodiversity intactness which by their own admission is made up. The boundary also can't be real, since various countries across Eurasia completely destroyed their pre-modern ecosystems after the agricultural revolution without causing anything like civilisational collapse. 
    • A lot of people criticise planetary boundaries for being political advocacy. The clearest evidence for this is Steffen et al proposing a supposed planetary boundary for a 'Hothouse earth' at 2ºC (which happens to be the Paris target) on the basis of no argument.
    • When we are acting under uncertainty I think we should use expected value. Alleged boundaries might be a useful Schelling point for political negotiation (like the 2ºC threshold), but it's not a good approach for actually quantifying risk. Another downside of a boundary is that it implies that anything we do once we pass the boundary is pointless. 
  • Kemp, Jehn and others claim that the effect of warming of more than 3ºC is 'severely neglected'. But all of the impacts literature explores the effect of RCP8.5 by 2100, which implies 4-5ºC of warming. Jehn's search strategy uses temperature mentions to measure neglect, but if you use RCP mentions, you don't get the same result. 
  • My argument here was that I think your argument proves too much - it suggests that the world is extremely fragile to eg agricultural disruption and heat waves that happen all the time. Given that the world was eg a lot poorer in 1980 and so had a lot lower adaptive capacity, why didn't various weather disasters trigger cascading catastrophes back then? The number of people dying in weather-related disasters has declined massively over time, so on this reasoning we should have expected cascades in the 1920s, and should expect them even less in the future.
    •  I also don't see why cascading risk would change the cause ranking among top causes. Why aren't democratised bioweapons and AI also cascading risks? 
  • What are the causal pathways that might contribute to conflict risk that you think I have missed? I don't really get what is meant to happen that I haven't already discussed. I talk about all of the contributors to war outlined in textbooks about war and combine that with the literature on climate impacts. It is just really a stretch to make it an important contributor to US-China dynamics. 
Comment by John G. Halstead (Halstead) on Why did CEA buy Wytham Abbey? · 2022-12-06T20:02:06.982Z · EA · GW

Side point on a pet peeve: raw house price increases don't account for the cost of improvements and renovations and the effect they might have on the value of the property. eg some houses might have gained in value because the owners added a bedroom

Comment by John G. Halstead (Halstead) on Beyond Simple Existential Risk: Survival in a Complex Interconnected World · 2022-11-30T10:55:29.528Z · EA · GW

Thanks yes that is helpful. Perhaps we can now get into the substance. 

  • It is noteworthy how different your estimates of the x-risk of climate change are to all other published attempts to quantify the aggregate costs of climate change. All climate-economy models imply not just that climate change won't cause an existential catastrophe, but that average living standards will be higher in the future despite climate change. When people try to actually quantify and add up the effect on things like agriculture, sea level rise and so on, they don't get anywhere near to civilisational collapse, but instead get a counterfactual reduction in GDP on the order of 1-5% relative to a world with no climate change (not relative to today). 
  • I don't think past precedent can take us very far here, since there are no precedents of climate change causing human extinction, though anthropics is obviously an issue here. In the report, I also discuss how in the last 160 million years, climate change has not been associated with elevated rates of species loss. Humans also survive and thrive in very diverse environmental niches at the moment, with an annual average temperature of 10ºC in the UK, but closer to 25ºC in South Asia. Within this annual average, there is also substantial diurnal and seasonal variation: it's around 5ºC in the UK now but will reach 20ºC in the summer. Humans have survived dramatic climate change over the last 300,000 years, and our hominid ancestors also survived when the world was about 4ºC warmer. It's hard to see why climate change of 2-4ºC would make such a massive difference, so as to constitute an existential catastrophe.
  • I disagree about planetary boundaries for reasons I discuss in the report. I have examined several of the boundaries in depth and they just seem to be completely made up. 
  • It is not true that there is a small amount of research on the tails of warming. Business as usual is now agreed to be 2.5ºC with something like a 1-5% chance of 4ºC. The impacts literature has in fact been heavily criticised for focusing too much on the impacts of RCP8.5, which implies 5ºC by 2100. 
  • The approach that you advocate for seems to me to establish not just that climate change is a much bigger risk than commonly recognised but also that many other problems are as well. Other problems also have similar or larger effects to climate change when calculated in the usual way used in economic analysis. This includes things like mispricing of water, immigration restrictions, antimicrobial resistance, underinvestment in vaccines, a lot of things that affect the media, the prohibition of GM food, underinvestment in R&D, bad monetary policy, economists focusing on RCTs, housing regulation, the drug war etc. If climate change is a cascading risk on the order of 0.01pp to 1pp, then these problems should be as well. But if they are as well, then total existential risk from non-AI and non-bio sources is way way higher than commonly recognised and doom is almost certain. The reasoning suggests that the world is so fragile that it is unlikely that we could even have got to the current level of technological development. 
  • I would view a lot of my report as assessing cascading risk. I discuss pathways such as climate change => civil conflict => political instability => interstate war. I also discuss effects on migration and the spillover effects this might have. What difference would a cascading risk approach take here? Related to this, I don't view causal chains like this as very understandable and I say so in the report. But we still have ideas about how big effects some things have. The causes of war between the US and China or Russia and China 
Comment by John G. Halstead (Halstead) on A Letter to the Bulletin of Atomic Scientists · 2022-11-25T07:08:26.784Z · EA · GW

Yeah I can see that perspective. The aim here was more to point out malfeasance on the part of the Bulletin rather than Torres.  I would have expected a lot better from the Bulletin

Comment by John G. Halstead (Halstead) on A Letter to the Bulletin of Atomic Scientists · 2022-11-24T20:24:16.178Z · EA · GW

Torres is now fully they/them I think

Comment by John G. Halstead (Halstead) on A Letter to the Bulletin of Atomic Scientists · 2022-11-24T19:44:26.395Z · EA · GW

That is Peter Watson, not Andrew Watson. Both were contacted and provided feedback.

Comment by John G. Halstead (Halstead) on A Letter to the Bulletin of Atomic Scientists · 2022-11-24T11:29:52.112Z · EA · GW

On your first question, several people Torres contacted reached out to us to tell us that they had a weird email from Torres, which made some of them uncomfortable. In some of the emails Torres repeated the false claim that we had essentially fabricated the acknowledgements. 

Second, they annotated the article, but then tweeted the false claim! The claim should not be allowed to stay in the piece once they have checked it and found out that it is false. It's basic journalism. If they think we lied to them about consulting the experts, then they should say so explicitly. If they think the claim is false, it is obvious that they should remove it, since it is an accusation of research misconduct.

Comment by John G. Halstead (Halstead) on Beyond Simple Existential Risk: Survival in a Complex Interconnected World · 2022-11-23T20:22:04.391Z · EA · GW

Thanks for this it is useful. What is your estimate of the existential risk due to climate change? I obviously have it very low, so it would be useful to know where you are at on that.  Could you explain what the main drivers of the risk are, from your point of view? Then we can get into the substance a bit more

Comment by John G. Halstead (Halstead) on Beyond Simple Existential Risk: Survival in a Complex Interconnected World · 2022-11-22T16:56:59.715Z · EA · GW

Thanks for this Gideon. Having read this and your comments on my climate report, I am still not completely sure what the crux of the disagreement is between us. I get that you disagree with my risk estimates, but I don't really understand why. Perhaps we could discuss on here, if you were up for it

Comment by John G. Halstead (Halstead) on Some important questions for the EA Leadership · 2022-11-21T09:45:21.064Z · EA · GW

Isn't the point with the Carrick thing not only that it failed, but that we shouldn't have been doing that kind of thing? It seemed like a pretty big break from previous approaches which were to stay out of politics

Comment by John G. Halstead (Halstead) on The FTX Future Fund team has resigned · 2022-11-12T12:15:24.374Z · EA · GW

Thanks for sharing this nbouscal. How many people did you tell about this at the time?

Comment by John G. Halstead (Halstead) on The FTX Future Fund team has resigned · 2022-11-12T12:11:31.261Z · EA · GW


Comment by John G. Halstead (Halstead) on The FTX Future Fund team has resigned · 2022-11-12T10:38:03.482Z · EA · GW

I think it is very important to understand what was known about SBF's behaviour during the initial Alameda breakup, and for this to be publicly discussed and to understand if any of this disaster was predictable beforehand. I have recently spoken to someone involved who told me that SBF was not just cavalier, but unethical and violated commonsense ethical norms. We really need to understand whether this was known beforehand, and if so learn some very hard lessons. 

It is important to distinguish different types of risk-taking here. (1) There is the kind of risk-taking that promises high payoffs but with a high chance of the bet falling to zero, without violating commonsense ethical norms. (2) There is risk-taking in the sense of being willing to risk it all by secretly violating ethical norms to get more money. One flaw in SBF's thinking seemed to be that risk-neutral altruists should take big risks because the returns can only fall to zero. In fact, the returns can go negative - eg all the people he has stiffed, and all of the damage he has done to EA. 

Comment by John G. Halstead (Halstead) on Lord Martin Rees: an appreciation · 2022-10-26T08:30:28.624Z · EA · GW

He  also provided a blurb for Emile Torres' book, well after Torres said that Nick Bostrom, Will MacAskill, Eliezer Yudkowsky, Toby Ord, Hilary Greaves etc endorse white supremacist ideology and eugenics. 

Comment by Halstead on [deleted post] 2022-09-15T19:54:42.619Z

hi david, this was from before he was banned from the forum but after his beef with me started - this was while he was doing all the white supremacy articles about me, beckstead and others. he had a long-standing dispute with the people mentioned, and independently at the time he was especially annoyed at me for criticising him. I think that is what led him to namecheck me in his allegation.

I hadn't heard of one of the people he was accusing at the time that he wrote the facebook post. I have no idea whether or not the allegations are true, I just don't understand why he involved me in them. 

Comment by Halstead on [deleted post] 2022-09-15T15:16:11.675Z

Given that people are sharing evidence on Torres, I thought I would chime in. I agree it would have been better for the OP to share with Zoe before posting, but I also think working with Torres is a mistake. 

My relationship with Torres started after I criticised something he wrote about Steven Pinker on Facebook - my critique was about 3 sentences. My critique was supported by others in the community, including Will MacAskill. I think this was the start of Torres becoming disenchanted with EA. 

From this point on, he published several now infamous pieces suggesting that I and others in EA support white supremacy. He also sent me numerous messages on Facebook after I had stopped responding. In this Facebook post, Torres inexplicably namechecks me while he is accusing some people of being rapists/paedophiles (their names are redacted)

My whole experience with Torres has been surreal - for one small piece of criticism, he went after me for years. I know he has done the same to others: some people he has gone after have needed counselling, and I think people should take that into account when they interact with Torres. 

For people who are confused that Torres, who wrote a book defending the FHI-house view of x-risk in 2017 and endorsed that view until his review of Pinker in 2019, now thinks EA is so bad, it seems to be because he thinks he faced some rejection by the community. 

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-09-09T08:53:58.103Z · EA · GW

I would say that for all of the 'non-EA' reviewers, the review was very extensive, and this was also true of some of the EA reviewers (the others were more pushed for time). The non-EA expert reviewers were also compensated for their review in order to incentivise them to review in depth. 

It is true that I ultimately decided whether or not to publish, so this makes it different to peer review. Someone mentioned to me that some people mean by 'peer review' that the reviewers have to agree for publication to be ok, but this wasn't the case for this report. Though it was reviewed by experts, ultimately I decided whether or not to publish it in its final state. 

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-09-09T08:49:13.953Z · EA · GW

Thanks for this. Someone else raised some issues with the moist greenhouse bit, and I need to revise. I still think the Ord estimate is too high, but I think the discussion in the report could be crisper. I'll revert back once I've made changes

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-09-04T13:12:00.510Z · EA · GW

I'd say the depth of review was similar to peer review yes, though it is true to say that publication was not conditional on the peer reviewers okaying what I had written. As mentioned, the methodology was reviewed, yes. So, this is my view, having taken on significant expert input. 

A natural question is whether my report should be given less weight eg than a peer reviewed paper in a prominent journal. I think as a rule, a good approach is to start by getting a sense of what the weight of the literature says, and then exploring the substantive arguments made. For the usual reasons, we should expect any randomly selected paper to be false. Papers that make claims far outside the consensus position that get published in prominent journals are especially likely to be false. There is also scope for certain groups of scientists to review one another's papers such that bad literatures can snowball. 

This isn't to say that any random person writing about climate change will be better than a random peer reviewed paper. But I think there are reasons to put more weight on the views of someone who has good epistemics (not saying this is true of me, but one might think it is true of some EA researchers) and who is actually talking about the thing we are interested in - i.e. the longtermist import of climate change. Most papers just aren't focusing on that, but will use similar terminology. e.g. there is a paper by Xu and Ramanathan which says that climate change is an existential risk but uses that term in a completely different way to EAs. 

I will give some examples of the flaws of the traditional peer review process as applied to some papers on the catastrophic side of things. 

  1. A paper that is often brought up in climate catastrophe discussions is Steffen et al (2018) - the 'Hothouse Earth' paper. That paper has now been cited more than 2,000 times. For reasons I discuss in the report, I think it is surprising that the paper was published. The IPCC also disagrees with it.

2. The Kemp et al 2022 PNAS paper (also written by many planetary boundaries people) was peer reviewed, but also contains several errors.

For instance, it says "Yet, there remain reasons for caution. For instance, there is significant uncertainty over key variables such as energy demand and economic growth. Plausibly higher economic growth rates could make RCP8.5 35% more likely (27)." 

The cite here in note (27) is to Christensen et al (2018), which actually says "Our results indicate that there is a greater than 35% probability that emissions concentrations will exceed those assumed in RCP8.5." i.e. their finding is about the percentage point chance of RCP8.5, not about an increase in the relative risk of RCP8.5. 
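To make the difference between the two readings concrete, here is a minimal sketch. The baseline probability below is a hypothetical number of my own choosing, purely for illustration; neither paper supplies one.

```python
# Two readings of "35%" applied to the probability of RCP8.5:
# - Christensen et al (2018): P(concentrations exceed RCP8.5) > 0.35,
#   an absolute (percentage point) probability.
# - Kemp et al's gloss ("35% more likely"): a relative increase over
#   some unstated baseline probability.

baseline = 0.20                      # hypothetical baseline P(RCP8.5) - my assumption
relative_reading = baseline * 1.35   # what "35% more likely" would mean: 0.27
absolute_reading = 0.35              # what the cited paper actually reports

# The two readings give different numbers for any baseline below 0.35/1.35,
# so the gloss is not a faithful restatement of the finding.
print(relative_reading, absolute_reading)
```

Under this (arbitrary) baseline the relative reading gives 27%, while the cited finding is an absolute 35% - the two claims simply aren't the same.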

Another example: "While an ECS below 1.5 °C was essentially ruled out, there remains an 18% probability that ECS could be greater than 4.5 °C (14)." 

The cite here is to the entire WG1 IPCC report (not that useful for checking, but that aside...). The latest IPCC report says "a best estimate of equilibrium climate sensitivity of 3°C, with a very likely range of 2°C to 5°C. The likely range [is] 2.5°C to 4°C". The IPCC says "Throughout the WGI report and unless stated otherwise, uncertainty is quantified using 90% uncertainty intervals. The 90% uncertainty interval, reported in square brackets [x to y], is estimated to have a 90% likelihood of covering the value that is being estimated. The range encompasses the median value, and there is an estimated 10% combined likelihood of the value being below the lower end of the range (x) and above its upper end (y). Often, the distribution will be considered symmetric about the corresponding best estimate, but this is not always the case. In this Report, an assessed 90% uncertainty interval is referred to as a ‘very likely range’. Similarly, an assessed 66% uncertainty interval is referred to as a ‘likely range’."

So, the 66% CI is 2.5ºC to 4ºC and the 90% CI is 2ºC-5ºC. If this is symmetric, then this means there is a 17% chance of >4ºC, and a 5% chance of >5ºC. It's unclear whether the distribution is symmetric or not - the IPCC does not say - but if it is, then the '18% chance of >4.5ºC' claim in Climate Endgame is wrong. So, a key claim in that paper - about the main variable of interest in climate science - cannot be inferred from the given reference. 
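To spell out the arithmetic, here is a rough sketch. The assumption that each IPCC interval is central (i.e. splits the leftover probability mass evenly between its two tails), and the linear interpolation between the two known tail points, are mine for illustration, not the IPCC's.

```python
# IPCC AR6 ECS ranges: 'likely' (66%) is 2.5-4.0C; 'very likely' (90%) is 2.0-5.0C.
# If each interval is central, the upper-tail probabilities follow directly:
p_above_4c = (1 - 0.66) / 2   # P(ECS > 4.0C) = 17%
p_above_5c = (1 - 0.90) / 2   # P(ECS > 5.0C) = 5%

# Any monotone tail between these two points puts P(ECS > 4.5C) somewhere
# between 5% and 17%; crude linear interpolation at the midpoint gives ~11%,
# in any case below the 18% figure claimed for >4.5C.
p_above_4_5c = p_above_4c + (p_above_5c - p_above_4c) * 0.5

print(round(p_above_4c, 2), round(p_above_4_5c, 2), round(p_above_5c, 2))
```

Whatever the exact tail shape, a claim of 18% for >4.5ºC sits above the 17% implied for the lower threshold of >4ºC, which is why it can't be derived from the cited intervals.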

3. Jehn et al have published two papers cited in Kemp et al (2022), one of which says that "More likely higher end warming scenarios of 3 °C and above, despite potential catastrophic impacts, are severely neglected." This is just not true, but nevertheless made it through peer review. Almost every single climate impact study reports the impact of 4.4ºC. There is barely a single chart in the entire IPCC impacts report that does not report that. We can perhaps quibble over what 'severely neglected' means, but it doesn't mean 'shown in every single chart in the IPCC climate impacts book'. It is surprising that this got through peer review. 


As I have said, these are just single studies. I am consistently impressed by how good the IPCC is at reporting the median view in the literature, given how politicised the whole process must be. 


I also do not think there is any tendency to downplay risks in the climate science literature. If you look at studies on publication bias in climate science, they find that effect sizes in abstracts in climate change papers have a tendency to be significantly inflated relative to the main text. This is especially pronounced in high impact journals. I have also found this from personal experience. Overall, I think in some cases the risks are overstated, in some they are understated, but there is no systematic pattern. 

Probably the best way to examine whether my substantive conclusions are wrong would be to raise some substantive criticisms/carry out a redteam - I would welcome this. I emphasise that if my arguments are correct, then the scale of biorisk is numerous orders of magnitude larger than climate change. 

Comment by John G. Halstead (Halstead) on EA is about maximization, and maximization is perilous · 2022-09-04T10:30:54.045Z · EA · GW

One issue I have with these arguments for pluralism, and for sometimes obeying something like common sense morality for its own sake and independent of utilitarian justification, is that common sense morality is crazy/impossible to follow in almost all normal decision situations if you think its implications through properly.

One argument for this is MacAskill's argument that deontology requires paralysis. Every time you leave the house, you foreseeably cause someone to die by changing the flow of traffic. Cars also contribute to air pollution, which foreseeably kills people, suggesting that emitting any amount of pollution is impermissible because it violates nonconsequentialist side-constraints. I don't understand how you can give some weight to this type of view. 

Note that this is different from the point that we should follow utilitarian morality when the stakes are high from a utilitarian point of view. 

Comment by John G. Halstead (Halstead) on My take on What We Owe the Future · 2022-09-02T13:44:53.740Z · EA · GW

This is a very thoughtful critique. What do you make of the argument that The Precipice and WWOTF work well together as a partnership, targeting different markets, and could be introduced at different stages as people get into EA?

Comment by John G. Halstead (Halstead) on Systemic Cascading Risks: Relevance in Longtermism & Value Lock-In · 2022-09-02T10:41:21.505Z · EA · GW

"Refugees: ~216 million climate refugees by 2050 (World Bank Groundswell Report) caused by droughts and desertification, sea-level rise, coastal flooding, heat stress, land loss, and disruptions to natural rainfall patterns"

The groundswell report is about voluntary internal migration, so it is not about refugees, who are typically defined as involuntarily displaced people crossing national borders. 

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-09-01T13:06:43.216Z · EA · GW

I thought that was what was meant by AGI? I agree that the operationalisation doesn't state that explicitly, but I thought it was implied. Do you think I should change it in the report?

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-29T11:40:33.029Z · EA · GW

Which impacts do you think I have missed? Can you explain why the perspective you take would render any of my substantive conclusions false?

I'm not sure what you're talking about with self-citation. When do I cite myself?

Another way to look at it is to think about the impacts including in climate-economy models. Takakura et al (2019), which is one of the more comprehensive, includes:

  • Fluvial flooding
  • Coastal inundation
  • Agriculture
  • Undernourishment
  • Heat-related excess mortality
  • Cooling/heating demand
  • Occupational-health costs
  • Hydroelectric generation capacity
  • Thermal power generation capacity

I discuss all of those except cooling/heating demand and hydro/thermal generation capacity, as they seem like small factors relative to climate risk. In addition to that, I discuss tipping points, runaway greenhouse effects, crime, civil and interstate conflict, ecosystem collapse. 

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-28T17:58:16.823Z · EA · GW

Oh sorry, I thought you meant 'did they leave negative comments about these things'. Lots of people looked at the overall report and were free to point out things I missed.

I still don't really understand why you have such an issue with the methodology. I took my methodology to be - pick out all of the things in the climate literature that are relevant to the longtermist import of climate change, review the scientific literature on those things, and then arrive at my own view, send it to reviewers, make some revisions, iterate. 

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-28T17:53:09.222Z · EA · GW

I think it might help to make this discussion more concrete if you gave an example of what you mean by a cascading risk. It's hard to defend the arguments in the report when I'm not sure what you are saying I have missed in my analysis. I talk about risks to the food system, and the spillover effects that might come from that (eg conflict), I talk about purported effects on crime, I talk about drought, I talk about tipping points etc. What is the causal story you have in mind?

The substantive discussion outlines all of the various impacts that I have discussed and summarises the literature on economic costs, which tends to find that the costs of 4ºC are on the order of 5% of GDP. Unless something is radically missing from these analyses, I'm not sure how climate change could make a large difference to the chance of recovery from collapse. 

"How about a scenario where a multitude of factors eg climate related damages, civil conflict, interstate conflict, bioweaponary, natural disasters and economic collapse all work in concert with each other?"

I discuss the potential impacts of climate damages, civil conflict, interstate conflict and the economic impact of climate change at considerable length. Even if these all work in concert with each other, my substantive conclusion is unaffected. I also explicitly discuss the possibility that climate change will cause the use of bioweapons in the report. 

"What I am trying to suggest is that by shutting down the possibilities to bio and nuclear war, you reduce the role that climate change could play in bringing about collapse."

I don't shut down the possibility, I argue against it at considerable length. 

"However, I really think a lot of your key ideas could do with better citation/definition. I also think that ignoring a lot of these concepts which are common in the literature, and then putting the burden on me to hash out the arguments in a comment on my weekend, rather than actually addressing these concepts, even if you were to reject them, in your 400 page report, is a little odd."

You have made a series of conceptual criticisms of the report. I have said that my conceptual approach is exhaustive, which is true, but you seem to think this is unsatisfactory. I don't think it is unreasonable for you to explain to me what you think I have missed.

A direct climate impact is an impact of climate change for which the proximal cause of the damage is not human-on-human interaction. An example would be something like heat stress deaths or crop failures from drought. An indirect climate impact is an impact for which the proximate cause is human-on-human damage, but for which the ultimate cause is climate change. Examples would be crime, conflict, or undermined institutions.  

"Saying something is the most obvious isn't evidence or a justification. Your report is 400 pages long, I am pretty sure you have space to justify this core part of your methodological approach. Also, just because one thing is the "most" obvious doesn't mean others aren't worthy of consideration."

I think appeals to common sense and what is obvious are often permissible in arguments. Which indirect effect do you think is more important?

I was expecting the discussion to be more like 'here is why you are wrong about emissions/climate sensitivity/runaway greenhouse/impacts on the food system/impacts on conflict/...' 

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-28T17:17:26.848Z · EA · GW

No I didn't get any of that. I don't want to put words in their mouths, but Peter overall seemed very positive. I'm less sure what Goodwin and James thought, but they didn't say anything massively negative, though perhaps they thought it

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-28T10:34:05.943Z · EA · GW

I think this is a good place to have discussions about claims in specific sections (rather than the whole report) if people would like

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-28T10:32:21.927Z · EA · GW

Hi Noah, 

(Just noting that I'm not ignoring your comments about methane clathrates, but I don't think you were asking for a response there, but were instead just highlighting some issues for you to look into? Correct me if I'm wrong)

Yes I note that there is deep uncertainty about sea level rise once warming passes 3ºC and that sea level rise might be much higher than estimated. I discuss the impacts this might have in the sea level rise section and the economic costs section

I agree that many specific tipping points haven't made their way into IPCC models

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-28T10:28:03.430Z · EA · GW

The model should be shared now. 

Yes that is correct re my assessment of the other existential risks. I'm taking a view similar to Toby Ord and I suppose the rest of the EA community about where the main risks are. Of course, my main goal in the report is not to make this substantive case; I largely take it as given. 

I don't really see how viewing climate change as a cascading risk would change the overall risk assessment. If you argue that climate change is a large cascading risk then you would have to think that climate would play an important role in starting the cascade from collapse to extinction. I don't see how it could do that and explain why at length in the report. Can you lay out a concrete scenario that sketches this cascading risk worry that isn't already discussed in the report?

The report does suggest that climate change would make civilisational recovery harder but for plausible levels of warming, it would not be a large barrier to recovery and this should be clear from the substantive discussion in the report

Moreover, you suggest that "there is some chance of civilisational collapse due to nuclear war or engineered pandemics," essentially suggesting that other causes of civilisational collapse that are less direct, and therefore could be made more likely due to climate change, are negligible. This assumption should be stated and evidenced, and yet you seem to include no sources on this. 

The whole report is about whether climate change could lead to civilisational collapse or something close to it. What other mechanisms do you have in mind that are not already discussed in the report?

The influence of climate change on great power war or war more generally seems like the most obvious indirect risk of climate change that could make a substantial difference to the scale of climate risk. It is often argued that climate change is a threat multiplier for conflict risk. I discuss the literature on this at length. What other indirect risks do you think might be comparably important?

It does seem that you think that viewing climate change as a cascading risk would make a large difference to my conclusion. I don't understand what you think this cascading risk actually is that is not already discussed in the report. 

I'm not sure which comment you are referring to? I argued that the direct/indirect approach is conceptually exhaustive, which is trivially true. 

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-28T10:11:24.783Z · EA · GW

The reviewers for each section were as follows. Josh Horton reviewed a section on solar geoengineering which I am still in the process of revising for a later version. 

Peter Watson, Goodwin and James Ozden provided comments on various sections in the report. 

Without wanting to pre-empt the reviewer comments (if I am allowed to provide them), there was broad agreement with what I had written, and I accepted the vast majority of proposed revisions. I think the main disagreement was that Keith Wiebe disagreed with some of my claims about extreme warming and agriculture. Danny Bressler may have moderately disagreed with my assessment of Burke et al (2015), but I'm not completely sure.  

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-28T09:58:15.989Z · EA · GW

I will ask the experts if I can share their feedback. I did ask a couple of them to do this, but after a long review process they didn't respond, so I decided not to ask the other experts if I could share theirs: I thought it would be weird to have comments on some parts but not others. Maintaining the experts' interest in the process can be difficult because they have sunk a lot of time into reviewing the report and have other things to do, so there is a risk of over-asking and them not wanting to engage any more. 

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-28T09:53:18.352Z · EA · GW

The forum did offer the chance to have agree/disagree voting on the post; I just forgot to respond. I think it is a beta feature, but I'm happy for it to be used on this post.

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-28T09:52:36.684Z · EA · GW

These are good points; I will amend.

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-27T20:56:03.374Z · EA · GW

I think this is fair. I shouldn't have done it and am sorry for doing so

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-27T07:01:13.818Z · EA · GW

I just assumed you were Cremer because you kept citing all of her work when it didn't seem very relevant. 

Perhaps the authors of the paper would like to share how much Torres contributed to that paper and how that might have influenced its reception.

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-27T06:59:27.047Z · EA · GW

Fair enough on your Takakura point, I misread. 

I'm not sure I understand your second comment. 'hingey' means that we are living at the most influential time ever. This includes things like value change around slavery. 

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-27T06:52:38.798Z · EA · GW

It might be good to zoom out here and get a sense of what the criticism is. I am being criticised for not citing four papers. One of them is by you and Kemp, is not peer-reviewed, and is not primarily about climate change. Another is Kemp et al (2022), which was published two weeks before my report, so I didn't have time to include discussion of it. The remaining papers I am being criticised for not mentioning are Beard et al and Richards et al. If you want to explain to me why the points they raise are not addressed in my report, I would be happy to have that discussion. 

The Jehn et al papers make claims which are wrong. It is blatantly not true to anyone who knows anything about climate change that the climate science literature ignores warming of more than 3ºC. 

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-27T06:42:41.221Z · EA · GW

I think some of the criticism of your paper with Kemp was due to it being co-authored with Phil Torres, who has harassed and defamed many people (including me) because he thinks they have frustrated his career aims

Comment by John G. Halstead (Halstead) on Climate Change & Longtermism: new book-length report · 2022-08-26T23:54:47.678Z · EA · GW

Hi Zoe, what is your proposed alternative to a karma system?