Posts

We are fighting a shared battle (a call for a different approach to AI Strategy) 2023-03-16T14:37:16.832Z
Two important recent AI Talks- Gebru and Lazar 2023-03-06T01:30:29.849Z
RESILIENCER Workshop Report on Solar Radiation Modification Research and Existential Risk Released 2023-02-03T18:58:26.459Z
Diversity In Existential Risk Studies Survey: SJ Beard 2022-11-25T16:29:11.191Z
Beyond Simple Existential Risk: Survival in a Complex Interconnected World 2022-11-21T14:35:41.920Z
Some important questions for the EA Leadership 2022-11-16T17:10:26.505Z
What 80000 Hours gets wrong about solar geoengineering 2022-08-29T13:24:20.542Z
Ok Doomer! SRM and Catastrophic Risk Podcast 2022-08-20T12:22:41.149Z

Comments

Comment by Gideon Futerman on Exploring Metaculus’ community predictions · 2023-03-24T15:48:00.833Z · EA · GW

Thanks for this. What does this data look like further out from resolution for community predictions?

Comment by Gideon Futerman on Exploring Metaculus’ community predictions · 2023-03-24T11:48:25.401Z · EA · GW

What would the Brier score be if it only involved forecasts made significantly far from the event's resolution (say 6 months, 1 year, or 2 years out)?
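(For readers unfamiliar with the metric: the Brier score is just the mean squared error between forecast probabilities and binary outcomes. Below is a minimal sketch of computing it stratified by forecast lead time; the column names and data are hypothetical placeholders, not Metaculus' actual data schema.)

```python
import pandas as pd

# Hypothetical forecast data: the probability given, the binary outcome,
# and how many days before resolution the forecast was made.
forecasts = pd.DataFrame({
    "prob": [0.9, 0.7, 0.4, 0.8, 0.2, 0.6],
    "outcome": [1, 1, 0, 1, 0, 1],
    "days_to_resolution": [10, 30, 200, 400, 800, 900],
})

# Bucket forecasts by lead time: <6 months, 6-12 months, 1-2 years, 2+ years.
bins = [0, 182, 365, 730, float("inf")]
labels = ["<6m", "6-12m", "1-2y", "2y+"]
forecasts["lead_time"] = pd.cut(forecasts["days_to_resolution"], bins=bins, labels=labels)

# Brier score per bucket = mean of (forecast probability - outcome)^2.
forecasts["sq_err"] = (forecasts["prob"] - forecasts["outcome"]) ** 2
print(forecasts.groupby("lead_time", observed=True)["sq_err"].mean())
```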

Comment by Gideon Futerman on We are fighting a shared battle (a call for a different approach to AI Strategy) · 2023-03-18T00:57:32.512Z · EA · GW

A number of things. Firstly, this criticism may be straightforwardly correct; it may be that we are pursuing something without historical precedent (I'm less convinced, e.g. bioweapons regulation); nonetheless, other approaches to TAI governance seem similarly unprecedented (e.g. trusting one actor to develop a transformative and risky technology and not use it for ill). It may indeed require such change, or at least a change in perception of the potential and danger of AI (which is possible). Secondly, this may not be the case. Foundation models (our present worry) may be no more (or even less) useful in military contexts than narrow systems. Moreover, foundation models developed by private actors seem to challenge state power in a way that neither the Chinese government nor the US military is likely to accept. Thus, AI development may continue without dangerous model growth. Finally, very little development of foundation models is driven by military actors, and the actors that do develop them may be construed as legitimately trying to challenge state power. If we are on a path to TAI (we may not be), then it seems that in the near term only a very small number of actors, all private, could develop it. Maybe the US military could gain the capacity to, but at the moment that seems hard for them to do.

Comment by Gideon Futerman on We are fighting a shared battle (a call for a different approach to AI Strategy) · 2023-03-16T18:20:29.736Z · EA · GW

Just quickly on that last point: I recognise there is a lot of uncertainty (hence the disclaimer at the beginning). I didn't go through the possible counterarguments because the piece was already so long! Thanks for your comment though, and I will get to the rest of it later!

Comment by Gideon Futerman on We are fighting a shared battle (a call for a different approach to AI Strategy) · 2023-03-16T16:11:26.947Z · EA · GW

'Expected harm can still be much lower': this may be correct, but I'm not convinced it's orders of magnitude lower. It also hugely depends on one's ethical viewpoint. My argument here isn't that this difference doesn't matter under all ethical theories (it obviously does), but that it matters very little to the actions of my proposed combined AI Safety and Ethics knowledge network. I think this answers your second point as well; I am addressing this call to people who broadly think that on the current path, risks are too high. If you think we are nowhere near AGI and that near-term AI harms aren't that important, then this essay simply isn't addressed to you.

I think this is the core point I'm making. It is not that the stochastic parrots vs superintelligence distinction is necessarily irrelevant if one is deciding for oneself whether to care about AI. However, once one thinks that the dangers of the status quo are too high, for whatever reason, the distinction stops mattering very much.

Comment by Gideon Futerman on We are fighting a shared battle (a call for a different approach to AI Strategy) · 2023-03-16T15:05:51.908Z · EA · GW

I think this is a) not necessarily true (as shown in the essay, it could still lead to existential catastrophe, e.g. by integration with nuclear command and control) and b) if we ought to all be pulling in the same direction against these companies, why is the magnitude difference relevant? Moreover, your claims would suggest that the AI Ethics crowd would be more pro-AI development than the AI Safety crowd. In practice, the opposite is true, so I'm not really sure why your argument holds.

Comment by Gideon Futerman on Racial and gender demographics at EA Global in 2022 · 2023-03-12T14:32:39.114Z · EA · GW

I'd be pretty interested in this as well, particularly age. I feel political orientation may be a little harder to collect, as what these terms mean differs between countries, although there could still be ways to deal with this.

Comment by Gideon Futerman on Anthropic: Core Views on AI Safety: When, Why, What, and How · 2023-03-10T11:22:04.385Z · EA · GW

This would surprise me, given the funding available to Anthropic anyway?

Comment by Gideon Futerman on Continuing the discourse on "Doing EA Better": A Response to Ozy's Thoughts · 2023-03-10T11:17:58.781Z · EA · GW

I found it strange that Ozy's piece implied the homogeneity section was trying to say that any of those traits were bad, particularly because ConcernedEAs said the 'Sam' description fit them all very well. It seems really strange, therefore, to suggest they think neurodivergent people are somehow bad. Also, I found the implication that saying the average EA is culturally Protestant is antisemitic a little bit bizarre, and I would quite like Ozy to justify this a bit more.

Comment by Gideon Futerman on Anthropic: Core Views on AI Safety: When, Why, What, and How · 2023-03-09T17:53:00.627Z · EA · GW

If leading AI labs (OpenAI/DeepMind) shut down, or adopted voluntary policies to slow progress to AGI (limits on the number of parameters in models, etc.), would Anthropic follow (I assume you would)? Do you see yourself in a race that you must win, such that if the other parties dropped out you'd still want to achieve AGI, or are you only at the frontier to keep doing up-to-date research?

Comment by Gideon Futerman on Two important recent AI Talks- Gebru and Lazar · 2023-03-06T02:14:49.384Z · EA · GW

If you watch from the point I suggest in the link, I think it's less bad than you make out.

Comment by Gideon Futerman on Call to demand answers from Anthropic about joining the AI race · 2023-03-05T01:22:40.379Z · EA · GW

Nah, it seems like maybe I was wrong. If so, apologies, OP!

Comment by Gideon Futerman on Call to demand answers from Anthropic about joining the AI race · 2023-03-05T00:59:28.774Z · EA · GW

Ah, apologies, my mistake; I didn't know. It was possibly wrong of me to assume this was in bad faith, and I definitely don't want to tell trans people how to refer to themselves.

Comment by Gideon Futerman on Call to demand answers from Anthropic about joining the AI race · 2023-03-03T11:17:53.341Z · EA · GW

Can you please take this comment down or edit it, given you have inexplicably used a slur (not that there is ever a good context)?

Comment by Gideon Futerman on What happened to the Future of Humanity Foundation? · 2023-03-02T00:27:09.168Z · EA · GW

Yes, Anders and Toby organise it, and the sessions have between 4 and 15 attendees. Plus they have had external people run sessions recently (I ran one, and SJ Beard from CSER ran one).

Comment by Gideon Futerman on What happened to the Future of Humanity Foundation? · 2023-02-28T23:30:48.799Z · EA · GW

Big-picture salons (basically seminars) happen every week.

Comment by Gideon Futerman on We are incredibly homogenous · 2023-02-24T16:47:11.252Z · EA · GW

I'm not sure at all how Doing EA Better is 'quite anti-semitic', and I certainly think accusations of antisemitism shouldn't just be thrown around, particularly given how common a problem antisemitism actually is. I certainly don't see how a rather amusing stereotypical description of EAs as 'culturally Protestant' is antisemitic; whilst there are lots of us Jews in EA, I'm not sure I find it at all offensive not to be mentioned!

I also strongly disagree that Doing EA Better suggests having lots of Sams in it is bad (hell, they say 'Several of the authors of this post fit this description eerily well'), so I'm not sure the accusations of, say, being anti-neurodivergent people or antisemitic really hold much water. I also don't get how 'eats a narrow range of vegan ready meals' becomes 'thinks being vegan is bad'; it reads to me like a comment on how culturally homogenous we are that Huel, Bol, Planty etc. could be a cultural thing, rather than all the other vegan foods out there.

Comment by Gideon Futerman on What happened to FHI's Research Scholars Program? · 2023-02-22T21:40:16.546Z · EA · GW

As far as I understand it (I'm not at FHI, by the way), a lot of the issues at FHI are basically due to differences between FHI and the University/Philosophy Faculty, plus changing faculty heads etc., meaning that AFAIK FHI aren't hiring anyone new right now.

Comment by Gideon Futerman on FYI there is a German institute studying sociological aspects of existential risk · 2023-02-13T16:23:49.719Z · EA · GW

The distinction between scientists and philosophers of science doesn't seem massively apt. Their work is primarily critical, similar to the work of sociologists of science or STS scholars rather than philosophers of science.

Comment by Gideon Futerman on Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA ~48 hours · 2023-02-06T20:33:46.198Z · EA · GW

This is an absolutely amazing project, thanks so much! Are you intending to evaluate 'canonical' non-peer-reviewed EA work?

Comment by Gideon Futerman on Celebrating EAGxLatAm and EAGxIndia · 2023-01-26T22:24:03.962Z · EA · GW

This is so amazing, congratulations to everyone involved! It makes me very proud to see these conferences organised around the world!

Comment by Gideon Futerman on Excerpts from "Doing EA Better" on x-risk methodology · 2023-01-26T11:06:47.737Z · EA · GW

I think something that's important here is that indirect arguments can show that, given other approaches, you may come to different conclusions; not just on the prioritisation of 'risks' (I hate using that word!), but also on the techniques to reduce them. For instance, I still think that AI and biorisk are extremely significant contributors to risk, but I would probably take pretty different approaches to how we deal with them if I tried to take these more indirect criticisms of the methodologies used into account.

Comment by Gideon Futerman on 2022 EA conference talks are now live · 2023-01-18T19:58:39.926Z · EA · GW

When are the EAGx Rotterdam talks going up?

Comment by Gideon Futerman on Doing EA Better · 2023-01-18T17:34:37.718Z · EA · GW

I guess I'm a bit skeptical of this, given that Buck has said this to weeatquince: "I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant. In my opinion, it is fair game for me to make truthful comments that cause people to feel less incentivized to write posts like this one (or yours) in future".

Comment by Gideon Futerman on Doing EA Better · 2023-01-18T17:12:29.188Z · EA · GW

Yes I think this is somewhat true, but I think that this is better than the status quo of EA at the moment.

One thing to do, which I am trying to do, is actually get more domain experts involved in things around EA and talk to them more about how this stuff works. Rather than deferring to anonymous ConcernedEAs or to a small group of very powerful EAs on this, we should actually try to build a diverse epistemic community with many perspectives involved, which is what I interpret as the core claim of this manifesto.

Comment by Gideon Futerman on Doing EA Better · 2023-01-18T16:53:10.898Z · EA · GW

I think to some extent this is fair. This strikes me as a post put together by non-experts, so I wouldn't be surprised if there are aspects of the post that are wrong. The approach I've taken is to treat this as a list of possible criticisms, one that probably contains a number of issues. The idea is to steelman the important ones and reject the ones we have reason to reject, rather than rejecting the whole. I think it's fair to have more scepticism though, and I certainly would have liked a fuller bibliography, with experts on every area weighing in, but I suspect the 'ConcernedEAs' probably didn't have the capacity for this.

Comment by Gideon Futerman on Doing EA Better · 2023-01-18T07:34:17.061Z · EA · GW

Thanks for this, and on reading other comments etc, I was probably overly harsh on you for doing so.

Comment by Gideon Futerman on Doing EA Better · 2023-01-18T01:23:01.350Z · EA · GW

Maybe you're correct, and that's definitely how I interpreted it initially, but Buck's response to me gave a different impression. Maybe I'm wrong, but it just strikes me as a little strange that, if Buck feels they have considered these ideas and basically rejected them, they would want to suggest to this bunch of concerned EAs how to better push for the ideas that Buck disagrees with. Maybe I'm wrong or have misinterpreted something though; I wouldn't be surprised.

Comment by Gideon Futerman on Doing EA Better · 2023-01-18T01:13:58.426Z · EA · GW

I think I basically agree here, and I think it's mostly about balance; criticism should, I think, be seen as pulling in a direction rather than wanting to go all the way to an extreme (although there definitely are people who want that extreme, who I strongly disagree with!). On AI safety, I think in a few years it will look like EA overinvested in the wrong approaches (i.e. helping OpenAI and not opposing capabilities). I agree the post sees voting/epistemic democracy in too rosy a way. On the other hand, a philosopher of science I know has told me that x-risk was the most hierarchical field they'd seen. Moreover, I think democracy can come in gradations, and I don't think EA will ever be perfect. On your point about youth, I think that's interesting. I'm not sure the current culture would necessarily allow it though, with many critical EAs I know essentially scared to share criticism, having been sidelined by people with more power who disagree, or having had the credit for their achievements taken by people more senior, making it harder for them to have the legitimacy to push for change. This is why I like the cultural points this post makes, as it does seem we need a better culture to achieve our ideals.

Comment by Gideon Futerman on Doing EA Better · 2023-01-18T01:03:38.090Z · EA · GW

Sure!

Comment by Gideon Futerman on Doing EA Better · 2023-01-18T00:55:47.146Z · EA · GW

I hate to say it, but I'm really quite sure Emile wouldn't write a critique like this; it really doesn't read at all like them. They also have a knack for being very public when writing critiques, and even tell people in advance. Their primary audience is certainly not EA. But I agree that if people are worried, then using a throwaway email is good practice!

Comment by Gideon Futerman on Doing EA Better · 2023-01-18T00:51:06.411Z · EA · GW

I'm doing a project on how we should study xrisk, and I'd love to talk to you about your risk management work etc. Would you be up for a call?

Comment by Gideon Futerman on Doing EA Better · 2023-01-18T00:17:04.574Z · EA · GW

I think this is pretty interesting, and thanks for sharing your thoughts! There are things here I agree with and things I disagree with, and I might say more when I'm on my computer rather than my phone! However, I'd love to have a call about this to talk more.

Comment by Gideon Futerman on Doing EA Better · 2023-01-18T00:15:14.986Z · EA · GW

One thing I think is that decentralised funding will probably also make things like the FLI affair more likely. On the other hand, if this is happening already, and there are systematic biases anyway, and there is a reduction in creativity, it's a risk I'm willing to take. Lottery funding and breaking up funders into a few more bodies (e.g. 5-10 rather than roughly the same 2 or so) is what I'm most excited about, as they seem to reduce some of the risk whilst keeping a lot of the benefits.

Comment by Gideon Futerman on Doing EA Better · 2023-01-18T00:10:57.224Z · EA · GW

Hi Sanjay, I'm actually working on a project on pluralism in X-Risk and what fields may have something to add to the discussion. Would you be up for a chat, or could you put me in contact with people who would be up for a chat with me, about lessons that can be learned from actuarial studies / Solvency II?

Comment by Gideon Futerman on Doing EA Better · 2023-01-18T00:07:13.635Z · EA · GW

There is; it should be on the CEA YouTube channel at some point. It is also a forum post: https://forum.effectivealtruism.org/posts/cXH2sG3taM5hKbiva/beyond-simple-existential-risk-survival-in-a-complex#:~:text=It sees the future as,perhaps at least as important.

Comment by Gideon Futerman on Doing EA Better · 2023-01-18T00:04:42.057Z · EA · GW

I'd be pretty interested in you laying out in depth why you have basically decided to dismiss this very varied and large set of arguments. (Full disclosure: I don’t agree with all of them, but in general I think they're pretty good.) A self-admitted EA leader posting a response poo-pooing a long, thought-out criticism with very little argumentation, mostly criticising it on tangential ToC grounds (a ToC which you don't think will, or want to, succeed anyway?), seems like it could be construed as pretty bad faith and problematic. I don’t normally reply like this, but I think your original reply has essentially tried to play the man and not the ball, and I would expect better from a self-identified 'central EA' (not saying this is some massive failing, and I'm sure I've done similar myself a few times).

Comment by Gideon Futerman on Doing EA Better · 2023-01-17T23:56:00.810Z · EA · GW

So OpenPhil is split into different teams, but I'll focus specifically on their grants in XRisk/Longtermism. OpenPhil, either directly or indirectly, is essentially the only major funder of XRisk; most other funders essentially follow OpenPhil. Even though I think they are very competent, the fact that the field has one monolithic funder isn't great for diversity and creativity; certainly I've heard a philosopher of science describe XRisk as one of the most hierarchical fields they have seen, in large part due to this. OpenPhil/Dustin Moskovitz have assets. They could break up into a number of legal entities with their own assets, some overlapping on cause area (e.g. 2 or 3 XRisk funders). You would want them to be culturally different: work from different offices, have people with different approaches to XRisk, etc. This could really help reduce the hierarchy and lack of creativity in this field. Some other funding ideas/structures are discussed here: https://www.sciencedirect.com/science/article/abs/pii/S0039368117303278

Comment by Gideon Futerman on Doing EA Better · 2023-01-17T23:46:01.523Z · EA · GW

One thing I think this comment ignores is just how many of the suggestions are cultural, and thus do need broad communal buy-in, which I assume is why they posted this publicly. Whilst they are busy, I'd be pretty disappointed if the core EAs didn't read this and take the ideas seriously (I've tried tagging some on Twitter), and if you're correct that presenting such a detailed set of ideas on the forum is not enough to get core EAs to take the ideas seriously, I'd be concerned about where there are places for people to get their ideas taken seriously. I'm lucky: I can walk into Trajan House and knock on people's doors, but others presumably aren't so lucky, and you would hope that a forum post that generated a lot of discussion would be taken seriously. Moreover, if you are concerned about the ideas presented here not getting a fair hearing, maybe you could try raising the salient ones to core EAs in your social circles?

Comment by Gideon Futerman on Doing EA Better · 2023-01-17T23:29:32.912Z · EA · GW

This comment is pretty long, but TLDR: peer review and academia have their own problems, some similar to EA, some not. Maybe a hybrid approach works, and maybe we should consult with people with expertise in social organisation of science. 

To some extent I agree with this. Whilst I've been wanting more academic rigour in X-Risk for a while, peer review is certainly no panacea, although I think it is probably better than the current culture of deferring to blog posts as much as we do.

I think you are right that traditional academia really has its problems, and name recognition is also still an issue (e.g. Nobel Prize winners are 70% more likely to get through peer review). Nonetheless, certainly in the field I have been in (solar geoengineering), name recognition and agreement with the 'thought leaders' are definitely less incentivised than in EA.

One potential response is to strike a balance between peer review, the current EA culture and commissioned reports. We could set up an X-Risk journal with editors and reviewers who are a) dedicated to pluralism and b) willing to publish things that are methodologically sound irrespective of result. Alternatively, we could use a sort of open peer review system where pre-prints are published publicly, with reviewers' comments and then responses to these as well. However, for major decisions we could rely on reports written and investigated by a number of people. Open Phil have done this to an extent, but having broader panels etc. to do these reports may be much more useful. Certainly it's something to try.

I do think it's really difficult, but the current EA status quo is not working. Perhaps it would be good for EA to consult with some thinkers on the social organisation of science to better design how we do these things, as there certainly are people with this expertise.

And it is definitely possible to commission structured expert elicitations, and to directly fund specific bits of research.

Moreover, another thing about peer review is that it can sometimes be pretty important for policy. This is certainly the case in the climate change space, where your work won't be incorporated into UN decision-making and IPCC reports unless it is peer reviewed.

Finally, I think your point about 'agreed upon methods' is really good, and this is something I'm trying to work on in XRisk. I talk about this a little in my 'Beyond Simple Existential Risk' talk, and am writing a paper with Anders Sandberg, SJ Beard and Adrian Currie on this at present. I'd be keen to hear your thoughts on this if you're interested!

Comment by Gideon Futerman on Doing EA Better · 2023-01-17T23:17:48.611Z · EA · GW

I think this hits the nail on the head. Funding is the issue; it always is.

One thing I've been thinking about recently is that maybe we should break up OpenPhil, particularly the XRisk side (as they are basically the sole XRisk funder). This is not because I think OpenPhil is not great (AFAIK they are one of the best philanthropic funds out there), but because having essentially a single funder dictate everything that gets funded in a field isn't good, whether that funder is good or not. I wouldn't trust myself to run such a funding body either.

Comment by Gideon Futerman on Thread for discussing Bostrom's email and apology · 2023-01-15T02:14:56.082Z · EA · GW

And the equivocal nature of the apology was also bad (and perhaps more morally relevant, as it is current rather than historic)

Comment by Gideon Futerman on [Linkpost] Nick Bostrom's "Apology for an Old Email" · 2023-01-12T11:53:52.998Z · EA · GW

There is the other side as well: not only is this bad because one person expressed racist sentiment, but it is particularly bad because that person is considered a thought leader in EA, and so this could make the EA community a considerably less welcoming environment for black people (although I wouldn't like to speak on their behalf).

Comment by Gideon Futerman on What specific changes should we as a community make to the effective altruism community? [Stage 1] · 2023-01-05T17:58:02.061Z · EA · GW

Or a suggestion to open up discussion. There are many structures that would be more democratic (regranting, assembly groups of EAs, large-scale voting, hell, even random allocation), but the principle here is essentially that at the moment EA funds are far too centralised and thus we need a discussion as to how we should do things more democratically. I don’t profess to have all the answers! Moreover, I am not sure the idea that it is an Applause Light makes sense in the context of it being massively disagree-voted. That is pretty much the opposite of what you would expect!

Comment by Gideon Futerman on Beyond Simple Existential Risk: Survival in a Complex Interconnected World · 2023-01-03T02:08:57.403Z · EA · GW

Hi John, sorry this has taken a while. 

  • In particular, climate-economy models still do badly at the heavy tail, not just of warming, but of civilisational vulnerability etc., again presenting a pretty "middle of the road" rather than heavy-tailed distribution. The sort of work from Beard et al 2021, for instance, highlights something I think the models pretty profoundly miss. Similarly, I'd be really interested in research similar to Mani et al 2021 on extreme weather events and how these may change due to climate change.
  • I don't see why the models discount the idea that there is a low but non-negligible probability of catastrophic consequences from 3-4 degrees of warming. What aspect of the models? I'm reticent to rely on things like damage functions here, as they don't seem to engage with the possible heavy-tailedness of damage. Whilst I agree that the models are probably decent approximations of reality, I'm just not really very sure they are useful at telling us anything about the low-probability, high-impact scenarios that we are worried about here.
  • Whilst I agree there are reasons to think our vulnerability is less, there are clear reasons to think that, with a growing interconnected (and potentially fragile) global network and economy, our vulnerability is increasing, meaning that whilst the past collapse data might not be prophetic, there is at least value in it; after all, we are in a very evidence-poor environment, meaning that I would be reticent to dismiss it as strongly as you seem to. And whilst it is true our agricultural system is more resilient, there is still a possibility of multiple breadbasket failures etc. caused by climate change, and Beard et al and Richards et al both explore plausible pathways to this. Again, whilst the past collapse data is definitely not a slam dunk in my favour, I would at least argue it is an update nonetheless. I think you might argue that the fact that none led to human extinction makes that data an update in your direction, and I think your view on this depends on whether you see collapse, GCR and extinction as on a continuum or not; I broadly do, and I assume you broadly don't?
  • When I said one data point, I really meant one study. The reason I say this is that, as cited, there are studies of different species/species groups. In your comment, you don't seem to engage with Song et al 2021. Kaiho et al 2022 also shows a positive relationship between warming and extinction rate. Moreover, I think it takes an overly confident view of our understanding of kill mechanisms, and seems to suggest that just because we don't have all of what you speculate were the important factors present in past mass extinctions, that evidence isn't useful. I think a position like Keller et al 2018 (PETM as the best case, KPg as the worst case) is probably useful for looking at this (only using modern evidence!). Once again, this is an attempt by me, in a low-evidence situation, to make the best use of the evidence available, and I don't find your points compelling enough to convince me that this past precedent can't be informative.
  • On the Planetary Boundaries, you don't seem to be engaging with what I'm saying here, which is mostly alluding to the Baum et al paper on this. Moreover, even if you think we are to use EV, what are you basing the probabilities on? I assume some sort of subjective Bayesianism, in which case you'll have to tell me why I shouldn't put a decently high (>1%) prior on moving beyond certain Holocene boundaries posing a genuine threat to humanity. That seems perfectly reasonable to me.
  • I'm not really sure I understand the argument. Whilst in some ways the world has indeed got less vulnerable, in other ways it has got more connected and more economically vulnerable to natural disasters etc. Cascading impact seems to be seen more along these lines than along others. Moreover, if you only had a 5% probability of such a cascade occurring over a century, and we have hardly had a hyper-globalised economy for even that long, why would you expect it to have happened already? Your statements here seem pretty out of step with my actual probabilities. And as I talk about in my talk, I also see problems from AI, biorisk and a whole host more. That's why this talk, and this approach, is seriously not just about climate change; the hope is to add another approach to studying X-Risk.
  • I'm also pretty interested in your approach to evidence on X-Risk. I should say from the outset that I think climate change is unlikely to cause a catastrophe, but I don't think you have provided compelling evidence that the probability is exceptionally small. Your evidence often seems to rely on the very things that we think ought to be suspect in X-Risk scenarios (economic models, continued improved resilience, best-case scenario analogies etc.), and you seem to reject some things that might be useful for reasoning in such evidence-poor environments (plausibly useful but somewhat flawed historical analogies, foresight, storytelling, scenarios etc.). Basically, you seem to have a pretty high bar for evidence to be worried about climate change, which, whilst I think is useful in general, I'm just not sure how appropriate it is in such an evidence-poor environment as X-Risk, including climate change's contributions to it. It's pretty interesting that you seem very willing to rely on much more speculative evidence for AI and biorisk (e.g. probabilistic forecasts which don't have track records of working well over such long timescales), and I genuinely wonder why this is. Note that such more speculative approaches (in this case superforecasters) gave a 1% probability of climate change being a necessary but not sufficient cause of human extinction by 2100, and gave an even higher probability to global catastrophe by 2100, which certainly then has some probability of later leading to extinction. Whilst I myself am somewhat sceptical of such approaches, I'd be interested in seeing why you seem to accept them for bio and AI but not climate. Is it because you see evaluation of the existential risk from climate change as a much more evidence-rich environment than for bio/AI?

Comment by Gideon Futerman on [deleted post] 2022-12-25T12:38:30.869Z

Just to note, I posted a link to this survey SJ Beard is doing for their project on diversity in X-Risk (https://forum.effectivealtruism.org/posts/3iSoc7EBLQWrwGdCE/diversity-in-existential-risk-studies-survey-sj-beard). In total it only got 6 votes and ended up on 7 karma, 6 of which were due to my double upvote, so it was on 1 karma from the other 5 votes. Of course that's a very small sample size, but perhaps it says something about how we view diversity and X-Risk in the EA community (or maybe it just says something about me!)

Comment by Gideon Futerman on [deleted post] 2022-12-25T11:55:40.119Z

I can certainly say my upvote was for the former and not the latter

Comment by Gideon Futerman on Effective altruism is worse than traditional philanthropy in the way it excludes the extreme poor in the global south. · 2022-12-17T18:06:26.538Z · EA · GW

The only EA-aligned charity I can find doing anything in Africa with maize is the Food Fortification Initiative, although they don't really fit Anthony's description, so he may be referring to a different EA-aligned charity.

Comment by Gideon Futerman on How would you estimate the value of delaying AGI by 1 day, in marginal GiveWell donations? · 2022-12-16T15:49:03.713Z · EA · GW

I'm interested in why you don't think AI doom is likely, given a lot of people in the AI safety space at least seem to suggest it's reasonably likely (>10% likelihood in the next 10 or 20 years)

Comment by Gideon Futerman on Beyond Simple Existential Risk: Survival in a Complex Interconnected World · 2022-12-12T22:28:35.872Z · EA · GW

To answer each of your points in turn

  • I think it's important to note that much of the literature looking at those estimates for extreme scenarios (not just extreme levels of warming, but other facets of the extremes as well) has suggested that current techniques for calculating climate damage aren't great at the extremes, and tend to function well only when close to the status quo. So we should expect that these models don't act appropriately under the conditions we are interested in when exploring GCR/X-Risk. This has pretty commonly been discussed in the literature on these things (Beard et al 2021, Kemp et al 2022, Wagner & Weitzman 2015, Weaver et al 2010, etc.).
  • I still think past events can give us useful information. Firstly, climate change has been a contributing factor to A LOT of societal collapses; whilst these aren't perfect analogies and do show a tremendous capacity of humanity to adapt and survive, they do show the capacity of climate change to contribute to major socio-political-technological crises, which may act as a useful proxy for what we are trying to look for. Moreover, whilst a collapse isn't an extinction, if we care about existential risk we might indeed be pretty worried about collapse if it makes certain lock-ins more or less likely, but to be honest that's a discussion for another time. Moreover, whilst I think your paleoclimatic argument is somewhat reasonable, given the limited data here (and your reliance on a few data points plus a large reliance on a single study of plant diversity (which is fine, by the way, we have limited data in general!)), I don't find it hugely comforting. Particularly because climate change seems to have been a major factor in all of the big 5 mass extinction events, and because of the trends that Song et al 2021 note in their analysis of temperature change and mass extinction over the Phanerozoic. They mostly use marine animals. When dealing with past processes, explanations are obviously difficult to disentangle, so there are reasons to be sceptical of the causal explanatory power of Song's analysis, although obviously such uncertainty should also be applied to your analysis, particularly the claims of a fundamental step change 145 million years ago.
  • Whilst planetary boundaries do have their flaws, and to some degree where they are set is quasi-arbitrary, as discussed in the talk, something like this may be necessary when acting under such deep uncertainty; don't walk out into the dark forest and all that. Moreover, I think your report fails to argue convincingly against the BRIHN framework that Baum et al 2014 developed, in part in response to the Nordhaus criticisms which you cite.
  • Extreme climate change is not just RCP 8.5/SSP5-8.5; it's much broader than that. Kemp et al 2022's response to Burgess et al's comment lays out this argument decently well, as does Climate Endgame itself.
  • I don't really understand this point, particularly in response to my talk. I explicitly suggest in my talk that systemic risks, which those could all contribute to, are very important. The call for more complex risk assessment (the core point of the talk, alongside a call for pluralism) is that there are likely significant limits to conventional economic analysis in analysing complex risk. The disagreement on this entire point seems to be explained reasonably well by the difference between the simple and complex approaches.
  • I think your causal pathways are too simple and defined (i.e. they are those 1st- and 2nd-order indirect impacts), and probably don't account for the ways in which climate could contribute to cascading risk. Whilst of course this is still under-explored, some of the concepts in Beard et al 2021 and Richards et al 2021 are a useful starting place, and I don't really see how your report refutes the concepts around cascades they bring up. I'd also agree these cascades are really hard to understand, but I struggle to see how that fact acts in favour of your approach and conclusions.

I hope this has helped show some of our disagreements! :-)