Posts

International Criminal Law and the Future of Humanity: A Theory of the Crime of Omnicide 2021-03-22T12:19:51.445Z
Were the Great Tragedies of History “Mere Ripples”? 2021-02-08T14:56:23.676Z
philosophytorres's Shortform 2021-02-08T14:54:47.048Z
Is Existential Risk a Useless Category? Could the Concept Be Dangerous? 2020-03-31T16:55:10.210Z

Comments

Comment by philosophytorres on Why I find longtermism hard, and what keeps me motivated · 2021-02-26T19:13:58.276Z · EA · GW

Don't work on "longtermist" issues, please! You are very right to feel the pull of suffering right now. See this for more: https://www.xriskology.com/mini-book.

Comment by philosophytorres on Were the Great Tragedies of History “Mere Ripples”? · 2021-02-13T17:08:04.481Z · EA · GW

[Responding to Alex HT above:]

I'll try to find the time to respond to some of these comments. I would strongly disagree with most of them. For example, one that just happened to catch my eye was: "Longtermism does not say our current world is replete with suffering and death."

So, the target of the critique is Bostromism, i.e., the systematic web of normative claims found in Bostrom's work. (Just to clear one thing up, "longtermism" as espoused by "leading" longtermists today has been hugely influenced by Bostromism -- this is a fact, I believe, about intellectual genealogy, which I'll try to touch upon later.)

There are two main ingredients of Bostromism, I argue: total utilitarianism and transhumanism. The latter does indeed see our world the way many religious traditions have: wretched, full of suffering, something to ultimately be transcended (if not via the rapture or Parousia then via cyborgization and mind-uploading). This idea, this theme, is so prominent in transhumanist writings that I don't know how anyone could deny it.

Hence, if transhumanism is an integral component of Bostromism (and it is), and if Bostromism is a version of longtermism (which it is, on pretty much any definition), then the millennialist view that our world is in some sort of "fallen state" is an integral component of Bostromism -- and, by extension, present within longtermism itself -- since this millennialist view is central to the normative aspects of transhumanism.

Just read "Letter from Utopia." It's saturated in a profound longing to escape our present condition and enter some magically paradisiacal future world via the almost supernatural means of radical human enhancement. (Alternatively, you could write a religious scholar about transhumanism. Some have, in fact, written about the ideology. I doubt you'd find anyone who'd reject the claim that transhumanism is imbued with millennialist tendencies!)

Comment by philosophytorres on philosophytorres's Shortform · 2021-02-08T14:54:47.369Z · EA · GW

It's been a year, but I finally wrote up my critique of "longtermism" (of the Bostromian / Toby Ord variety) in some detail. I explain why this ideology could be extremely dangerous -- a claim that, it seems, some others in the community have picked up on recently (which is very encouraging). The book is on Medium here, and PDF/EPUB versions can be downloaded here.

Comment by philosophytorres on Clarifying existential risks and existential catastrophes · 2020-04-25T13:16:13.269Z · EA · GW

One is here: https://docs.wixstatic.com/ugd/d9aaad_64ac5f0da7ea494ab48f54181b249ce4.pdf. And my critique of the radical utopianism and valuation of imaginary lives that undergird the most prominent notion of "existential risk" today is here: https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_33466a921b2646a7a02482acb89b07b8.pdf

Comment by philosophytorres on Clarifying existential risks and existential catastrophes · 2020-04-25T13:09:59.907Z · EA · GW

Have you seen my papers on the topic, by chance? One is published in Inquiry, the other is forthcoming. Send me an email if you'd like!

Comment by philosophytorres on Is Existential Risk a Useless Category? Could the Concept Be Dangerous? · 2020-04-02T22:50:05.644Z · EA · GW

John: Do I have your permission to release screenshots of our exchange? You write: "... including persistently sending me messages on Facebook." I believe that this is very misleading.

Comment by philosophytorres on Response to recent criticisms of EA "longtermist" thinking · 2020-01-13T16:07:24.000Z · EA · GW

You don't even have the common courtesy to cite the original post so that people can decide for themselves whether you've accurately represented my arguments (you haven't). This is very typical "authoritarian" (or controlling) EA behavior in my experience: rather than give critics an actual fair hearing, which would be the intellectually honest thing, you try to monopolize and control the narrative by not citing the original source and then reformulating all the arguments while describing these reformulations as "steelmanned" versions (which some folks who give EA the benefit of the doubt might just accept), despite the fact that the original author (me) thinks you've done a truly abysmal job of accurately presenting the critique. As mentioned, this will definitely get cited in a forthcoming article; it really does embody much of what's epistemically wrong with this community.

Comment by philosophytorres on Response to recent criticisms of EA "longtermist" thinking · 2020-01-10T04:30:20.251Z · EA · GW

Your "steelmanning" is abysmal, in my opinion. It really doesn't represent the substance of my criticisms. I will definitely be citing this post in a forthcoming journal paper on the issue.

Comment by philosophytorres on Response to recent criticisms of EA "longtermist" thinking · 2020-01-09T21:50:32.182Z · EA · GW

Virtually every point here misrepresents what I wrote. I commend your take-down of various straw men, but you really did miss the main thrust (and details) of the critique. I suspect that you would (notably) fail an Ideological Turing Test.

Comment by philosophytorres on Book Review: Enlightenment Now, by Steven Pinker · 2019-01-26T22:19:45.374Z · EA · GW

Sloppy scholarship. Please do take a look, if you have a moment: https://www.salon.com/2019/01/26/steven-pinkers-fake-enlightenment-his-book-is-full-of-misleading-claims-and-false-assertions/.

Comment by philosophytorres on William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better" · 2018-11-20T03:47:31.558Z · EA · GW

As it happens, I found numerous cases of truly egregious cherry-picking, demonstrably false statements, and (no, I'm not kidding) quotes mined out of context in just a few pages of Pinker's "Enlightenment Now." Take a look for yourself. The terrible scholarship is shocking. https://docs.wixstatic.com/ugd/d9aaad_8b76c6c86f314d0288161ae8a47a9821.pdf

Comment by philosophytorres on EA Hotel with free accommodation and board for two years · 2018-06-06T18:37:43.071Z · EA · GW

Wow, this is absolutely stunning. I can't myself participate, but I genuinely hope this project takes off. I'm sure you're familiar with the famous (now demolished) Building 20 at MIT: https://en.wikipedia.org/wiki/Building_20. It provided a space for interdisciplinary work -- and wow, the results were truly amazing.

Comment by philosophytorres on What does Trump mean for EA? · 2016-11-14T22:25:34.294Z · EA · GW

Friends: I recently wrote a few thousand words on the implications that a Trump presidency will have for global risk. I'm fairly new to this discussion group, so I hope posting the link doesn't contravene any community norms. Really, I would eagerly welcome feedback on this. My prognosis is not good.

https://medium.com/@philosophytorres/what-a-trump-presidency-means-for-human-survival-one-experts-take-ed26bf9f9a21

Comment by philosophytorres on Some considerations for different ways to reduce x-risk · 2016-11-05T15:55:28.566Z · EA · GW

A fantastically interesting article. I wish I'd seen it earlier -- around the time this was published (last February) I was completing an article on "agential risks" that ended up in the Journal of Evolution and Technology. In it, I distinguish between "existential risks" and "stagnation risks," each of which corresponds to one of the disjuncts in Bostrom's original definition. Since these have different implications -- I argue -- for understanding different kinds of agential risks, I think it would be good to standardize the nomenclature. Perhaps "population risks" and "quality risks" are preferable (although I'm not sure "quality risks" and "stagnation risks" have exactly the same extension). Thoughts?

(Btw, the JET article is here: http://jetpress.org/v26.2/torres.pdf.)

Comment by philosophytorres on Two Strange Things About AI Safety Policy · 2016-11-05T01:01:51.529Z · EA · GW

Oh, I see. Did they not ask for his approval? I'm familiar with websites devising their own outrageously hyperbolic headlines for articles authored by others, but I genuinely assumed that a website as reputable as Slate would have asked a figure as prominent as Bostrom for approval. My apologies!

Comment by philosophytorres on The Map of Impact Risks and Asteroid Defense · 2016-11-05T00:44:46.087Z · EA · GW

Very interesting map. Lots of good information.

Comment by philosophytorres on Two Strange Things About AI Safety Policy · 2016-10-02T02:04:22.568Z · EA · GW

How about this for AI publicity, written by Nick Bostrom himself: "You Should Be Terrified of Superintelligent Machines," via Slate!

http://www.slate.com/articles/technology/future_tense/2014/09/will_artificial_intelligence_turn_on_us_robots_are_nothing_like_humans_and.html