[Link] The option value of civilization

2019-01-06T09:58:17.919Z · score: 0 (3 votes)
Comment by technicalities on Existential risk as common cause · 2018-12-09T15:29:13.032Z · score: 3 (2 votes) · EA · GW

No idea, sorry. I know CSER have held at least one workshop about Trump and populism, so maybe try Julius Weitzdoerfer:

[Trump] will make people aware that they have to think about risks, but, in a world where scientific evidence isn't taken into account, all the threats we face will increase.
Comment by technicalities on Existential risk as common cause · 2018-12-09T15:17:36.405Z · score: 1 (1 votes) · EA · GW

You're right. I think I had in mind 'AI and nanotech' when I said that.

Comment by technicalities on Existential risk as common cause · 2018-12-08T11:40:01.325Z · score: 4 (3 votes) · EA · GW

I haven't read much deep ecology, but I model them as strict anti-interventionists rather than nature maximisers (or satisficers): isn't it that they value whatever 'the course of things without us' would be?

(They certainly don't mind particular deaths, or particular species extinctions.)

But even if I'm right about that, you're surely right that some would bite the bullet if universal extinction were threatened. Do you know any people who accept that maintaining a 'garden world' is implied by valuing nature in itself?

Comment by technicalities on Existential risk as common cause · 2018-12-08T11:33:47.739Z · score: 1 (1 votes) · EA · GW

Good point, thanks. It's definitely not a knock-down argument.

Comment by technicalities on Existential risk as common cause · 2018-12-08T11:31:54.767Z · score: 2 (2 votes) · EA · GW
what "confidence level" means

Good question. To be honest, it was just me intuiting the chance that all of the premises and exemptions are true, which maybe cashes out to your first option. I'm happy to use a conventional measure, if there's a convention on here.

I'd also invite people who disagree to comment.

something like "extinction is less than 1% likely, not because..."

Interesting. This neatly sidesteps Ord's argument (that a low extinction probability implies a proportionally higher expected value for the future), which I just added above.
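(To make that argument concrete, here is one toy way to cash it out; the constant-hazard model is my simplification, not necessarily Ord's framing. Assume a constant per-century extinction probability $p$ and a value $v$ per surviving century. The expected value of the future is then

$$V(p) = \sum_{t=1}^{\infty} v\,(1-p)^{t} = v\,\frac{1-p}{p} \approx \frac{v}{p} \quad \text{for small } p,$$

so a low $p$ means a proportionally large $V$, and a given absolute reduction $\delta$ in $p$ is worth roughly $v\delta/p^{2}$, which grows as $p$ shrinks.)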

Another objection I missed, which I think is the clincher inside EA, is a kind of defensive empiricism, e.g. Jeff Kaufman:

I'm much more skeptical than most people I talk to, even most people in EA, about our ability to make progress without good feedback. This is where I think the argument for x-risk is weakest: how can we know if what we're doing is helping...?

I take this very seriously; it's why I focus on the ML branch of AI safety. If there is a response to this (excellent) philosophy, it might be that it's equivalent to risk aversion (the bad kind) somehow. Not sure.

Existential risk as common cause

2018-12-05T14:01:04.786Z · score: 24 (22 votes)
Comment by technicalities on The Effective Altruism Newsletter & Open Thread – September 2016 · 2016-10-16T16:11:03.340Z · score: 2 (2 votes) · EA · GW

Hello ChemaCB,

I had a look around and couldn't find many full peer-reviewed models. (Yet: it's a young endeavour.) This is probably partly a principled reaction to the hard limits of solely quantitative approaches. Most researchers in the area explicitly call their work "shallow investigation", i.e. exploratory and pre-theoretical. To date, the empirical FHI papers tend to be piecemeal estimates and early methodological innovation rather than full models. OpenPhil tends towards soliciting priors from experts, and has tackled causes one at a time so far. GiveWell's evaluations are all QALY-based and piecemeal, though there's non-core formal material on there too.

There's hope, though: what modelling has been done uses economic methods. Michael Dickens has built a model which strikes me as an excellent start, but it's unlikely to win over sceptical institutional markers, because it is ex nihilo and doesn't cite anyone. (C++ code here, including weights.) Peter Hurford lists many individual empirical models in footnote 4 here. Here's Gordon Irlam's less formal one, with a wider perspective. Here's a more formal one just for public policy.
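To give a flavour of what these models do, here is a minimal sketch of a weighted expected-value cause comparison in the style of Dickens' model. The cause names, probabilities, and figures are all hypothetical, and this is a structural illustration only, not his actual code (which is C++):

```python
# Minimal sketch of a weighted expected-value cause comparison.
# All cause names, probabilities, and figures below are hypothetical.

causes = {
    "cause_A": {"cost": 1_000.0, "p_success": 0.90, "good_if_success": 50.0},
    "cause_B": {"cost": 1_000.0, "p_success": 0.01, "good_if_success": 10_000.0},
}

def expected_good_per_dollar(c):
    """Expected units of good per dollar, ignoring model uncertainty."""
    return c["p_success"] * c["good_if_success"] / c["cost"]

# Rank causes by expected cost-effectiveness.
ranking = sorted(causes, key=lambda name: expected_good_per_dollar(causes[name]), reverse=True)
for name in ranking:
    print(f"{name}: {expected_good_per_dollar(causes[name]):.3f} expected units of good per dollar")
```

The interesting (and contested) work is in choosing the weights and handling model uncertainty; the arithmetic itself is trivial, which is partly why these models struggle for academic credibility.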

To win them over, you could frame it as "social choice theory" rather than cause prioritisation. So for the goal of getting academic approval, Sen, Binmore and Broome are your touchstones here, rather than Cotton-Barratt, Beckstead, and Christiano.

Your particular project proposal seems like an empirical successor to MacAskill's PhD thesis; I'd suggest looking for leads directly in the bibliography there.

I hope you see the above as evidence for the importance of your proposed research, rather than a disincentive to doing it.

Also, welcome!

Comment by technicalities on The Effective Altruism Newsletter & Open Thread – August 2016 · 2016-08-11T16:20:21.585Z · score: 0 (0 votes) · EA · GW

Project seeking analysis and developers:

An app for routing recurring meat offset payments to effective animal orgs. I've made a hackpad for it here.

Big questions for debate here:

  • Is this even an effective way of promoting animal welfare?
    a) Would its absolute effect be positive?
    b) If not, is building it less bad than the counterfactual of an ineffective animal org doing it?
  • Anyone have any inkling of how to estimate the moral licensing cost to animal welfare?
  • Is anyone doing this already?
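For concreteness, here is a toy sketch of the payment-sizing calculation such an app might do. Every figure below is a made-up placeholder, and the real conversion factors are exactly the open empirical questions above:

```python
# Toy sketch of sizing a monthly meat-offset donation.
# All figures are hypothetical placeholders, not real effectiveness estimates.

MEAT_MEALS_PER_MONTH = 30        # user-reported consumption (hypothetical)
ANIMALS_PER_MEAL = 0.5           # hypothetical conversion factor
DOLLARS_PER_ANIMAL_SPARED = 2.0  # hypothetical charity cost-effectiveness figure

def monthly_offset(meals: int, animals_per_meal: float, cost_per_animal: float) -> float:
    """Donation sized to offset the estimated animal impact of a month's meals."""
    return meals * animals_per_meal * cost_per_animal

print(f"Suggested offset: "
      f"${monthly_offset(MEAT_MEALS_PER_MONTH, ANIMALS_PER_MEAL, DOLLARS_PER_ANIMAL_SPARED):.2f}/month")
```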
Comment by technicalities on June Open Thread · 2015-06-04T16:34:28.191Z · score: 1 (1 votes) · EA · GW

What makes you say that?

His unusual concern for the balance of evidence (e.g. his pro-nuclear environmentalism), his animal welfarism, his attention to environmental x-risk, and his transparency about his interests.

Good idea to pitch directly. I'll draft something to send to him.

Comment by technicalities on June Open Thread · 2015-06-04T14:32:27.679Z · score: 2 (2 votes) · EA · GW

Hi folks,

George Monbiot – an effective altruist in spirit – just wrote a passionate attack on idealists entering 'soulless' industries like finance.

As far as self-direction, autonomy and social utility are concerned, many of those who enter these industries and never re-emerge might as well have dropped dead at graduation.

He completely fails to consider the financial discrepancy argument (that a finance salary can fund far more good than most careers), and dismisses outright the argument from replaceability (an EA financier who takes the job instead of a non-EA yields at least one unit of reform from inside).

I think rebutting this article would be a very good opportunity for promoting e2g. Would anyone be willing and able to get something onto Practical Ethics or the 80,000 Hours blog?

(I'd do it myself, but don't think my name would carry any weight. Is this Forum well-established enough to transfer gravitas onto its writers?)