Comment by technicalities on Existential risk as common cause · 2019-05-19T19:34:10.467Z · score: 3 (1 votes) · EA · GW

Thanks for this. I'm not very familiar with the context, but let me see if I understand. (In a first for me, I'm not sure whether to ask you to cite more scripture or add more formal argument.) Let's assume a Christian god, and call a rational consequence-counting believer an Optimising Christian.

Your overall point is that there are (or might be) two disjoint ethics, one for us and one for God, and that ours has a smaller scope, falling short of long-termism, for obvious reasons. Is this an orthodox view?

1. "The Bible says not to worry, since you can trust God to make things right. Planning is not worrying though. This puts a cap on the intensity of our longterm concern."

2. "Humans are obviously not as good at longtermism as God, so we can leave it to Him."

3. "Classical theism: at least parts of the future are fixed, and God promised us no (more) existential catastrophes. (Via flooding.)"

4. "Optimising Christians don't need to bring (maximally many) people into existence: it's supererogatory." But large parts of Christianity take population increase very seriously as an obligation (based on e.g. Genesis 1:28 or Psalm 127). Do you know of doctrine that Christian universalism stops at present people?

5. "Optimising Christians only need to 'satisfice' their fellows, raising them out of subsistence. Positive consequentialism is for God." This idea has a similar structure to negative utilitarianism, a moral system with an unusual number of philosophical difficulties. Why do bliss or happiness have no / insufficient moral weight? And, theologically: does orthodoxy say we don't need to make others (very) happy?

If I understand you, in your points (1) through (4) you appeal to a notion of God's agency outside of human action or natural laws. (So miracles only?) But a better theology of causation wouldn't rely on miracles, instead viewing the whole causal history of the universe as constituting God's agency. That interpretation, which at least doesn't contradict physics, would keep optimising Christians on the hook for x-risk.

Many of your points are appropriately hedged - e.g. "it might also be God’s job" - but this makes it difficult to read off actions from the claims. (You also appeal to a qualitative kind of Bayesian belief updating, e.g. "significant but not conclusive reason".) Are you familiar with the parliamentary model of moral uncertainty? It helps us act even while holding nuanced/confused views - e.g. for the causation question I raised above, each agent could place their own subjective probabilities on occasionalism, fatalism, hands-off theology and so on, and then work out what the decision should be. This kind of analysis could move your post from food-for-thought into a tool for working through ancient debates and imponderables.
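
For concreteness, here's a minimal sketch of the simplest version of that aggregation - credence-weighted voting, which is closer to "maximise expected choiceworthiness" than to the full parliamentary bargaining procedure. Every number below is a placeholder I've invented for illustration, not anything from your post.

```python
# Toy credence-weighted aggregation over theologies of causation.
# All credences and "votes" are invented placeholders for illustration.
credences = {
    "occasionalism": 0.2,       # God directly causes every event
    "fatalism": 0.1,            # the key outcomes are fixed regardless of us
    "hands_off_theology": 0.7,  # ordinary causes just are God's agency
}

# How strongly each view endorses humans working on x-risk, on a -1..+1 scale.
endorsement = {
    "occasionalism": 0.0,
    "fatalism": -0.2,
    "hands_off_theology": 0.9,
}

# Simplest possible aggregation: weight each view's vote by your credence in it.
# (The real parliamentary model adds bargaining between the "delegates".)
score = sum(credences[view] * endorsement[view] for view in credences)
print(f"Credence-weighted endorsement of x-risk work: {score:+.2f}")  # +0.61
```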

Comment by technicalities on How does one live/do community as an Effective Altruist? · 2019-05-16T20:50:24.359Z · score: 11 (7 votes) · EA · GW

Yudkowsky once officiated at a wedding. I find it quite beautiful.

Comment by technicalities on Which scientific discovery was most ahead of its time? · 2019-05-16T19:50:41.182Z · score: 13 (5 votes) · EA · GW

More engineering than science, but Turing's 1944 'Delilah' system for portable speech encipherment had no equivalent for more than a decade. (His biographer claims "30 years" but I don't know what he's comparing it to.) It was never deployed and was classified by the British government, so it had no impact.

Comment by technicalities on Legal psychedelic retreats launching in Jamaica · 2019-04-19T08:45:38.098Z · score: 3 (2 votes) · EA · GW

People probably won't give those examples here, for civility reasons. The SSC post linked above covers some practices Greg probably means, using historical examples.

Comment by technicalities on Who is working on finding "Cause X"? · 2019-04-11T19:59:41.035Z · score: 14 (8 votes) · EA · GW

One great example is the pain gap / access abyss. The term was only coined around 2017, got some attention at EA Global London 2017 (?), and then OPIS stepped up. I don't think the OPIS staff were doing a cause-neutral search for this (they were founded in 2016) so much as it was independent convergence.

Comment by technicalities on What skills would you like 1-5 EAs to develop? · 2019-03-28T19:44:52.059Z · score: 2 (2 votes) · EA · GW

Related news for the suffering engineering idea (but sadly also related for the cognition engineering one).

Comment by technicalities on How to Understand and Mitigate Risk (Crosspost from LessWrong) · 2019-03-14T20:45:49.568Z · score: 4 (3 votes) · EA · GW

I really like this, particularly how the conceptual splits lead to the appropriate mitigations.

The best taxonomy of uncertainty I've ever seen is this great paper by some physicists reflecting on the Great Recession. It's ordinal and gives a bit more granularity to the stats ("Opaque") branch of your tree, and also has a (half-serious) capstone category for catching events beyond reason:

1. "Complete certainty". You are in a Newtonian clockwork universe with no residuals, no observer effects, utterly stable parameters. So, given perfect information, you yield perfect predictions.
2. "Risk without uncertainty". You know a probability distribution for an exhaustive set of outcomes. No statistical inference needed. This is life in a hypothetical honest casino, where the rules are transparent and always followed. This situation bears little resemblance to financial markets.
3. "Fully Reducible Uncertainty". There is one probability distribution over a set of known outcomes, but parameters are unknown. Like an honest casino, but one in which the odds are not posted and must therefore be inferred from experience. In broader terms, fully reducible uncertainty describes a world in which a single model generates all outcomes, and this model is parameterized by a finite number of unknown parameters that do not change over time and which can be estimated with an arbitrary degree of precision given enough data. As sample size increases, classical inference brings this down to level 2.
4. "Partially Reducible Uncertainty". The distribution generating the data changes too frequently or is too complex to be estimated, or it consists in several nonperiodic regimes. Statistical inference cannot ever reduce this uncertainty to risk. Four sources:
(1) stochastic or time-varying parameters that vary too frequently to be estimated accurately;
(2) nonlinearities too complex to be captured by existing models, techniques, and datasets;
(3) non-stationarities and non-ergodicities that render useless the Law of Large Numbers, Central Limit Theorem, and other methods of statistical inference and approximation;
and (4) the dependence on relevant but unknown and unknowable conditioning information...
5. "Irreducible uncertainty". Ignorance so complete that it cannot be reduced using data: no distribution, so no success in risk management. Such uncertainty is beyond the reach of probabilistic reasoning, statistical inference, and any meaningful quantification. This type of uncertainty is the domain of philosophers and religious leaders, who focus on not only the unknown, but the unknowable.
Comment by technicalities on Suffering of the Nonexistent · 2019-03-03T10:33:11.684Z · score: 2 (2 votes) · EA · GW

There's a small, new literature analysing the subset of nonexistence I think you mean, under the name "impossible worlds". (The authors have no moral or meta-ethical aims.) It might help to use their typology of impossible situations: Impossible Ways vs Logic Violators vs Classical Logic Violators vs Contradiction-Realizers.

To avoid confusion, consider 'necessarily-nonexistent' or 'impossible moral patients' or some new coinage like that, instead of just 'nonexistent beings'; otherwise people will think you're talking about the old Nonidentity Problem.

I think you'll struggle to make progress, because the intuition that only possible people can be moral patients is so strong, stronger than the one about electrons or microbial life and so on. In the absence of positive reasons (rather than just speculative caution), the project can be expected to move attention away from moral patients to nonpatients - at least, your attention.

Meta: If you don't want to edit out the thirteen paragraphs of preamble, maybe add a biggish summary paragraph at the top; the first time I read it (skimming, but still) I couldn't find the proposition.

Comment by technicalities on Existential risk as common cause · 2019-03-01T15:34:08.128Z · score: 1 (1 votes) · EA · GW

Ah I see. Agreed - thanks for clarifying.

Comment by technicalities on What skills would you like 1-5 EAs to develop? · 2019-03-01T15:28:50.980Z · score: 7 (5 votes) · EA · GW

Re: CSR. George Howlett started Effective Workplace Activism a couple of years ago, but it didn't take off that much. Their handbook is useful.

I tried quite hard to change my large corporation's charity selection process (maybe 50 hours' work), but found the stubborn localism and fuzzies-orientation impossible to budge (for someone of my persuasiveness and seniority).

Comment by technicalities on Three Biases That Made Me Believe in AI Risk · 2019-02-19T20:11:17.462Z · score: 3 (2 votes) · EA · GW

Example of a practical benefit from taking the intentional stance: this (n=116) study of teaching programming by personalising the editor:

http://faculty.washington.edu/ajko/papers/Lee2011Gidget.pdf

Comment by technicalities on Why do you reject negative utilitarianism? · 2019-02-17T19:27:30.152Z · score: 2 (2 votes) · EA · GW

Re: 2. Here are a few.

Comment by technicalities on Existential risk as common cause · 2019-02-15T10:38:23.191Z · score: 1 (1 votes) · EA · GW

True - but how many people hold these inverses to be their primary value? (That is, I think the argument above is useful because almost everyone has something in the Goods set.)

Comment by technicalities on Existential risk as common cause · 2019-02-13T00:31:25.143Z · score: 2 (2 votes) · EA · GW

I mean that the end of the world isn't a bad outcome to someone who only values the absence of suffering, and who is perfectly indifferent between all 'positive' states. (This is Ord's definition of absolute NU, so I don't think I'm straw-manning that kind.) And if something isn't bad (and doesn't prevent any good), a utilitarian 'doesn't have to work on it' in the sense that there's no moral imperative to.

(1) That makes sense. But there's an escalation problem: a worse risk looks better to an ANU (see below).

(2) One dreadful idea is that self-replicators would do the anti-suffering work, obviating the need for sentient guardians, but I see what you're saying. Again though, this uncertainty about moral patients licenses ANU work on x-risks to humans... but only while moving the degenerate 'solution' upward, to valuing risks that destroy more classes of candidate moral patients. At the limit, the end of the entire universe is indisputably optimal to an ANU. So you're right about Earth x-risks (which are mostly what people talk about) but not for really far-out sci-fi ones, which ANU seems to value.

Actually this degenerate motion might change matters practically: it seems improbable that it'd be harder to remove suffering with biotechnology than to destroy everything. Up to you if you're willing to bite the bullet on the remaining theoretical repugnance.

(To clarify, I think basically no negative utilitarian wants this, including those who identify with absolute NU. But that suggests that their utility function is more complex than they let on. You hint at this when you mention valuing an 'infinite game' of suffering alleviation. This doesn't make sense on the ANU account, because each iteration can only break even (not increase suffering) or lose (increase suffering).)

Most ethical views have degenerate points in them, but valuing the greatest destruction as highly as the greatest hedonic triumph is unusually repugnant, even among repugnant conclusions.

I don't think instrumentally valuing positive states helps with the x-risk question, because that instrumental value gets trumped by a sufficiently large amount of terminal value - again, e.g. by the end of all things.

(I'm not making claims about other kinds of NU.)

[Link] The option value of civilization

2019-01-06T09:58:17.919Z · score: 0 (3 votes)
Comment by technicalities on Existential risk as common cause · 2018-12-09T15:29:13.032Z · score: 4 (3 votes) · EA · GW

No idea, sorry. I know CSER have held at least one workshop about Trump and populism, so maybe try Julius Weitzdoerfer:

[Trump] will make people aware that they have to think about risks, but, in a world where scientific evidence isn't taken into account, all the threats we face will increase.
Comment by technicalities on Existential risk as common cause · 2018-12-09T15:17:36.405Z · score: 1 (1 votes) · EA · GW

You're right. I think I had in mind 'AI and nanotech' when I said that.

Comment by technicalities on Existential risk as common cause · 2018-12-08T11:40:01.325Z · score: 5 (4 votes) · EA · GW

I haven't read much deep ecology, but I model them as strict anti-interventionists rather than nature maximisers (or satisficers): isn't it that they value whatever 'the course of things without us' would be?

(They certainly don't mind particular deaths, or particular species extinctions.)

But even if I'm right about that, you're surely right that some would bite the bullet when universal extinction was threatened. Do you know any people who accept that maintaining a 'garden world' is implied by valuing nature in itself?

Comment by technicalities on Existential risk as common cause · 2018-12-08T11:33:47.739Z · score: 1 (1 votes) · EA · GW

Good point, thanks. It's definitely not a knock-down argument.

Comment by technicalities on Existential risk as common cause · 2018-12-08T11:31:54.767Z · score: 3 (3 votes) · EA · GW
what "confidence level" means

Good question. To be honest, it was just me intuiting the chance that all of the premises and exemptions are true, which maybe cashes out to your first option. I'm happy to use a conventional measure, if there's a convention on here.

Would also invite people who disagree to comment.

something like "extinction is less than 1% likely, not because..."

Interesting. This neatly sidesteps Ord's argument (about low extinction probability implying proportionally higher expected value) which I just added, above.

Another objection I missed, which I think is the clincher inside EA, is a kind of defensive empiricism, e.g. Jeff Kaufman:

I'm much more skeptical than most people I talk to, even most people in EA, about our ability to make progress without good feedback. This is where I think the argument for x-risk is weakest: how can we know if what we're doing is helping..?

I take this very seriously; it's why I focus on the ML branch of AI safety. If there is a response to this (excellent) philosophy, it might be that it's equivalent to risk aversion (the bad kind) somehow. Not sure.

Existential risk as common cause

2018-12-05T14:01:04.786Z · score: 28 (26 votes)
Comment by technicalities on The Effective Altruism Newsletter & Open Thread – September 2016 · 2016-10-16T16:11:03.340Z · score: 2 (2 votes) · EA · GW

Hello ChemaCB,

I had a look around and couldn't find too many full peer-reviewed models. (Yet: it's a young endeavour.) This is probably partially a principled reaction to the hard limits of solely quantitative approaches. Most researchers in the area explicitly call their work "shallow investigation": i.e. exploratory and pre-theoretical. To date, the empirical FHI papers tend to be piecemeal estimates and early methodological innovation, rather than full models. OpenPhil tends towards prior solicitation from experts and has so far done causes one at a time. GiveWell's evaluations are all QALY-based and piecemeal, though there's non-core formal stuff on there too.

There's hope: the modelling that has been done uses economic methods. Michael Dickens has built a model which strikes me as an excellent start, but it's not likely to win over sceptical institutional markers, because it is ex nihilo and doesn't cite anyone. (C++ code here, including weights.) Peter Hurford lists many individual empirical models in footnote 4 here. Here's Gordon Irlam's less formal one, with a wider perspective. Here's a more formal one just for public policy.

To win them over, you could frame it as "social choice theory" rather than cause prioritisation. So for the goal of getting academic approval, Sen, Binmore and Broome are your touchstones here, rather than Cotton-Barratt, Beckstead, and Christiano.

Your particular project proposal seems like an empirical successor to MacAskill's PhD thesis; I'd suggest looking for leads directly in the bibliography there.

I hope you see the above as evidence for the importance of your proposed research, rather than a disincentive to doing it.

Also, welcome!

Comment by technicalities on The Effective Altruism Newsletter & Open Thread – August 2016 · 2016-08-11T16:20:21.585Z · score: 0 (0 votes) · EA · GW

Project seeking analysis and developers:

An app for routing recurring meat offset payments to effective animal orgs. I've made a hackpad for it here.

Big questions for debate here:

  • Is this even an effective way of promoting animal welfare?
    a) would its absolute effect be positive? b) if not, is building it less bad than the counterfactual of an ineffective animal org doing it?
  • Anyone have any inkling of how to estimate the moral licensing cost to animal welfare?
  • Is anyone doing this already?
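
For concreteness, here's a minimal sketch of the routing calculation at the core of such an app. The org names, per-unit cost estimates, and split below are all placeholders, not real effectiveness figures.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class OffsetTarget:
    org: str
    cost_per_animal_year_averted: float  # placeholder estimate, in GBP

def monthly_payment(animal_years_per_month: float,
                    targets: list[OffsetTarget],
                    split: list[float]) -> dict[str, float]:
    """Route one month's offset payment across orgs in the given proportions."""
    assert abs(sum(split) - 1.0) < 1e-9, "split must sum to 1"
    return {
        t.org: round(animal_years_per_month * t.cost_per_animal_year_averted * s, 2)
        for t, s in zip(targets, split)
    }

# Hypothetical user: 1.5 animal-years of consumption a month, split 70/30
# between two unnamed orgs with invented per-unit cost estimates.
print(monthly_payment(1.5,
                      [OffsetTarget("org_a", 2.0), OffsetTarget("org_b", 3.5)],
                      [0.7, 0.3]))
```
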
Comment by technicalities on June Open Thread · 2015-06-04T16:34:28.191Z · score: 1 (1 votes) · EA · GW

What makes you say that?

His unusual concern for the balance of evidence (e.g. pro-nuclear environmentalism), his animal welfarism, his attention to environmental x-risk, and his transparency about his interests. Some examples:

Good idea to pitch directly. I'll draft something to send to him.

Comment by technicalities on June Open Thread · 2015-06-04T14:32:27.679Z · score: 2 (2 votes) · EA · GW

Hi folks,

George Monbiot – an effective altruist in spirit – just wrote a passionate attack on idealists entering 'soulless' industries like finance.

As far as self-direction, autonomy and social utility are concerned, many of those who enter these industries and never re-emerge might as well have dropped dead at graduation.

He completely fails to consider the financial-discrepancy argument, and dismisses outright the argument from replaceability (reform-from-inside, with an EA financier yielding at least one unit of reform).

I think rebutting this article would be a very good opportunity for promoting e2g. Would anyone be willing and able to get something onto Practical Ethics or the 80,000 Hours blog?

(I'd do it myself, but don't think my name would carry any weight. Is this Forum well-established enough to transfer gravitas onto its writers?)