Posts

Tips for overcoming low back pain 2020-03-24T16:41:24.383Z · score: 50 (21 votes)

Comments

Comment by magnusvinding on What analysis has been done of space colonization as a cause area? · 2019-10-12T10:20:22.861Z · score: 1 (1 votes) · EA · GW

You're welcome! :-)

Whether this is indeed a dissenting view seems unclear. Relative to the question of how space expansion would affect x-risk, it seems that environmentalists (of whom there are many) tend to believe it would increase such risks (though it's of course debatable how much weight to give their views). Some highly incomplete considerations can be found here: https://en.wikipedia.org/wiki/Space_colonization#Objections

The sentiment expressed in the following video by Bill Maher, i.e. that space expansion is a "dangerous idea" at this point, may well be shared by many people on reflection: https://www.youtube.com/watch?v=mrGFEW2Hb2g

One may say similar things in relation to whether it's a dissenting view on space expansion as a cause (even if we hold x-risk constant). For example, space expansion would most likely increase total suffering in expectation — see https://reducing-suffering.org/omelas-and-space-colonization/ — and one (probably unrepresentative) survey found that a significant plurality of people favored "minimizing suffering" as the ideal goal a future civilization should strive for: https://futureoflife.org/superintelligence-survey/.

Interestingly, the same survey also found that the vast majority of people want life to spread into space, which appears inconsistent with the plurality preference for minimizing suffering. An apparent case of (many) people's preferences contradicting themselves, at least in terms of the likely implications of these preferences.

Comment by magnusvinding on What analysis has been done of space colonization as a cause area? · 2019-10-10T16:10:54.879Z · score: 7 (6 votes) · EA · GW

Some have argued that space colonization would increase existential risks. Here is political scientist Daniel Deudney, whose book Dark Skies is due to be published by OUP this fall:

Once large scale expansion into space gets started, it will be very difficult to stop. My overall point is that we should stop viewing these ambitious space expansionist schemes as desirable, even if they are not yet feasible. Instead we should see them as deeply undesirable, and be glad that they are not yet feasible.[…] Space expansion may indeed be inevitable, but we should view this prospect as among the darkest technological dystopias. Space expansion should be put on the list of catastrophic and existential threats to humanity, and not seen as a way [to] solve or escape from them.

Quoted from: http://wgresearch.org/an-interview-with-daniel-h-deudney/

See also:

https://www.youtube.com/watch?v=6D09e6igS4o

https://docs.wixstatic.com/ugd/d9aaad_5c9b881731054ee8bca5fd30699e7df9.pdf

http://nautil.us/blog/-why-we-should-think-twice-about-colonizing-space

Regardless of one's values, it seems worth exploring the likely outcomes of space expansion in depth before pursuing it.

Comment by magnusvinding on How much EA analysis of AI safety as a cause area exists? · 2019-09-28T11:01:58.234Z · score: 1 (1 votes) · EA · GW

Thanks for the stab, Anthony. It's fairly fair. :-)

Some clarifying points:

First, I should note that my piece was written from the perspective of suffering-focused ethics.

Second, I would not say that "investment in AI safety work by the EA community today would only make sense if the probability of AI-catalyzed GCR were decently high". Even setting aside the question of what "decently high" means, I would note that:

1) Whether such investments in AI safety make sense depends in part on one's values. (Though another critique I would make is that "AI safety" is less well-defined than people often seem to think: https://magnusvinding.com/2018/12/14/is-ai-alignment-possible/, but more on this below.)

2) Even if "the probability of AI-catalyzed GCR" were decently high — say, >2 percent — this would not imply that one should focus on "AI safety" in a standard narrow sense (roughly: constructing the right software), nor that other risks are not greater in expectation (compared to the risks we commonly have in mind when we think of "AI-catalyzed catastrophic risks").

You write of "scenarios in which AGI becomes a catastrophic threat". But a question I would raise is: what does this mean? Do we all have a clear picture of this in our minds? This sounds to me like a rather broad class of scenarios, and my worry is that we all have "poorly written software" scenarios in mind, even though such scenarios could well comprise a relatively narrow subset of the entire class of "catastrophic scenarios involving AI".

Zooming out, my critique can be crudely summarized as pointing to two significant equivocations that I see doing an exceptional amount of work in many standard arguments for "prioritizing AI".

First, there is what we may call the AI safety equivocation (or motte and bailey): people commonly fail to distinguish between 1) a focus on future outcomes controlled by AI and 2) a focus on writing "safe" software. Accepting that we should adopt the former focus by no means implies we should adopt the latter. By (imperfect) analogy, to say that we should focus on future outcomes controlled by humans does not imply that we should focus primarily on writing safe human genomes.

The second is what we may call the intelligence equivocation, which is the one you described. We operate with two very different senses of the term "intelligence", namely 1) the ability to achieve goals in general (derived from Legg & Hutter, 2007), and 2) "intelligence" in the much narrower sense of "advanced cognitive abilities", roughly equivalent to IQ in humans.
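For reference, sense 1 can be made precise along the lines of Legg and Hutter's universal intelligence measure; the sketch below is my paraphrase of their 2007 formalization rather than anything specific to the present discussion:

```latex
% Legg & Hutter (2007), universal intelligence (paraphrased sketch):
% an agent \pi's intelligence is its expected performance across all
% computable environments \mu in E, weighted by environment simplicity,
% where K(\mu) is Kolmogorov complexity and V_\mu^\pi is expected total reward.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi}
```

The key point for the argument below is simply that this notion measures goal-achievement across environments in general, not IQ-like cognitive ability in particular.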

These two are often treated as virtually identical, and we fail to appreciate the rather enormous difference between them, as argued in, or evident from, books such as The Knowledge Illusion: Why We Never Think Alone, The Ascent of Man, The Evolution of Everything, and The Secret of Our Success. This was also the main point of my Reflections on Intelligence.

Intelligence2 lies entirely in the brain, whereas intelligence1 includes the brain and much more: all the rest of our well-adapted body parts (vocal cords, hands, upright walk — remove just one of these completely in all humans and human civilization is likely gone for good), not to mention our culture and technology as a whole. The latter is the level at which our ability to achieve goals really emerges: it derives not from any single advanced machine but from our entire economy, a vastly greater toolbox than what intelligence2 covers.

Thus, it is a mistake to assume that by boosting intelligence2 to vastly super-human levels we necessarily get intelligence1 at a vastly super-human level, not least since "human-level intelligence1" already includes vastly super-human intelligence2 in many cognitive domains.

Comment by magnusvinding on How much EA analysis of AI safety as a cause area exists? · 2019-09-19T17:26:43.665Z · score: 1 (1 votes) · EA · GW

In brief: the less the specific structure of an AGI determines future outcomes, the less relevant that structure is, and the less worthy of investment.

Comment by magnusvinding on How much EA analysis of AI safety as a cause area exists? · 2019-09-16T14:57:50.991Z · score: 2 (2 votes) · EA · GW

Interesting posts. Yet I don't see how they support the claim that what I described is unlikely. In particular, I don't see how "easy coordination" is in tension with what I wrote.

To clarify, competition that determines outcomes can readily happen within a framework of shared goals, and as instrumental to some overarching final goal. If the final goal is, say, to maximize economic growth (or if that is an important instrumental goal), this would likely lead to specialization and competition among various agents that try out different things, and which, by the nature of specialization, have imperfect information about what other agents know (not having such specialization would be much less efficient). In this respect, a future AI economy would resemble ours more than far-mode thinking suggests (this does not necessarily contradict your claim about easier coordination, though).

One reason I consider what I described likely is that I find it more likely that future software systems will consist of a multitude of specialized systems with quite different designs, even in the presence of AGI, as opposed to almost everything being done by copies of some singular AGI system. This "one system will take over everything" picture strikes me as far-mode thinking, and as unlikely not least given the history of technology and economic growth. I've outlined my view on this in the following e-book (though it's a bit dated in some ways): https://www.smashwords.com/books/view/655938 (short summary and review by Kaj Sotala: https://kajsotala.fi/2017/01/disjunctive-ai-scenarios-individual-or-collective-takeoff/)



Comment by magnusvinding on How much EA analysis of AI safety as a cause area exists? · 2019-09-15T15:24:25.221Z · score: 15 (6 votes) · EA · GW

Thanks for sharing and for the kind words. :-)

I should like to clarify that I also support FRI's approach to reducing AI s-risks. The issue is rather how large a fraction of our resources approaches of this kind deserve relative to other things. My view is that, relatively speaking, we very much underinvest in addressing other risks, by which I roughly mean "risks not stemming primarily from FOOM or sub-optimally written software" (which can still involve AI a great deal, of course). I would like to see a greater investment in broad, explorative research on s-risk scenarios and how we can reduce them.

In terms of explaining the (IMO) skewed focus, it seems to me that we mostly think about AI futures in far mode (see https://www.overcomingbias.com/2010/06/near-far-summary.html and https://www.overcomingbias.com/2010/10/the-future-seems-shiny.html). Perhaps the most significant way this shows up is that we intuitively think the future will be determined by a single agent, or a few agents, and what they want, as opposed to countless different agents, cooperating and competing, with many (for those future agents) non-intentional factors influencing the outcomes.

I'd argue scenarios of the latter kind are far more likely given not just the history of life and civilization but also general models of complex systems and innovation (variation and specialization seem essential, and the way these play out is unlikely to conform to a singular will in anything like the neat way far mode would portray it). Indeed, I believe such a scenario would be most likely to emerge even if a single universal AI ancestor took over and copied itself (specialization would be adaptive, and significant uncertainty about the exact information and (sub-)aims possessed by conspecifics would emerge).

In short, I think we place too much weight on simplistic toy models of the future, in turn neglecting scenarios that don't conform neatly to these, and the ways these could come about.

Comment by magnusvinding on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-10T14:58:30.906Z · score: 3 (2 votes) · EA · GW

> That's why the very first words of my comment were "I don't identify as a utilitarian."

I appreciate that, and as I noted, I think this is fine. :-)

I just wanted to flag this because it took me some time to work out whether you were replying based on 1) moral uncertainty/other frameworks, or 2) instrumental considerations relative to pure utilitarianism. I first assumed you were replying based on 2) (as Brian suggested), and I believe many others reading your answer might draw the same conclusion. But a closer reading made it clear to me that you were primarily replying based on 1).

Comment by magnusvinding on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-08T14:11:35.967Z · score: 7 (8 votes) · EA · GW

> The contractarian (and commonsense and pluralism, but the theory I would most invoke for theoretical understanding is contractarian) objection to such things greatly outweighs the utilitarian case.

It is worth noting that this is not, as it stands, a reply available to a pure traditional utilitarian.

> failing to leave one galaxy, let alone one solar system for existing beings out of billions of galaxies would be ludicrously monomaniacal and overconfident

But a relevant question here is whether that also holds true on a purely utilitarian view, as opposed to, say, a perspective that relies on various theories in some notional moral parliament.

It is, of course, perfectly fine to respond to the question "how do most utilitarians feel about X?" by saying "I'm not a utilitarian, but I am sympathetic to it, and here is how someone sympathetic to utilitarianism can reply by relying on other moral frameworks". But then it's worth being clear that the reply is not a defense of pure traditional utilitarianism — quite the contrary.

Comment by magnusvinding on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-07T09:37:20.632Z · score: 8 (8 votes) · EA · GW

Thanks for posting this, Richard. :-)

I think it is worth explaining what Knutsson's argument in fact is.

His argument is not that the replacement objection against traditional/classical utilitarianism (TU) is plausible. Rather, the argument is that the replacement objection against TU (as well as other consequentialist views it can be applied to, such as certain prioritarian views) is roughly as plausible as the world destruction argument is against negative utilitarianism (NU). And therefore, if one rejects NU and favors TU, or a similarly "replacement vulnerable" view, because of the world destruction argument, one must explain why the replacement argument is significantly less problematic for these other views.

That is, if one rejects such thought experiments in the case of TU and similar views because 1) endorsing or even entertaining such an idea would be sub-optimal in the bigger picture for cooperation reasons, 2) it would be overconfident to act on it even if one finds the underlying theory to be the most plausible one, 3) it leaves out "consideration Y", or 4) it seems like a strawman on closer examination, then Knutsson's point is that one can make similar points in the case of NU and world destruction with roughly equal plausibility.

As Knutsson writes in the abstract:

> The world destruction argument is not a reason to reject negative utilitarianism in favour of these other forms of consequentialism, because there are similar arguments against such theories that are at least as persuasive as the world destruction argument is against negative utilitarianism.



Comment by magnusvinding on Critique of Superintelligence Part 2 · 2019-06-20T13:20:06.848Z · score: 7 (4 votes) · EA · GW

Thanks for writing this. :-)

Just a friendly note: even as someone who largely agrees with you, I must say that I think a term like "absurd" is generally worth avoiding in relation to positions one disagrees with (I also say this as someone who is guilty of having used this term in similar contexts before).

I think it is better to use less emotionally laden terms, such as "highly unlikely" or "against everything we have observed so far", not least since "absurd" hardly adds anything of substance beyond what these alternatives can capture.

Among people who disagree strongly with one's position, "absurd" will probably not be received well, or at any rate not optimally. It may also lead others to label one as overconfident and incapable of thinking clearly about low-probability events. And those of us who try to express skepticism of the kind you do here already face enough of a headwind from people who shake their heads while thinking to themselves "they clearly just don't get it".


Other than that, I'm keen to ask: are you familiar with my book Reflections on Intelligence? It makes many of the same points that you make here. The same is true of many of the (other) resources found here: https://magnusvinding.com/2017/12/16/a-contra-ai-foom-reading-list/

Comment by magnusvinding on 1. What Is Moral Realism? · 2018-06-06T08:05:58.457Z · score: 1 (1 votes) · EA · GW

Thanks for your reply :-)

> For instance, I don't understand how [open individualism] differs from empty individualism. I'd understand if these are different framings or different metaphores, but if we assume that we're talking about positions that can be true or false, I don't understand what we're arguing about when asking whether open individualism [is] true, or when discussing open vs. empty individualism.

I agree completely. I identify equally as an open and empty individualist. As I've written elsewhere (in You Are Them): "I think these “positions” are really just two different ways of expressing the same truth. They merely define the label of “same person” in different ways."

> Also, I think it's perfectly coherent to have egoistic goals even under a reductionist view of personal identity.

I guess it depends on what those egoistic goals are. Some egoistic goals are highly instrumentally useful for the benefit of others even when one doesn't intend to benefit others (cf. Smith's invisible hand, the deep wisdom of Ayn Rand, and, more generally, the fact that many of our selfish desires probably shouldn't be expected to be that detrimental to others, or at least to our in-group, given that we evolved as social creatures). This is, I think, a confounding factor that makes it seem plausible to say that pursuing such goals is coherent and non-problematic in light of what you call a reductionist view of personal identity. Yet if it is transparent that the pursuit of these egoistic goals comes at the cost of many other beings' intense suffering, I think we would be reluctant to say that pursuing them is "perfectly coherent", especially in light of such a view of personal identity (though many would probably say so regardless; one can, for example, also argue that it is incoherent by appeal to inconsistency: "we should not treat the same/sufficiently similar entities differently"). For instance, would we, with this view of personal identity, really claim that it is "perfectly coherent" to choose to push button A: "you get a brand new pair of shorts", when we could have pushed button B: "You prevent 100 years of torture (for someone else in one sense, yet for yourself in another, quite real sense) which will not be prevented if you push button A"? It seems much more plausible to deem it perfectly coherent to have a selfish desire to start a company, or to signal coolness or otherwise gain personal satisfaction by being an effective altruist.

> But if that's all we mean by "moral realism" then it would be rather trivial.

I don't quite understand why you would call this trivial. Perhaps it is trivial that many of us, perhaps even the vast majority, agree. Yet, as mentioned, the acceptance of a principle like "avoid causing unnecessary suffering" is extremely significant in terms of its practical implications: many have argued that it implies the adoption of veganism (where the effects on wildlife, a potential confounding factor, are often disregarded, of course), and one could even employ it to argue against space colonization (depending on what we hold to constitute necessity). So, in terms of practical consequences at least, I'm almost tempted to say that it could barely be more significant. And it's not clear to me that agreement on a highly detailed axiology would necessarily have weightier, or even clearer, implications than what we could get off the ground from quite crude principles (it seems to me there may well be strong diminishing returns here, as you also seem to weakly agree with in light of the final sentence of your reply). This is also because the large range of error produced by empirical uncertainty may, on consequentialist views at least, make the difference in practice between realizing a detailed and a crude axiology a lot less clear than the difference between the two axiologies at the purely theoretical level, perhaps even so much so as to make it virtually vanish in many cases.

> Maybe my criteria are a bit too strict [...]

I'm just wondering: too strict for what purpose?

This may seem a bit disconnected, but I wanted to share an analogy that just came to mind. Imagine mathematics were a rather different field where we only agreed about simple arithmetic such as 2 + 2 = 4, and where everything beyond that were like the Riemann hypothesis: there is no consensus, and clear answers appear beyond our grasp. Would we then say that our recognition that 2 + 2 = 4 holds true, at least in some sense (given intuitive axioms, say), is trivial with respect to asserting some form of mathematical realism? And would finding widely agreed-upon solutions to our harder problems constitute a significant step toward deciding whether we should accept such a realism? I fail to see how it would.

Comment by magnusvinding on 1. What Is Moral Realism? · 2018-06-04T12:29:13.142Z · score: 1 (1 votes) · EA · GW

Thanks for writing this, Lukas. :-)

As a self-identified moral realist, I did not find my own view represented in this post, although perhaps Railton’s naturalist position is the one that comes closest. I can identify as an objectivist, a constructivist, and a subjectivist, indeed even as a Randian objectivist. It all rests on the nature of the ill-specified “subject” in question. If one is an open individualist, then subjectivism and objectivism will, one can argue, collapse into one. According to open individualism, the adoption of Randianism (or, in Sidgwick’s terminology, “rational egoism”) implies that we should do what is best for all sentient beings. In other words, subjectivism without indefensibly demarcated subjects (or at least subjects whose demarcation is not granted unjustifiable metaphysical significance) is equivalent to objectivism. Or so I would argue.

As for Moore’s open question argument (which I realize was not explored in much depth here), it seems to me, as has been pointed out by others, that there can be an ontological identity between that which different words refer to even if these words are not commonly reckoned strictly synonymous. For example: Is water the same as H2O? Is the brain the mind? These questions are hardly meaningless, even if we think the answer to both questions is yes. Beyond that, one can also defend the view that “the good” is a larger set of which any specific good thing we can point to is merely a subset, and hence the question can also make sense in this way (i.e. it becomes a matter of whether something is part of “the good”).

To turn the tables a bit here, I would say that to reject moral realism, on my account, one would need to say that there is no genuine normative force or property in, say, a state of extreme suffering (consider being fried in a brazen bull for concreteness). [And I think one can fairly argue that to say such a state has “genuine normative force” is very much an understatement.]

“Normative for the experiencing subject or for all agents?” one may then ask. Yet on my account of personal identity, the open individualist account (cf. https://en.wikipedia.org/wiki/Open_individualism and https://www.smashwords.com/books/view/719903), there is no fundamental distinction, and thus my answer would simply be: yes, for the experiencing subject, and hence for all agents (this is where our intuitions scream, of course, unless we are willing to suspend our strong, evolutionarily adaptive sense of self as some entity that rides around in some small part of physical reality). One may then object that different agents occupy genuinely different coordinates in spacetime, yet the same can be said of what we usually consider the same agent. So there is really no fundamental difference here: if we say that it is genuinely normative for Tim at t1 (or simply Tim1) to ensure that Tim at t2 (or simply Tim2) suffers less, then why wouldn’t the same be true of Tim1 with respect to John1, 2, 3…?

With respect to the One Compelling Axiology you mention, Lukas, I am not sure why you would set the bar so high in terms of specificity in order to accept a realist view. I mean, if “all philosophers or philosophically-inclined reasoners” found plausible a simple yet non-exhaustive principle like “reduce unnecessary suffering”, why would that not be good enough to demonstrate its "realism" (on your account) when a more specific one would? It is unclear to me why greater specificity should be important, especially since even such an unspecific principle would still have plenty of practical relevance (many people can admit that they are not living in accordance with this principle, even as they do accept it).