Posts

Why those who care about catastrophic and existential risk should care about autonomous weapons 2020-11-11T17:27:01.323Z

Comments

Comment by aaguirre on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-24T17:02:24.983Z · EA · GW

Thanks for your replies here, and for your earlier longer posts that were helpful in understanding the skeptical side of the argument, even if I only saw them after writing my piece. As replies to some of your points above:

But it means that banning AWSs altogether would be harmful, as it would involve sacrificing this opportunity. We don't want to lay the groundwork for a ban on AGI, we want to lay the groundwork for safe, responsible development

It is unclear to me what you suggest we would be “sacrificing” if militaries did not have the legal opportunity to use lethal AWSs. The opportunity I see is to make decisions, in a globally coordinated way and amongst potentially adversarial powers, about acceptable and unacceptable delegations of human decisions to machines, and to enforce those decisions. I can’t see how success in doing so would sacrifice the opportunity. Moreover, a ban on all autonomous weapons (including purely defensive nonlethal ones) is very unlikely and not really what anyone is calling for, so there will be plenty of opportunity to “practice” on non-lethal AWs, defenses against AWs, etc., on the technical front; there will also be other opportunities to “practice” on which life-and-death decisions should and should not be delegated, for example in judicial review.

Have we learned much from the Ottawa Treaty (which technically prohibits a certain class of AWS) that will help us with AGI coordination? I don't know. Maybe

Though I understand why you have drawn a connection to the Ottawa Treaty because of its treatment of landmines, I believe this is the wrong analogy for AWSs. I believe the Biological Weapons Convention is more apt, and I think the answer would be "yes," we have learned something about international governance and coordination for dangerous technology from the BWC. I also believe that the agreement not to use landmines is a global good.

Surely it would be easier to achieve wins on coordinating issues like civilian AI, supercomputing, internet connectivity, or many other tech governance issues which affect military (and other) powers?

I am not sure why you are confident it would be easier to reach binding agreements on these suggested matters. To the extent that it is possible, that may itself suggest there is little value to be gained. What these issues generally lack is popular or political will: there is little appetite for an international agreement on, e.g., internet connectivity, because it’s not as high-stakes or consequential as lethal AWSs and, to a first approximation, nobody cares. The point is to show that agreement can be reached in an arena that is consequential for militaries, and this is our best opportunity to do so.

Different practitioners will select different answers to the moral questions that you raise, and the burden of argument is on you to show that we should expect practitioners to pick wrong answers that will make AWSs less ethical than the alternatives.

There are a lot of important and difficult moral questions here worth a long discussion, as well as more practical questions about whether systems and chains-of-command are in fact created such that responsibility rests somewhere rather than nowhere. I've got my own beliefs on those, which may or may not be shared, but I actually don't think we need to address them to judge the importance of limitations on autonomous weapons. I don't necessarily agree that the burden is on me, though: it is certainly both legally (and I believe ethically) "your" responsibility, if you are creating a new system for killing people, to show that it is consistent with international law, for example.

At this point, it's been three years since FLI released their slaughterbots video, and despite all the talk of how it is cheap and feasible with currently available or almost-available technology, I don't think anyone is publicly developing such drones - suggesting it's really not so easy or useful

At the time of release, Slaughterbots was meant to be speculative and to raise awareness of the prospective risk. AGI and full-scale nuclear war haven't happened either--that doesn't make those risks unreal. Would you lodge the same complaint against “The Day After”? Regardless, as to whether people are developing such drones, I suggest you review the information on such systems in the PAX report "Slippery Slope," especially regarding the Kargu drones from Turkey. I think you will conclude that it is relatively “easy” and “useful” to develop lethal AWSs.

Responding to paragraphs starting with “A mass drone swarm terror attack…” through the paragraph starting with “Now, obviously this could…”: Your analysis here is highly speculative and presupposes a particular pattern in the development of offensive and defensive capabilities of lethal AWSs. I welcome any evidence you have on these points, but your scenario seems to a) assume limited development of offensive capabilities, b) assume the willingness and ability to implement layers of defensive measures at all “soft” targets, c) focus only on drones rather than the many other possible lethal AWSs, and d) still produce considerable costs--both in countermeasures and in psychological harm--which suggests a steep price to be paid for lethal AWSs even in a rosy scenario.

Finally, an order of hundreds of thousands of drones, designed as fully autonomous killing machines, is quite industrially significant. It's just not something that a nonstate actor can pull off. And the idea that the military would directly construct mass murder drones and then lose them to terrorists is not realistic.

I believe we agree that for serious WMDs (say, 1000+ casualties), the far greater risk is smaller state actors producing or buying them, rather than a rogue terror organization. As a reminder, it won’t (just) be militaries making these weapons, but weapons makers who can then sell them (e.g., look at the export of drones by China and Turkey throughout many high-conflict regions). Further, once produced or sold to a state actor, weapons, including WMDs, can and do come into the possession of rogue actors. Look no further than the history of the Nunn-Lugar Cooperative Threat Reduction program for real cases and close calls, the transfer of weapons from Syria to Hezbollah, etc.

I don't think the history of armed conflict supports the view that people become much more willing to go to war when their weapons become more precise.

That may or may not be the case; as you indicate, it's mixed in with a lot of factors. But precision (and lack of infrastructure destruction) are actually not the only, or even the primary, reasons I expect AWs will lead to wider conflict, depending on the context. In addition to potentially being more precise, lethal AWSs will be less attributable to their source and will present less risk to their users (in both physical and financial costs). At least in terms of violence (if not, to date, war), the latter seems to make a large difference, as exhibited by, for example, the US (human-piloted) drone program.

The mean expectations are closer to the lower ends of these ranges.

I'm not sure how to interpret this. The lower ends of the ranges are the lower ends of the ranges given by various estimators. The mean of this range is somewhere in the middle, depending on how you weight them.

The question of whether small-scale conflicts will increase enough to counterbalance the lives saved by substituting AWs for soldiers is, I agree, hard to predict. But unless you take the optimistic end of the spectrum (as I guess you have), I don't see how the numbers can balance at all once large-scale wars are included.

Someone (maybe me) should take a hard look at these recent arguments you cite claiming increases in escalation risk. The track record for speculation on the impacts of new military tech is not good so it needs careful vetting.

I welcome your investigation. I agree that speculation on the impacts of new military tech has not had a great track record (in any direction), which is why precaution is a wise course of action.

As the absolute minimum to address #3, I think advocacy on AWSs should be compared to advocacy on other new military tech like hypersonics and AI-enabled cyber weapons which come with their own fair share of similar worries.

I agree that other emerging technologies (including some you don’t mention, like synthetic bioweapons) deserve greater attention. But that doesn’t mean lethal AWSs should be ignored.

If you stigmatize them in the Anglosphere popular imagination as a precursor to a multilateral agreement, then that's basically what you're doing.

This is a very strange argument to me. Saying something is problematic, and being willing in principle not to do it, seems like a pretty necessary precursor to making an agreement with others not to do it. Moreover, if something is ethically wrong, we should be willing to not do it even if others do it — but far, far better to enter into an agreement so that they don't.

Comment by aaguirre on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-23T23:33:17.867Z · EA · GW

While such systems could be used on civilian targets, they presumably would not be specialized as such — i.e., even if you can use an antitank weapon on people, that's not really what it's for, and I expect most antitank weapons, if they're used, are used on tanks.

Comment by aaguirre on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-23T23:29:47.510Z · EA · GW

That's probably true. The more important point, I think, is that this prohibition would be a potential/future, rather than actual, loss to most current arms-makers.

Comment by aaguirre on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-23T23:25:15.995Z · EA · GW

Fair enough. It would be really great to have better research on this incredibly important question.

Though given the level of uncertainty, it seems like launching an all-out (even if successful) first strike is at least (say) 50% likely to collapse your own civilization, and that alone should be enough.

Comment by aaguirre on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-23T23:23:22.838Z · EA · GW

Thanks for that fix!

Comment by aaguirre on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-23T23:22:20.103Z · EA · GW

Thanks for your comments! I've put a few replies, here and elsewhere.

Apologies for writing unclearly here. I did not mean to imply that

each participant is better off unilaterally switching into cooperative mode, even if no one else does so?

Instead I agree that

the key problem is creating a mechanism by which that coordination/cooperation can arise and be stable.

Comment by aaguirre on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-17T17:12:48.871Z · EA · GW

I think I was on Brave browser, which may store less locally, so it's possible that was a contributor.

Comment by aaguirre on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-16T16:42:21.556Z · EA · GW

No, that was just a super rough estimate: world GDP of ~$100 Tn per year, so one decade's worth is ~$1 Qd, and I'm guessing a global nuclear war would wipe out a significant fraction of that.
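For concreteness, here is a minimal sketch of that back-of-envelope arithmetic; the ~$100 Tn/yr figure and the one-decade horizon are just the rough assumptions above, not outputs of any model:

```latex
% Back-of-envelope only: the inputs (gross world product ~$100 Tn/yr and a
% one-decade horizon) are rough assumptions, not model estimates.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
\underbrace{\$100~\text{Tn/yr}}_{\text{world GDP}} \times 10~\text{yr}
  \;\approx\; \$1000~\text{Tn} \;=\; \$1~\text{Qd}
\]
% If a global nuclear war destroyed a fraction f of a decade's output, the
% implied cost would be roughly f x $1 Qd (e.g. f = 0.3 gives ~$300 Tn).
\end{document}
```

The genuinely uncertain input is the fraction of a decade's output destroyed, not the multiplication itself.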

My intuition has been that, at least in the medium term, unless AWs are self-replicating they'd pose GCR-level risk primarily through escalation to nuclear war; but if there are other scenarios, those would be interesting to know about (by PM if you're worried about info hazards).

Comment by aaguirre on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-16T16:34:24.432Z · EA · GW

The problem is I was not logged in on that browser. It asked me to log in to post the comment, and after I did so the comment was gone.

Comment by aaguirre on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-12T21:37:20.063Z · EA · GW

Indeed, the survey by CSET linked above is somewhat frustrating in that it does not directly address autonomous weapons at all. The closest it comes is talking about "U.S. Battlefield" and "Global Battlefield," but the specific example applications surveyed are:

U.S. Battlefield -- As part of a larger initiative to assist U.S. combat efforts, a DOD contract provides funding for a project to apply machine learning capabilities to enhance soldier effectiveness in the battlefield through the use of augmented reality headsets. Your company has relevant expertise and considers putting in a bid for the contract.

Global Battlefield -- As part of a larger initiative with U.S. allies to enhance global security, a DOD contract provides funding for a project to apply machine learning capabilities to enhance soldier effectiveness in the battlefield through the use of augmented reality headsets. Your company has relevant expertise and considers putting in a bid for the contract.

So there was a missed opportunity to better disambiguate things that many AI researchers are very concerned about (including lethal autonomous weapons) from those that very few are (e.g. taking money from the DoD to work on research with humanitarian goals). The survey captures some of this diversity but by avoiding the issues that many find most problematic only tells part of the story.

It's also worth noting that the response rate to the survey was extremely low, so there is a danger of serious systematic response bias.

Comment by aaguirre on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-12T15:16:33.953Z · EA · GW

Thanks for pointing these out. Very frustratingly, I just wrote out a lengthy response (to the first of the linked posts) that this platform lost when I tried to post it. I won't try to reconstruct it, but will just note for now that our conclusions and emphases are quite different, probably most of all in terms of:

  • Our greater emphasis on the WMD angle and the qualitatively different dynamics of future AWs
  • Our greater emphasis on potential escalation into great-power wars
  • Our belief that, while international agreement (rather than unilateral eschewal) is the goal, stigmatization is a necessary precursor to such an agreement.