
Part One Compliance and Accountability, 1 Regulating New Weapons Technology

Rebecca Crootof

From: The Impact of Emerging Technologies on the Law of Armed Conflict

Edited By: MAJ Ronald T.P. Alcala, Eric Talbot Jensen

Regulating New Weapons Technology

I.  Introduction

When confronted with a new weapons technology, international law scholars, military lawyers, and civil society activists regularly raise two questions: Are new regulations needed?2 And are they needed now?3

Answering these in the affirmative will lead to a Matryoshka doll-like nesting of additional questions to determine what form the new regulations should take.4 Is the technology developing slowly or rapidly? How dangerous is it, and how reversible are the potential harms associated with its use? Are the social, political, economic, and other impacts of the technology well understood? How easy is it to create the technology? To distribute it? To field it? Do we want to regulate certain kinds of uses or consequences of usage? Should the regulations be grounded in written law, custom, or “softer” guidelines, such as codes of conduct or best standards?5 Should new regulations be more tech-specific or more tech-neutral? How flexible should the regulations be? How expansive? How limited? Is the technology even amenable to regulation?6 And, by extension, if it is difficult to regulate the technology directly, might indirect regulations—such as market incentives, social norms, or architectural constraints—be more effective?7 Which individual or institution is best suited to creating these regulations? To interpreting them? To enforcing them? Is an entirely new institution needed?

Many engaging pieces address these questions by focusing on one exciting or concerning new technology—say, autonomous weapon systems, cyber operations, swarming drones, nanobots, genetically engineered viruses, or transformative artificial intelligence—and its current or likely impact on a law or legal regime. The aim of this chapter, however, is to step back and contemplate more generally whether and when new regulations are appropriate. Accordingly, Section II reviews the main categories of technology-fostered legal disruption, Section III tackles the question of whether a given technology will require new law, and Section IV weighs the respective benefits of precautionary bans, a wait-and-see approach, and proactive regulation.

II.  Technology-Fostered Legal Disruption

Law is not handed down from gods; it is a tool made by human beings to regulate human behavior. Accordingly, any significant social change—a technological development, a political revolution, an economic shift—that alters how humans interact with each other can spur legal evolution. Nonetheless, it is worthwhile to consider technology-fostered legal disruptions in isolation, as there are a number of recurring problems, principles, and methodologies that govern how law tends to respond to legally disruptive technologies.8

For the purposes of this chapter, I define “technology” as any combination of tools, skills, processes, and techniques by which human capability is extended.9 In expanding what human beings are capable of doing, new technologies sometimes permit entirely new forms of conduct or generate new negative externalities, which in turn create various kinds of legal uncertainty.

It is important to recognize at the outset, however, that most technological innovations cause little to no legal disruption. As Lyria Bennett Moses has observed,

Despite occasional statements that some new technology changes everything, legal problems stemming from technological change are relatively rare and quite specific. Most of the time, a law pre-dating technological change will apply in the new circumstances without any confusion. For example, traffic rules continue to apply to cars with electric windows, and no sane person would seek to challenge these laws as inapplicable or write an article calling for the law to be clarified. . . . Suggestions that law is unable to keep up in a race against technology or that law does not apply to a new technology . . . overstate the extent of the problem.10

The interesting cases make headlines and draw attention precisely because they are unusual—because they are causing legal confusion that deserves acknowledgment or requires action.

Additionally, although often misrepresented as a one-way ratchet, law influences the development of new technology just as much as technology influences the development of law. While this chapter presumes that a particular new technology might require “a systematic change to the law or legal institutions in order to reproduce, or if necessary displace, an existing balance of values,”11 it does not assume that any given change is predetermined. New technologies may enable similar human conduct in different contexts, but extant regulatory forces in each context will influence whether human beings engage in that conduct—and, by extension, will influence how much stress a new technology puts on a legal regime.12

With those qualifiers acknowledged, this section reviews four ways in which a new technology can be legally disruptive. It can (A) alter how rules are created or how law is used; (B) make more salient an ongoing but unresolved issue, gap, or contradiction in the existing rules; (C) introduce new uncertainty, usually regarding the application or scope of existing rules; and (D) upend foundational tenets of a legal regime, necessitating a reconceptualization of its aims and purposes.

A.  Significantly Changing How Law Is Created or Used

New technologies can significantly alter how regulations are created. At the international level, developments in communications and travel technologies have sped the dissemination of information and increased State interaction, which in turn has fostered the relatively swift creation of treaty law, customary international law, soft law,13 and other formal and subsidiary means of determining States’ international legal rights and obligations.14

New technologies have also expanded the number and kinds of entities involved in the process of creating new international law. International organizations, multinational industries, nonstate armed groups,15 nongovernmental organizations,16 and even individual “norm entrepreneurs”17 now have a seat at a table once reserved for States. This plethora of voices involved in the development of international law has expanded what it regulates and how it is enforced, resulting in rules that are less tailored to State interests.

Additionally, new technologies might alter how law is used. Charles Dunlap has credited new communications technology with enabling “lawfare,” which he defines as “the strategy of using—or misusing—law as a substitute for traditional military means to achieve an operational objective.”18 For example, one party to a conflict might inaccurately characterize an action by the other as a violation of the law of armed conflict and loudly protest it, with the aim of undermining domestic and international support for the other side. Dunlap notes that, “[i]n modern popular democracies, even a limited armed conflict requires a substantial base of public support,” and that this support “ ‘can erode or even reverse itself rapidly, no matter how worthy the political objective, if people believe that the war is being conducted in an unfair, inhumane, or iniquitous way.’ ”19 While acknowledging that parties to a conflict have always used the “perception or fact of wrongdoing by their opponents as a means of catalyzing support among their own people, and eroding it among their foes,” Dunlap argues that modern information technologies have “vastly increased the scope, velocity, and effectiveness of such efforts.”20

To say that new technology alters how regulations are created or used implies nothing about whether these changes are good or bad. Rather, this kind of legal disruption will have a mixture of benefits and drawbacks. The proliferation of international law helps States avoid or resolve disputes; it also facilitates legal fragmentation. Speeding up the development of international legal obligations has expanded both the activities they regulate and opportunities for conflict among them.21

B.  Highlighting Existing Legal Ambiguity

Sometimes, a new technology enables conduct that makes more salient a previously under-considered ambiguity or contradiction in existing law. The use of drones for targeted killing, for example, highlighted questions regarding the appropriate geographic and temporal limitations of armed conflicts, as well as the relationship between the law of armed conflict and international human rights law.

A variant on this kind of legal disruption occurs when a difference in degree becomes a difference in kind, requiring clarification of a once-infrequent ambiguity. Consider the ongoing debate regarding the customary law of self-defense. Under the U.N. Charter, a State may only unilaterally use force against another when exercising its customary right of self-defense in response to an armed attack.22 But what is the scope of that customary right? It is generally accepted that it includes anticipatory self-defense, per the standard established in the 1837 Caroline case: States may use force in self-defense prior to an armed attack when the need to act is “instant, overwhelming, and leaving no choice of means, and no moment for deliberation.”23 Preemptive self-defense—which suggests that a State can take defensive action if it observes another party preparing for an armed attack, at least if “the potential victim State has good reasons to believe that the attack is likely, is near at hand, and, if it takes place, will result in significant harm”24—is more divisive, but this interpretation is still considered legitimate by many States and scholars.25 Finally, and most controversially, preventive self-defense would permit States to use force “to halt a serious future threat of an armed attack, without clarity about when or where that attack may emerge” or in response to “a state’s or group’s threatening behavior in the absence of credible evidence that the state or group has the capacity and intent to attack.”26

The respective legality of these different theories of when unilateral defensive actions are permissible has long been debated, but new technological developments further complicate the conversation. The speed of cyberattacks suggests that “requiring a state to wait until there is ‘no moment for deliberation’ before responding with force increasingly looks like a requirement that a state stand by and suffer an attack.”27 Nuclear weapons and other weapons of mass destruction raised the possibility of complete State annihilation, which bolsters arguments for the legality of preventive self-defense.28 On the other hand, new surveillance technologies allow a potential victim State to gather more information about an opponent’s capacity and intent, which might “diminish the danger of waiting until the last possible moment to act.”29 These and other technological developments have spurred a flurry of scholarship proposing new frameworks and limiting factors on a State’s right to take unilateral defensive action in an attempt to address this long-standing but increasingly problematic ambiguity.

C.  Introducing New Uncertainty

As it will not necessarily be clear whether new, technologically enabled conduct is permitted, prohibited, or regulated, new technology often creates uncertainty regarding the proper application or scope of existing rules.30 This kind of legal disruption is essentially a variant of the “no vehicles in the park” problem31—because a given law was formulated without awareness of a new technology and the new conduct it permits, its applicability or scope must be clarified.

Questions regarding this kind of legal disruption arise constantly. When might a malicious cyber operation that causes no physical destruction permit a victim State to use force in self-defense?32 Is data targetable as a military object?33 Or consider the fact that modern weapons technology permits the targeting of only part of a building being used for military action, rather than the entire structure.34 Should the entire building still be considered a military objective, as has long been the case, or does this new capability imply that only the section being used for military action can be targeted, with the remainder transformed into a civilian object that needs to be accounted for in the proportionality analysis?35

D.  Undermining Foundational Assumptions

Lastly, new technology can undermine foundational assumptions or tenets of a legal regime, sometimes necessitating a reconceptualization of its aims and purposes.36 In other words, in addition to forcing a reconsideration of whether existing law can serve its regulatory function, new technologies sometimes require “a conceptual inquiry” as to whether the legal regime is accomplishing its purported goals.37 Rosa Brooks, for example, has called for a “radical reconceptualization of national security law and the international law of armed conflict” in light of the (often technologically enabled) erasures of traditional boundaries between “ ‘war’ and ‘nonwar,’ and between ‘national security’ and ‘domestic issues.’ ”38

In the law of armed conflict context, new technology and shifts in political, economic, or societal structures can foster dramatic shifts in the conduct of warfare or even a full “revolution in military affairs.”39 These revolutions necessitate revisions to the law of armed conflict to preserve the balance between permitting parties to a conflict to achieve their military aims and minimizing human suffering.40 Autonomous weapon systems—weapon systems that, “based on conclusions derived from gathered information and preprogrammed constraints, [are] capable of independently selecting and engaging targets”41—are often described as heralding the next such revolution.42

While perhaps not a full revolution in military affairs, increasingly precise weapons stress axiomatic tenets of the law of armed conflict. There is a presumption that State parties to a conflict share similar rights and are subject to similar restrictions under customary international humanitarian law, which has resisted formalizing a principle that the rights and obligations of a party to a conflict might depend on its technological capabilities.43 But this concept is creeping in through purportedly tech-neutral principles, such as the customary obligation to take feasible precautions in an attack.44

As expressed in Additional Protocol I (AP I), State parties are obliged to “take all feasible precautions in the choice of means and methods of attack with a view to avoiding, and in any event to minimizing, incidental loss of civilian life, injury to civilians and damage to civilian objects.”45 It is generally accepted that “feasible” is “an inherently variable concept and the obligation is always context dependent.”46 In other words, “what is feasible will not only be contingent on the environment in which the attack is to be carried out but will also depend on a range of factors including time, terrain, weather, capabilities, available troops and resources, enemy activity and civilian considerations.”47 Accordingly, the “feasibility” criterion could easily be understood to vary from party to party, depending on their respective technological capabilities.

Furthermore, many new weapons technologies permit more precise targeting, which improves military efficiency and reduces collateral damage. These technologies often prompt the question of whether States have a duty to acquire or to use the most advanced technology, at least where it is the most discriminating or least lethal option.48 Often, the answer is some variation on the theme that international humanitarian law does not require the acquisition or use of any particular weapon—but that the application of general principles might nonetheless necessitate the use of more precise weaponry in a certain scenario, either because the feasible precautions requirement applies or because the attack would otherwise be unlawful.49

More precise weapon technologies also risk undermining the humanitarian aims of the law of armed conflict.50 First, if States will be evaluated under stricter standards based on their differing technological capabilities, there is less of an incentive for States to develop or acquire more precise weaponry. Second, and somewhat counterintuitively, more precise weaponry might sometimes result in more civilian deaths, as it might permit attacks that would otherwise be prohibited. As Yoram Dinstein has noted, “When a sledgehammer is excluded by [the law of armed conflict] owing to the expectation of ‘excessive’ injury/damage to civilians/civilian objects compared to the anticipated military advantage . . . the availability of a scalpel may open the legal door for an attack at a lawful target.”51

E.  Overlapping Categories

None of these categories of legal disruption are mutually exclusive. Quite the contrary: the same technology often causes different kinds of legal disruptions in different legal regimes.52

Take 3D printers, which allow individuals and companies to create items through additive manufacturing.53 Basic versions intended for home use are already available, and they and their industrial cousins have been used to create artificial bones, flexible casts, inexpensive invisible braces, Van Gogh reproductions, pedestrian bridges, and innumerable custom doo-dads. As 3D printers become better and cheaper, they are likely to cause multiple kinds of legal disruptions. They create uncertainty regarding the appropriate application of existing rules. Who is liable if a 3D printed gun explodes upon use, harming you? You, because you printed it? The manufacturer of the printer? The individual or team who wrote the file you used to print the weapon? The platform that hosted or shared that file? Or take a step back—what laws apply to the gun’s creation, ownership, and transfer? Do U.S. citizens have a Second Amendment right to 3D print their own guns? It is also entirely possible that 3D printers will undermine foundational assumptions in a legal regime. Any legal system that presumes it is possible to control a dangerous item by controlling the manufacturer or point of sale—say, an arms control treaty or certain intellectual property laws—will suddenly be rendered impotent.

Thus, while much of this chapter focuses on how new weapons create legal disruptions in international humanitarian law, it is important to keep in mind that a weapon that creates one kind of legal disruption in one area of international humanitarian law might cause a very different kind of disruption in others—or in international human rights law, the law of the sea, space law, or another legal regime.

Again, most new technologies are not legally disruptive. Parking restrictions will apply equally to human-driven and self-parking cars; prohibitions on guns in schools will govern both industrial-made and 3D-printed guns. There are, however, a few new technologies that permit new forms of conduct or generate new negative externalities, thereby changing how law is created or used, highlighting existing contradictions or introducing new uncertainty in existing rules, or undermining fundamental assumptions of legal regimes. These legal disruptions require some form of resolution, either through an authoritative new interpretation of existing rules, a revision of existing law, or the creation of entirely new regulations.

III.  Are New Regulations Needed?

Whether new regulations are needed to address the shifts caused by a new technology will depend in part on which kind of legal disruption it generates.54 New uncertainty in applying extant rules can usually be addressed through new interpretations; exposed contradictions and altered assumptions will often require explicit revisions or the creation of entirely new law.55

Granted, these are all idealized responses. In reality, of course, States interested in promoting a certain rule will default to advancing a new interpretation, which is far cheaper, quicker, and more reliable than attempting to negotiate or amend a treaty, establish the existence of new customary international law, or seek an Advisory Opinion from the International Court of Justice. As a result, issues that would be best solved by an explicit revision or the creation of entirely new law will often first be addressed with new interpretations.

A.  New Interpretations

Most of the time, existing law can be applied to new technologies and the conduct they enable without much confusion;56 other times, a new interpretation is needed to clarify whether and how the existing law applies to a new situation. Recall the “no vehicles in the park” problem. A prohibition on “vehicles” could easily be applied to electric cars, regardless of whether the rule was created before electric cars were popularly available. It is less clear if the prohibition would apply to an ultralight single-seat aircraft, requiring an interpreter to consider the language and purpose of the rule.57 On one hand, the ultralight does not present the same pedestrian threat associated with a land-based vehicle and so should not be banned; on the other, it might create excessive noise pollution and therefore should be barred.

The success of a new interpretation will depend on the authority of the interpreter, the specificity of the existing law, and the appropriateness of the analogy employed in the interpretation.

1.  The Importance of the Interpreter

Ideally, clarifications of how existing law applies to new, tech-enabled conduct would be made by an authoritative interpreter. Most domestic legal regimes formally identify individuals or entities who play this role. At the international level, however, there are many would-be interpreters with varying claims to legitimacy. States necessarily construe the law in taking and justifying actions; international tribunals elucidate the law in deciding cases;58 international organizations pronounce on their understanding of what the law is;59 scholars and members of civil society make various proposals about what the law is or should be.60 In this legal environment, the legitimacy of a new interpretation will usually depend on whether the relevant “audience”—a similar mixture of States, international organizations, civil society, and other interested parties—accepts it.61

2.  The Tech-Neutral/Tech-Specific Spectrum

The extent to which existing law can be stretched will often depend on how tech-specific or tech-neutral it is—the more the law regulates a specific technology or a problem relevant to a moment in time, the less likely it is for expansive interpretations to pass the laugh test.

Relatively tech-neutral laws are often celebrated as “future-proof,”62 and they certainly have many benefits: relatively tech-neutral rules are less likely to create arbitrary distinctions and to be under-inclusive. For example, despite once-inconceivable technological developments, customary international humanitarian law remains relevant: “[t]he requirement that an attack discriminate between lawful targets and unlawful targets applies equally to swords and laser beams.”63

However, tech-neutral rules risk overinclusiveness and dangerous ambiguity: “[w]hile law of war principles have proved enduring and flexible guides to the lawfulness of weapons, their ambiguity and abstract content [has] greatly limited their regulatory effect, as well as their predictive value for advances in weapons law.”64 The proposed “meaningful human control” standard for autonomous weapon systems faces this problem, as there is currently no consensus as to what it would require.65 Indeed, the ambiguous phrase might even be understood in a way that conflicts with and undermines older, established targeting norms, ultimately causing more civilian death.66

Sometimes, relatively tech-specific rules will be preferable to ensure that a particular technology is used (or not used) in a particular way. There are numerous relatively tech-specific rules governing weapons. Some are found in customary international law, such as the prohibition on the use of poison.67 Most, however, are expressed in treaties, which might prohibit the development, use, or transfer of specific weapons.68 These tech-specific treaties have enjoyed varying degrees of success. Some—such as the ban on “laser weapons specifically designed, as their sole combat function or as one of their combat functions, to cause permanent blindness to unenhanced vision”—are perfectly successful, if success is measured in a lack of recorded violations. Others—such as the prohibition on the use of chemical weapons—are generally successful, if success is measured in general ratification,69 outcasting of violators,70 and elevation to generally applicable customary international law.71 Still others have failed miserably—the prohibition on the use of crossbows was stillborn,72 the ban on aerial bombardment did not survive the invention of the airplane,73 and restrictions on the use of submarines have proven largely unenforceable.74

However, the more tech-specific the rule, the more difficult it will be to justify applying it to a new and different technology. Indeed, William Boothby argues that it is fundamentally illegitimate to stretch weapon-specific prohibitions to subsequently developed weaponry, given that States only create and join weapon bans after “careful, even painstaking, scrutiny” of the definitional text to ensure that “the classes or descriptions of weapon to which the negotiated provisions will apply are clearly and unambiguously defined.”75 Well-meaning attempts to extend a prohibition on one class of weapons to another might actually undermine the power of weapon ban regimes generally: if such treaties are viewed as overly flexible, States wary of incurring unexpected and unwanted future legal obligations will be more reluctant to join them.

3.  The Role of Law-by-Analogy

New interpretations of existing regulations are often grounded in analogical reasoning. Analogical reasoning is a critical legal tool, with a number of attendant benefits: analogies help make new kinds of technology accessible, allow for the application of existing law, and can help identify risks or solutions. The combination of relatively tech-neutral rules and analogical reasoning minimizes the need for explicit revisions or entirely new law.76

However, there are a number of drawbacks to analogical reasoning. First, there are always multiple analogies for a new technology, and it is important to recognize that different analogies advance different regulatory narratives.77 Consider the various analogies for the pervasive global communication technology called the internet. A “World Wide Web” suggests an organically created common structure of linked individual nodes, which is presumably beyond regulation. The “Information Superhighway” emphasizes the import of speed and commerce and implies a nationally funded infrastructure subject to federal regulation. Meanwhile, “cyberspace” could be understood as a completely new and separate frontier, or it could be viewed as yet one more kind of jurisdiction subject to property rules and State control.78

Second, analogies are often misleading, in that they fail to capture a critical characteristic or imply the existence of a trait that doesn’t exist. “Cyberspace” can be conceived of as a global commons, akin to outer space or international waters—but this ignores how much of the infrastructure upon which cyberspace depends is owned or controlled by private actors and governments. Meanwhile, when viewed as another kind of territory, “cyberspace” suggests the possibility of boundaries, fences, “Great Firewalls,” and other kinds of delimitation that misrepresent how data actually flows.79

Finally, analogies limit our ability to understand the possibilities and limitations of new technology. I have discussed this problem previously in the context of autonomous weapon systems:

Analogies are also inherently constraining, in that they restrict our ability to think imaginatively about a new technology. Consider the use of the term “driverless cars” to describe autonomous vehicles. The inherent analogy normalizes something new and dangerous, but it also restricts our understanding and imagination. There is no reason to think autonomous vehicles will look or operate anything like existing cars, just as early cars did not look or operate like horseless carriages. An autonomous vehicle need not have a steering wheel or other means of human interaction with the system. And conceiving of autonomous vehicles as driverless cars locks one into a host of existing assumptions, instead of allowing for more imaginative conceptions of what the technology might permit. For example, rather than being individually owned and operated property, autonomous vehicles could operate as connected nodes on a “smart highway” or as a leasable service. Similarly, thinking of autonomous weapon systems as a single, independent, embodied entity—be it a weapon, combatant, child, or animal—prevents us from anticipating what other forms they might take.80

Most kinds of uncertainty raised by tech-enabled conduct can be resolved by analogical reasoning, ideally by an authoritative interpreter. In some cases, however, it will not be possible to credibly interpret existing law to apply to a new scenario. (This is sometimes referred to as the law “running out.”) In such cases, it will be necessary to explicitly revise existing rules or create entirely new ones.81

B.  Explicit Revisions

It can be relatively easy—at least with the benefit of hindsight—to identify situations when the law needs to be explicitly amended to address new, technology-enabled conduct.82 For example, many new weapons technologies minimize the number of on-the-ground troops needed in an armed conflict. In minimizing harm to soldiers, these new technologies also reduce one of the few remaining incentives for the U.S. legislature to check presidential warmongering—popular outrage over American deaths—thereby contributing to the executive’s usurpation of the U.S. constitutional war power.83 Furthermore, the U.S. War Powers Resolution, which was intended to force a legislative review of executive uses of force, relies on “armed forces” and “hostilities” to trigger a reporting requirement.84 For the Resolution to accomplish its original goal, Congress would need to amend the statute to address technological developments that allow presidents to evade the reporting requirement by using drones, cyber operations, and other new technologies to engage in activities that do not require troops and therefore fall short of the “hostilities” threshold.85

However, in some situations, a revision intended to stretch existing law to cover new conduct might actually undermine its aim. For example, autonomous weapon systems have the potential to commit serious violations of international humanitarian law without any individual acting intentionally or recklessly.86 As intentionality or recklessness is the mental state required for a war crimes prosecution, absent such willful action, no one can be held criminally liable under existing law.

Some have suggested that we address this problem by stretching the concept of command responsibility to apply to the actions of autonomous weapon systems.87 “Command responsibility” or “superior responsibility” suggests that “a superior may be liable if she exercises effective control over a subordinate, knows of or has reason to know of the subordinate’s actual or intended criminal acts, and fails to take necessary and reasonable measures to prevent or punish them.”88

But the concept of command responsibility “grew from a desire to address a particular kind of guilt: the failure to act to prevent a war crime or the failure to deter others from acting similarly by punishing those who do commit war crimes.”89 Because it was never intended to create a fully independent source of individual criminal liability—it is premised on the assumption that a subordinate intends to or has committed a war crime, for which that individual can be held directly criminally liable—command responsibility cannot be easily applied to the acts of an autonomous weapon system. To do so, the mental element required for a war crime would need to be expanded to include negligence. While not completely unprecedented,90 holding individuals criminally liable for negligence in armed conflict risks overcriminalization, which in turn would weaken international criminal law’s compliance pull and moral legitimacy.91 This cure may be worse for the legal regime than the original ailment.

When existing law cannot be interpreted or modified to sufficiently address the problems or negative externalities created by the new, technologically enabled conduct, new law is needed.92

C.  New Law

New technology can sometimes permit entirely new forms of conduct or generate new negative externalities that cannot be addressed through applying or revising the existing law, primarily because the existing rules don’t anticipate the possibility of the new conduct. While students in a classroom could always pass notes or daydream, smartphones and laptops introduce the possibility of browsing social media sites, shopping, and playing interactive games in the middle of class. I ban smartphones in my classroom, because I believe the negative externalities associated with their presence far outweigh their potential benefits. (And while I don’t prohibit the use of laptops, I do remind students that human beings generally retain information far better when we are actively taking notes than when transcribing a conversation—and that we retain almost nothing when we are simultaneously shopping on Amazon.)

Answering the question of whether new law is needed will depend on how far the existing law can be stretched, on whether the new technology permits new problematic conduct or externalities that should be regulated, and on whether foundational assumptions of a legal regime are being altered to the extent that clarifications or readjustments are necessary. Some of these questions can only be answered in hindsight; others can be anticipated. But in trying to determine whether new regulations are needed, one must also consider the question of whether those new regulations are needed now.

IV.  Are New Regulations Needed Now?

The question of whether new regulations are needed is necessarily accompanied by the question of when new regulations are needed. From an ex ante perspective, answering this question will often depend on what risks can be foreseen and how reversible potential harms will be. When the stakes are low, a wait-and-see approach avoids devoting resources to creating law that might be inapposite or quickly rendered obsolete. When the stakes are high, the precautionary principle favors a preemptive ban; when a ban is unlikely to be successful or would result in other problems, other forms of proactive regulation may be the ideal means of directing technological development and use.

A.  The Wait-and-See Approach

At one end of the spectrum is the “wait-and-see” approach: the idea that the regulation of new technologies should be postponed until specific issues arise, especially when the potential social and political impacts of that technology are not yet well-understood. This approach avoids devoting limited resources to solving problems that never manifest, as occurred with the weather modification and cloud seeding treaties,93 and it doesn’t risk limiting potentially beneficial innovations through overbroad rules.94 It is a common response to new weaponry, likely because States want to understand what capabilities a new weapon permits before voluntarily limiting which ones they can employ.95

Nor does the wait-and-see approach imply a law-free zone: most of the new human conduct enabled by new technologies will be governed by existing law, applied through analogical reasoning. If and when problems arise that cannot be addressed by current law, there may be a need for explicit revisions or entirely new regulations.

However, the laissez-faire wait-and-see approach does have two main drawbacks. First, somewhat counterintuitively, it might foster hypersensitive rulemaking—a law formed in response to an unusual accident, rather than to how a given technology is regularly used—which tends to result in overbroad laws. Second, “it foregoes a precious opportunity to use law responsibly to channel the development” of a new technology.96 As Jonathan Zittrain has observed, “The procrastination principle rests on the assumption that most problems . . . can be solved later or by others.”97 This will not always be the case.

B.  The Precautionary Principle and Legal Bans

In contrast to the “wait-and-see” approach, the precautionary principle suggests that speed bumps and roadblocks are beneficial, and sometimes even necessary to prevent irreversible harm. With certain technologies, the wait-and-see approach might result in difficult-to-remedy damage, such as overfishing, pollution, or the widespread use of a problematic new kind of weapon.98

The precautionary principle takes different forms. Weaker versions put a thumb on the scale in favor of caution and careful study when considering the use of technologies that carry a risk of serious, irreversible, or catastrophic consequences. A moderate version suggests that, where a new, tech-enabled activity might carry such a risk, the burden of proof ought to shift to the party that wants to undertake that activity.99 In its strongest form, the precautionary principle suggests that, if there is a plausible risk of significant harm, there is a strong presumption in favor of enacting regulatory controls, possibly even a ban on a technology or use of that technology.100 In other words, it counsels for prohibiting any risky activity until it can be proven safe.

Of course, a decision to ban a new technology is not without costs: often, prohibiting the use of a new technology also means forgoing its potential benefits. In other words, a decision to ban a new technology is a decision to maintain the status quo and its associated harms.101 For example, Dunlap has argued that bans can “incentivize warfighters to resort to ‘legal’ but more destructive weaponry,” as has arguably occurred with the banning of non-lethal chemical weapons.102

Drawbacks acknowledged, certain risks or problematic conduct associated with a new weapons technology may be so great that the precautionary principle weighs in favor of a ban.103 But even when there is good reason to ban a given technology, it may not be possible to do so. Along with a host of historical, political, and social factors, a technology’s architectural characteristics will render it more or less susceptible to regulation.104 Easily replicated weapons will be more difficult to ban than weapons that depend on difficult-to-acquire materials or expertise; easily weaponized dual-use technologies will be more difficult to control than technologies with no civilian purpose.

When bans are unlikely to be successful, proponents of a precautionary approach would do well to consider two alternative options. First, moratoriums on the use of a given technology create breathing room to better understand the benefits and risks associated with a new technology before deploying, banning, or otherwise regulating it.105 Alternatively, it may be possible through proactive regulations to limit the use or transfer of weapons that cannot be completely banned.

C.  Proactive Regulations

Proactive regulation attempts to walk the line between the passive wait-and-see approach and a complete ban by selectively addressing specific risks associated with a new technology. Tailored regulations can help avert some of the largest (predictable) dangers associated with a new technology, while still leaving room for beneficial innovation.

Eric Talbot Jensen is a vocal proponent of proactive regulation in the international humanitarian law context.106 He has argued that we are currently “at a point in history where we can see into the future of armed conflict and discern some obvious points where future technologies and developments are going to stress the current [law of armed conflict].”107 Rather than awaiting the reactive process of international humanitarian lawmaking, Jensen argues that we need to identify these tensions and attempt to correct them in advance: “[J]ust as military practitioners work steadily to predict new threats and defend against them, [law of armed conflict] practitioners need to focus on the future of armed conflict and attempt to be proactive in evolving the law to meet future needs.”108

For example, under the current wait-and-see approach to the developing law of cyberspace, States are not being held accountable for most of their harmful cyber operations. While often invasive and injurious, cyber operations rarely meet the armed attack threshold justifying recourse to defensive uses of force, and categories created to describe other unlawful State acts in physical space—such as violations of sovereignty and intervention—do not translate well into cyberspace, where most cyber operations are secret and difficult to attribute. Nor are States extending existing definitions of unlawful acts permitting countermeasures to malicious cyber operations (likely to avoid creating precedent restricting their own activities). Rather than allowing these injurious cyber operations to continue unchecked, I propose we term them “international cybertorts”—acts that employ, infect, or undermine the internet, a computer system, or a network and thereby cause significant transboundary harm—and hold States liable for associated costs.109 Not only would this create a non-escalatory deterrent, it also would preserve a bounded gray zone for State experimentation.

Or consider autonomous weapon systems. They offer a number of potential benefits and raise a number of concerns, sparking an ongoing debate as to whether they should be banned.110 Ban critics tend to take a wait-and-see approach, arguing that existing legal regulations are sufficient; meanwhile, ban advocates think that a blanket prohibition is necessary to avoid the potential harms associated with increasingly autonomous weapon systems. One point of contention is the possibility, mentioned previously, that autonomous weapon systems may malfunction and commit an action that appears to be a serious violation of international humanitarian law without anyone being able to be held accountable under existing law. Instead of relying on this legal lacuna to justify a ban on all such weapons, however, or blithely assuming the law will catch up, I argue that we must proactively adapt existing law to hold States strictly liable for the significant harms caused by their autonomous weapon systems.111

One critique of the proactive regulation approach is that States are unlikely to voluntarily forgo experimenting with a potentially beneficial technology because of a hypothetical, possibly unlikely negative consequence. But while this lessens the likelihood of a ban being negotiated and effective, it need not mean that regulation is off the table. States may be willing to regulate what they are not willing to relinquish. And certainly there is a history of States tying their hands with regard to both known and unknown technologies: there have been a few preemptive weapons bans,112 and tech-neutral customary prohibitions on weapons that are inherently indiscriminate or that cause superfluous injury or suffering have encouraged research and development of more precise weapons.

As is nearly always the case when crafting technological regulations, there is no one-size-fits-all answer. Instead, given the differing benefits and drawbacks of these approaches, which is best will depend on the new technology at issue, the conduct it enables, and the risks it raises in the context of the overarching legal regime.

V.  Conclusion

A common refrain is that the law cannot keep up with new technology. This is false on its face: instead, “most law-of-war rules apply most of the time to most new technologies.”113 Law is constantly shaping what new technology is developed and how it is used by incentivizing certain actions and discouraging others. The law of armed conflict—like other legal regimes—directs the development of new technology and evolves in response to new technological developments. Snapshots of a moment in time that focus on a particularly legally disruptive technology do not capture the full range of the iterative relationship between law and technology.

Of course, as with any development that alters how human beings interact with each other, some technological innovation is legally disruptive—either because it changes how law is created or used, because it highlights existing contradictions or introduces new uncertainty, or because it undermines a fundamental assumption of a legal regime. To address these disruptions, the law must evolve—either through adaptive interpretations, explicit revisions, or the creation of new rules.

Footnotes:

1.  Executive Director, Information Society Project; Research Scholar and Lecturer in Law, Yale Law School. I am grateful to BJ Ard and Jack Balkin for years of ongoing conversations about the interactions between law and technology, which inform this entire piece; to Eric Talbot Jensen for the impetus to get these ideas down on paper; and to BJ Ard for his characteristically thoughtful and precise textual edits. Thanks also to Douglas Bernstein, Ignacio Cofone, Hannah Bloch-Wehba, and participants in the ISP Fellows Writing Workshop and the Lieber Institute for Law and Land Warfare Workshop for helpful suggestions and clarifying questions.

2.  See Kristen E. Eichensehr, Cyberwar and International Law Step Zero, 50 Tex. Int’l L.J. 357, 358 (2015) (terming this the “ ‘international law step zero’ question” in the context of evaluating whether existing international law applies to new weapons).

3.  See Colin Picker, A View from 40,000 Feet: International Law and the Invisible Hand of Technology, 23 Cardozo L. Rev. 149, 184–87, 203–05 (2001) (discussing timing issues and outlining additional questions for policymakers to consider when crafting international regulations for new technologies).

4.  Often, the same questions will reoccur at different “levels”: one might first ask whether the use of a given technology poses a serious problem when deciding whether to regulate it, and again when deciding when to regulate it, and again when deciding how to regulate it. Cf. Jack Balkin, The Crystalline Structure of Legal Thought, 39 Rutgers L. Rev. 1 (1986).

5.  Cf. Lawrence R. Helfer & Ingrid B. Wuerth, Customary International Law: An Instrument Choice Perspective, 37 Mich. J. Int’l L. 563 (2016) (discussing when States might prefer treaty law, soft law, and customary international law, based on their respective characteristics).

6.  Sean Watts, Regulation-Tolerant Weapons, Regulation-Resistant Weapons and the Law of War, 91 Int’l L. Stud. 540 (2015).

7.  Cf. Lawrence Lessig, Code Version 2.0 120–37, 340–45 (2006).

8.  Cf. Lyria Bennett Moses, Why Have a Theory of Law and Technological Change?, 8 Minn. J.L. Sci. & Tech. 589, 598 (2007) (“If . . . law and technology can be thought of as a series of related problems that law frequently confronts in situations where technology changes, then the focus on law and technology as an area of study is justified. Recognizing the similarities between problems arising in different technological contexts creates the possibility of learning from the consequences of past legal responses to technological change.”).

Furthermore, regardless of whether the problems raised by new technologies are actually different in kind, judges and other legal interpreters often treat them as such. Id. at 600 (“Judges usually feel more comfortable updating the law in light of technological change as compared to social change, perhaps because it is more easily perceived as objective.”).

9.  Donald Schön, Technology and Change 1 (1967). Under this definition, the following would not qualify as “technologies,” though they have been described as such elsewhere: a change in the relative costs of input; applied sciences; and law itself. Bennett Moses, supra note 8, at 591–92.

10.  Bennett Moses, supra note 8, at 596.

11.  Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513, 551–52 (2015).

12.  Meg Leta Jones, Does Technology Drive Law?: The Dilemma of Technological Exceptionalism in Cyberlaw, 2018 J.L. Tech. & Pol’y 249 (highlighting how technologies that were considered legally disruptive in the United States were not similarly disruptive in other States’ domestic legal systems); Margot Kaminski, Legal Disruption (on file with author) (discussing how a particular technology causes different kinds of legal disruption in different areas of U.S. law).

13.  Andrew T. Guzman & Timothy L. Meyer, International Soft Law, 2 J. Legal Analysis 171, 174 (2010) (defining “soft law” as “nonbinding rules or instruments that interpret or inform our understanding of binding legal rules or represent promises that in turn create expectations about future conduct”).

14.  Rebecca Crootof, Jurisprudential Space Junk: Treaties and New Technologies, in Resolving Conflicts in the Law: Essays in Honour of Lea Brilmayer 106 (Chiara Giorgetti & Natalie Klein eds., 2019).

15.  See Anthea Roberts & Sandesh Sivakumaran, Lawmaking by Nonstate Actors: Engaging Armed Groups in the Creation of International Humanitarian Law, 37 Yale J. Int’l L. 107 (2011).

16.  See John King Gamble & Charlotte Ku, International Law—New Actors and New Technologies: Center Stage for NGOs?, 31 Law & Pol’y Int’l Bus. 221, 249–51 (2000) (comparing the minimal NGO involvement in the drafting of UNCLOS with the critical role they played in the Mine Ban Treaty and attributing the difference to modern communications technologies).

17.  See Harold Hongju Koh, Address, The 1998 Frankel Lecture: Bringing International Law Home, 35 Hous. L. Rev. 623, 656–63 (1998) (discussing the role of individual, nonstate actors with regard to the Mine Ban Treaty).

18.  Charles J. Dunlap, Jr., Lawfare Today: A Perspective, 2008 Yale J. Int’l Aff. 146, 146; see also Charles J. Dunlap, Jr., Law and Military Interventions: Preserving Humanitarian Values in 21st Century Conflicts (Carr Center for Human Rights, John F. Kennedy School of Government, Harvard University, working paper, 2001).

19.  Dunlap, Lawfare Today, supra note 18, at 147–48 (quoting W. Michael Reisman & Chris T. Antoniou, The Laws of War xxiv (1994)).

20.  Id. at 148.

21.  Cf. Int’l Law Comm’n, Fragmentation of International Law: Difficulties Arising from the Diversification and Expansion of International Law, U.N. Doc. A/CN.4/L.682 (Apr. 13, 2006).

22.  U.N. Charter art. 51 (“Nothing in the present Charter shall impair the inherent right of individual or collective self-defence if an armed attack occurs against a Member of the United Nations, until the Security Council has taken measures necessary to maintain international peace and security.”).

23.  Letter from Daniel Webster, U.S. Secretary of State, to Lord Ashburton, British Plenipotentiary (6 Aug. 1842), in 2 John Bassett Moore, A Digest of International Law § 217, at 412 (1906).

24.  Ashley Deeks, Taming the Doctrine of Preemption, in The Oxford Handbook of the Use of Force in International Law 661, 662–63 (Marc Weller ed., 2015).

25.  The most commonly cited example of a legitimate preemptive strike occurred during the Six-Day War, when Israel launched a surprise attack against troops amassing on its borders.

26.  Deeks, supra note 24, at 663.

27.  Id.

28.  Id. at 669–72; cf. Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, 1996 I.C.J. Rep. 226, 266 (July 8).

29.  Deeks, supra note 24, at 677.

30.  Cf. Bennett Moses, supra note 8, at 595 (noting that, because “[e]xisting rules were not formulated with new technologies in mind,” “some rules in their current form inappropriately include or exclude new forms of conduct”).

31.  H.L.A. Hart, Positivism and the Separation of Law and Morals, 71 Harv. L. Rev. 593, 607 (1958).

32.  Oona A. Hathaway, Rebecca Crootof, Philip Levitz, Haley Nix, Aileen Nowlan, William Perdue & Julia Spiegel, The Law of Cyber-Attack, 100 Calif. L. Rev. 817 (2012).

33.  Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations 434, 437 (Rule 99) (Michael N. Schmitt ed., 2017) [hereinafter Tallinn Manual 2.0] (discussing contrasting views regarding whether data can be a lawful target of an attack); Int’l L. Ass’n Study Group on the Conduct of Hostilities in the 21st Century, The Conduct of Hostilities and International Humanitarian Law: Challenges of 21st Century Warfare, 93 Int’l L. Stud. 338–40 (2017) [hereinafter Challenges of 21st Century Warfare] (similar); see also Rebecca Crootof, International Cybertorts: Expanding State Accountability in Cyberspace, 103 Cornell L. Rev. 565, 596, n.134 (2018) (suggesting that States should be held liable for their data destruction cyber operations).

34.  Challenges of 21st Century Warfare, supra note 33, at 322, 334.

35.  The Study Group on the Conduct of Hostilities in the 21st Century disagreed on the proper answer to this question. Id. at 335.

36.  Bennett Moses, supra note 8, at 595.

37.  Duncan Hollis, Setting the Stage: Autonomous Legal Reasoning in International Humanitarian Law, 30 Temple Int’l & Comp. L.J. 1, 2 (2016).

38.  Rosa Ehrenreich Brooks, War Everywhere: Rights, National Security Law, and the Law of Armed Conflict in the Age of Terror, 153 U. Pa. L. Rev. 675, 677, 747 (2004).

39.  See, e.g., Martin Van Creveld, Technology and War: From 2000 B.C. to the Present (1991) (dividing military history into four eras: the “Age of Tools,” the “Age of the Machine,” the “Age of Systems,” and the “Age of Automation”). While there is general agreement that such revolutions occur, there is little consensus as to how many have happened or whether we are in one now.

40.  See, e.g., Yoram Dinstein, The Conduct of Hostilities Under the Law of International Armed Conflict 11 (2d ed. 2010); Michael N. Schmitt, Military Necessity and Humanity in International Humanitarian Law: Preserving the Delicate Balance, 50 Va. J. Int’l L. 795 (2010).

41.  Rebecca Crootof, The Killer Robots Are Here: Legal and Policy Implications, 36 Cardozo L. Rev. 1837, 1842 (2015).

42.  See, e.g., An Open Letter to the United Nations Convention on Certain Conventional Weapons, Future of Life Institute (2017), https://futureoflife.org/autonomous-weapons-open-letter-2017; Maxim Worcester, Autonomous Warfare—A Revolution in Military Affairs (2015), https://www.files.ethz.ch/isn/190160/340_Worcester.pdf.

43.  For the purposes of exploring the impact of increasingly precise warfare, I am focusing solely on State parties to a conflict. However, it should be acknowledged that technological advances have now rendered some nonstate armed groups stronger military forces than their own host states, creating a related but distinct conceptual problem for international humanitarian law.

44.  Rule 15. Precautions in Attack, Int’l Comm. Red Cross, http://www.icrc.org/customary-ihl/eng/docs/v1_rul_rule15 (last visited Sept. 25, 2017).

45.  Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I) art. 57(2)(a)(ii), adopted June 8, 1977, 1125 U.N.T.S. 3.

46.  Challenges of 21st Century Warfare, supra note 33, at 373.

47.  Id. (citing sources).

48.  See, e.g., Oren Gross, The New Way of War: Is There a Duty to Use Drones?, 67 Fla. L. Rev. 1 (2015) (considering this question in the context of drone warfare); Duncan Hollis, Re-thinking the Boundaries of Law in Cyberspace: A Duty to Hack?, in Cyberwar: Law and Ethics for Virtual Conflicts (J. Ohlin et al. eds., 2015) (same, with regard to cyber operations); Christopher B. Puckett, In This Era of “Smart Weapons,” Is a State Under an International Legal Obligation to Use Precision-Guided Technology in Armed Conflict?, 18 Emory Int’l L. Rev. 645 (2004) (same, with regard to precision-guided missiles).

49.  See, e.g., Challenges of 21st Century Warfare, supra note 33, at 383–84.

50.  Dinstein, supra note 40, at 11; Schmitt, supra note 40.

51.  Dinstein, supra note 40, at 169.

52.  Kaminski, supra note 12.

53.  Additive manufacturing is the process of building a three-dimensional object by depositing successive layers of material in a desired shape.

54.  Kristen Eichensehr has noted that the question of whether new regulations are needed for new weapons technology “seem[s] to occur more frequently in the international sphere,” and has posited four reasons why this might be so. Eichensehr, supra note 2, at 368. First, the fact that much of international law is grounded in state consent “triggers a perennial examination of the applicability and scope of coverage of international law that provides a framework for and nudge toward consideration of the possibility that no international law exists to address a particular issue.” Id. at 369. Second, the relative lack of checks on state behavior might leave states feeling freer to revisit the question. Id. Third, the frequent development of new weapons technologies with new effects prompts debate regarding the applicability of existing law. Id. at 370–71. Fourth, several distinct parties—States, militaries, nongovernmental organizations, and commentators—have different incentives for regularly questioning whether the existing law is sufficient. Id. at 371–72.

55.  Technologies that change how law is created or employed operate on a slightly different plane: this kind of legal disruption is worth acknowledging, but unless a given kind of lawmaking is problematic, there is little need to take any action.

56.  Bennett Moses, supra note 8, at 595–97.

57.  Cf. Lon L. Fuller, Positivism and Fidelity to Law: A Reply to Professor Hart, 71 Harv. L. Rev. 630, 662–64 (1958).

58.  See, e.g., Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, 1996 I.C.J. Rep. 226, 266 (July 8).

59.  See, e.g., Customary International Humanitarian Law Database, Int’l Comm. Red Cross, https://ihl-databases.icrc.org/customary-ihl/eng/docs/home (last visited Sept. 25, 2017).

60.  See, e.g., Tallinn Manual 2.0, supra note 33.

61.  Cf. J.M. Balkin & Sanford Levinson, Interpreting Law and Music: Performance Notes on “The Banjo Serenader” and “The Lying Crowd of Jews,” 20 Cardozo L. Rev. 1513, 1519–20 (1999) (discussing the role of the audience in determining whether an interpretation of a text is “authentic or faithful”).

62.  Cf. John B. Alexander, Future War: Non-lethal Weapons in Twenty-First-Century Warfare 198–99 (1999) (arguing that tech-specific weapons treaties are inherently flawed because they regulate a specific technology, rather than undesired behavior); Charles J. Dunlap, Jr., Guest Post: To Ban New Weapons or Regulate Their Use?, Just Security (Apr. 3, 2015, 12:24 PM), https://www.justsecurity.org/21766/guest-post-ban-weapons-regulate-use/ (emphasis in original) (quotation marks omitted) (same).

63.  Crootof, Jurisprudential Space Junk, supra note 14, at 16. Other relatively tech-neutral customary international humanitarian laws include the prohibitions against (1) the use of weapons that cause superfluous injury or suffering, see, e.g., Rule 70. Weapons of a Nature to Cause Superfluous Injury or Unnecessary Suffering, Int’l Comm. Red Cross, http://www.icrc.org/customary-ihl/eng/docs/v1_rul_rule70 (last visited Sept. 5, 2017); (2) weapons that are inherently indiscriminate, see, e.g., Rule 71. Weapons That Are by Nature Indiscriminate, Int’l Comm. Red Cross, https://www.icrc.org/customary-ihl/eng/docs/v1_rul_rule71 (last visited Sept. 5, 2017); and (3) possibly also weapons that are intended to cause serious damage to the natural environment, see, e.g., Rule 45. Causing Serious Damage to the Natural Environment, Int’l Comm. Red Cross, https://www.icrc.org/customary-ihl/eng/docs/v1_rul_rule45 (last visited Sept. 5, 2017) (discussing state practice supporting and limiting this claim).

States also have a customary obligation to conduct legal reviews of new weapon designs, to ensure that all fielded weapons can be used in compliance with international humanitarian law. See Kathleen Lewand et al., A Guide to the Legal Review of New Weapons, Means and Methods of Warfare: Measures to Implement Article 36 of Additional Protocol I of 1977, 88 Int’l Rev. Red Cross 931, 933 (2006) (“The faithful and responsible application of its international law obligations would require a State to ensure that the new weapons, means and methods of warfare it develops or acquires will not violate these obligations.”); see also Additional Protocol I, supra note 45, art. 36 (codifying this requirement).

64.  Watts, supra note 6, at 542.

65.  Rebecca Crootof, A Meaningful Floor for “Meaningful Human Control,” 30 Temple Int’l & Comp. L.J. 53 (2016).

66.  Id. at 61–62 (“Any definition of meaningful human control that would prioritize human control at the cost of increased risk to soldiers and civilians must be rejected outright.”).

67.  See Watts, supra note 6, at 562.

68.  See, e.g., Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on Their Destruction art. 1, Jan. 13, 1993, S. Treaty Doc. No. 103-21, 1974 U.N.T.S. 45 (prohibiting the development, production, acquisition, stockpiling, retention, transfer, and use of chemical weapons).

69.  One hundred and ninety-two states are currently party to the Chemical Weapons Convention. Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on Their Destruction, UN Treaty Collection, https://treaties.un.org/pages/ViewDetails.aspx?src=TREATY&mtdsg_no=XXVI-3&chapter=26&lang=en (last visited Sept. 13, 2017). Israel has signed but not yet ratified the Convention; Egypt, North Korea, and South Sudan have neither signed nor ratified. Id.

70.  Consider the overwhelming negative reaction to Syria’s August 2013 use of chemical weapons, even though Syria was not then a State party to the Chemical Weapons Convention.

71.  See, e.g., Rule 74. Chemical Weapons, Int’l Comm. Red Cross, http://www.icrc.org/customary-ihl/eng/docs/v1_cha_chapter24_rule74 (last visited Sept. 5, 2017).

72.  Crootof, Killer Robots, supra note 41, at 1904.

73.  Id. at 1904–06.

74.  Id. at 1906–07.

75.  William H. Boothby, Weapons and the Law of Armed Conflict 146–47 (2d ed. 2016).

76.  Cf. Eichensehr, supra note 2, at 372–75 (discussing why cyber operations can be mostly governed by existing international humanitarian law).

77.  Rebecca Crootof, Autonomous Weapon Systems and the Limits of Analogy, 9 Harv. Nat’l Sec. J. 51 (2018) (discussing the relative utility of weapons, combatants, child soldiers, and animal combatants as analogies for autonomous weapon systems).

78.  Compare John Perry Barlow, A Declaration of the Independence of Cyberspace (Feb. 8, 1996), available at https://www.eff.org/cyberspace-independence (“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”), with Jonathan H. Blavin & I. Glenn Cohen, Gore, Gibson, and Goldsmith: The Evolution of Internet Metaphors in Law and Commentary, 16 Harv. J.L. & Tech. 265, 280–85 (2002) (discussing various conceptions of the internet as a space subject to zoning, interference, trespass, and sovereignty).

79.  See Hollis, A Duty to Hack?, supra note 48 (discussing problems with applying boundary-based rules in cyberspace).

80.  Crootof, Limits of Analogy, supra note 77, at 80 (citations omitted).

81.  See, e.g., Challenges of 21st Century Warfare, supra note 33, at 337 (noting that the unique characteristics of cyberspace “raise[] the question whether the application of IHL rules can adequately meet the specific humanitarian concerns of cyber warfare”); Eichensehr, supra note 2, at 375–79 (discussing how cyberspace may require some modifications to existing law); Hollis, A Duty to Hack?, supra note 48 (acknowledging the limits of using law-by-analogy in discussing cyber regulation).

82.  Of course, that does not necessarily mean it will be easy to enact those revisions.

83.  See Rebecca Crootof, War, Responsibility, and Killer Robots, 40 N.C. J. Int’l L. 909 (2015).

84.  War Powers Resolution of 1973, Pub. L. No. 93-148, 87 Stat. 555 (codified at 50 U.S.C. §§ 1541–1548 (2006)).

85.  Eric Talbot Jensen, Future War and the War Powers Resolution, 29 Emory Int’l L. Rev. 499 (2015). Granted, the definition of “hostilities” is hotly contested; at least thus far, however, Congress has not challenged executive arguments that various uses of force do not constitute “hostilities” for the purposes of the statute. Id. at 533.

86.  Rebecca Crootof, War Torts: Accountability for Autonomous Weapon Systems, 164 U. Pa. L. Rev. 1347 (2016).

87.  Numerous commentators have argued that this doctrine should be expanded to create individual accountability for the acts of autonomous weapon systems. See, e.g., Christof Heyns, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, ¶ 81, U.N. Doc. A/HRC/23/47 (Apr. 9, 2013) (“[A]mendments to the rules regarding command responsibility may be needed to cover the use of [autonomous weapon systems].”).

88.  Crootof, War Torts, supra note 86, at 1379 n.173 (citing sources).

89.  Crootof, War Torts, supra note 86, at 1378.

90.  Id. at 1382–83.

91.  Id. at 1384–85.

92.  For example, instead of revising command responsibility to encompass negligent actions, I propose holding States accountable for the injurious wrongs associated with employing autonomous weapon systems under a “war torts” regime. Id.

93.  See Picker, supra note 3, at 186–87.

94.  Jonathan Zittrain terms this the “procrastination principle” and argues that it is vital to preserving a system’s generativity. Jonathan Zittrain, The Future of the Internet and How to Stop It 31, 135, 245 (2008).

95.  Watts, supra note 6, at 612 (“At present, a ‘wait and see’ approach seems to prevail with respect to prospective or early regulation of novel military technology.”).

96.  Crootof, War Torts, supra note 86, at 1400; Eric Talbot Jensen, The Future of the Law of Armed Conflict: Ostriches, Butterflies, and Nanobots, 35 Mich. J. Int’l L. 253, 262 (2014) (“[R]elying solely on quick [state] reaction to technological developments ignores the vital signaling role that the [law of armed conflict] plays in the development of state practice.”).

97.  Zittrain, supra note 94, at 31.

98.  Picker, supra note 3, at 186.

99.  See Noah M. Sachs, Rescuing the Strong Precautionary Principle from Its Critics, 2011 U. Ill. L. Rev. 1285.

100.  See Cass R. Sunstein, The Paralyzing Principle, Regulation, Winter 2002–2003.

101.  Id.

102.  Charlie Dunlap, A Better Way to Protect Civilians and Combatants than Weapons Bans: Strict Adherence to the Core Principles of the Law of War, Lawfire (Dec. 3, 2015), https://sites.duke.edu/lawfire/2015/12/03/a-better-way-to-protect-civilians-and-combatants-than-weapons-bans-strict-adherence-to-the-core-principles-of-the-law-of-war-2/.

103.  Weapon bans tend to be most successful when the weapon’s negative impacts outweigh its military utility. Crootof, Killer Robots, supra note 41.

104.  Sean Watts and I have independently conducted historical analyses to identify characteristics common to successful weapons regulations. Id.; Watts, supra note 6.

105.  See, e.g., Heyns, supra note 87, at 21 (recommending that the Human Rights Council call for national moratoria on the “testing, production, assembly, transfer, acquisition, deployment and use” of autonomous weapon systems); see also Declaration (IV, 1), to Prohibit, for the Term of Five Years, the Launching of Projectiles and Explosives from Balloons, and Other Methods of a Similar Nature, July 29, 1899, 32 Stat. 1839, 1 Bevans 270 (prohibiting, for five years, “the launching of projectiles and explosives from balloons, or by other new methods of a similar nature”).

106.  See Jensen, Ostriches, Butterflies, and Nanobots, supra note 96; Eric Talbot Jensen, Future War, Future Law, 22 Minn. J. Int’l L. 282 (2013) (arguing that, although “many advancing technologies are still in the early stages of development and design, the time to act is now” and that “the international community needs to recognize the gaps in the current [law of armed conflict] and seek solutions in advance of the situation”).

107.  Jensen, Ostriches, Butterflies, and Nanobots, supra note 96, at 257.

108.  Id. at 254. For example, the Study Group on the Conduct of Hostilities in the 21st Century was formed “to examine whether the [international humanitarian law] rules governing the conduct of hostilities are sufficient to regulate” new developments in armed conflict. Int’l L. Ass’n, The Conduct of Hostilities Under International Humanitarian Law – Challenges from 21st Century Warfare: Study Group Proposal (2011); see also Challenges of 21st Century Warfare, supra note 33.

109.  Crootof, International Cybertorts, supra note 33.

110.  For a thorough review of the various arguments employed, see Kenneth Anderson & Matthew C. Waxman, Debating Autonomous Weapon Systems, Their Ethics, and Their Regulation Under International Law, in The Oxford Handbook of Law, Regulation and Technology 1097 (Roger Brownsword, Eloise Scotford & Karen Yeung eds., 2017).

111.  Crootof, War Torts, supra note 86.

112.  These include the 1899 prohibition on the use of projectiles intended to disperse asphyxiating gases and the 1995 prohibition on the use of intentionally blinding laser weapons.

113.  Eichensehr, supra note 2, at 359.