Combining Traditional and Progressive Jus ad Bellum Threat Evaluations in Response to Autonomous Weapon Systems
By Major Gregory L. Collins
As was true in previous armed conflicts, this new technology raises
profound questions–about who is targeted, and why; about civilian
casualties, and the risk of creating new enemies; about the legality
of such strikes under U.S. and international law; about accountability
and morality.1
A cloud of tension hovers over the Combatant Command operations center.2
Two adjacent States are quickly destabilizing after a recent natural
disaster. One State is an ally of the United States, while the other
State is a competitor. The operations center team waits for an
international armed conflict to erupt. The influx of third-party
agencies and nongovernmental organizations assisting with the natural
disaster recovery further compounds the confusion on the ground, in
the air, and at sea.
The competitor State uses this chaos as cover to achieve its objectives
in the region. Amidst the disorder, the operations center team must
parse out lawful military objectives within the swarm of State assets,
nongovernmental organization platforms, and civilian objects. A
targeting working group determines criteria for how the Combatant
Commander should evaluate the numerous civilian and military autonomous
systems flying, driving, cruising, and hovering throughout the region.
There is no playbook for evaluating autonomous systems in volatile and
uncertain environments. So, the targeting working group must combine
traditional targeting criteria with a healthy dose of imagination,
an understanding of the intent behind international law, and a focus on the
mission.
Numerous hypothetical scenarios demonstrate why States should combine traditional and progressive jus ad bellum evaluations of autonomous weapon systems (AWS) when assessing the threats posed by these cutting-edge technologies. The capabilities of autonomous weapon systems disrupt the traditional evaluations of State coercive acts under jus ad bellum.3 Now, States may employ AWS to conduct coercive acts that do not justify a use of force response under traditional jus ad bellum evaluations.4
Therefore, this article argues that States should update and adapt their jus ad bellum evaluative frameworks for State coercive acts to apply both 1) a traditional instrument-based evaluation and 2) a progressive consequence-based evaluation when responding to AWS actions.
The rapid expansion of AWS technology will continue to outpace law and policy.5 There are tremendous incentives for both States and non-state actors6 to develop AWS.7 To take advantage of the current limitations of traditional jus ad bellum evaluations, States will deliberately employ AWS technologies whose consequences fall below the customary threshold of an armed attack.8
To keep international law in front of technology, this article provides
a framework for national security law attorneys and policy makers to
overcome these traditional limitations in evaluating AWS coercive acts.
This article proposes that States should simultaneously evaluate AWS employment through both the lens of traditional jus ad bellum instrument-based evaluations9 and the lens of progressive consequence-based evaluations.10
Updating the framework of
jus ad bellum analysis will allow attorneys and policy makers more flexibility in advising commanders on appropriate responses to AWS.
In providing this proposed framework, the article first summarizes the traditional jus ad bellum principles.11 The principle of necessity receives deeper analysis because its dimension of imminence is a key factor for the proposed framework.12 After achieving a common understanding, the article then defines levels of autonomy and outlines the impact of autonomy on traditional notions of jus ad bellum. Next, the article proposes a framework for evaluating threats posed by AWS and criteria to determine whether AWS actions justify a use of force response. Finally, three vignettes illustrate the complexity and nuances surrounding possible employment of AWS by applying the proposed framework. This article concludes that adopting this proposed framework is the best way for the law to stay in front of AWS, even as it advances at breakneck speed.
Jus ad Bellum
The first step in providing a new framework for evaluating AWS actions is to review the most important traditional principles of jus ad bellum. Jus ad bellum defines when States may resort to armed force.13 The Department of Defense (DoD) Law of War Manual
describes the jus ad bellum principles as including: “a competent authority to order the war for a public purpose; a just cause; the means must be proportionate to the just cause; all peaceful alternatives must have been exhausted; and a right intention on the part of the just belligerent.”14
In the aftermath of World War II, the international community collectively created the United Nations (U.N.) in an attempt to regulate State action.15 Article 2(4) of the U.N. Charter prohibits States from “the threat or use of force” against other States.16 Despite the Article 2(4) prohibition, Article 51 of the U.N. Charter also authorizes the “inherent right of individual or collective self-defence if an armed attack occurs.”17 Unfortunately, since the ratification of the U.N. Charter, this Article 51 exception has often swallowed the Article 2(4) prohibition.18
Therefore, outside of the U.N. Charter, the customary right of self-defense may also determine what coercive State actions justify a use of force response under jus ad bellum.19 In practice, each State action requires a fact-specific evaluation of factors to determine whether a coercive act amounts to a “use of force.”20 And, in circumstances when a use of force has not yet occurred, there is also “little evidence of any current agreed-upon standards” for explaining the concept of imminence.21 Therefore, for judge advocates advising commanders in dynamic environments, the most important jus ad bellum principle for evaluating coercive acts is necessity because “imminence has emerged as ‘the most problematic variable’ of anticipatory self-defense.”22
Necessity
The principle of necessity “dictates that a state may not use force unless it is left with no other viable options.”23 If an armed attack occurs, Article 51 of the U.N. Charter authorizes a State to exercise its inherent right of self-defense.24 Therefore, when evaluating the coercive acts of a State that do not yet amount to an armed attack, imminence plays an important role in determining the necessity of a response.25
Determining Imminence
The concept of imminence is a key component of evaluating the necessity
of a State’s response.26 In the 1837 Caroline case, then-U.S. Secretary of State Daniel Webster described the first commonly accepted jus ad bellum criteria27 to permit “certain forcible pre-attack responses” to an imminent threat.28 Secretary Webster’s letters argued that a State “need not sit idly by as the enemy prepares to attack; instead, a state may defend itself once attack is ‘imminent.’”29
However, the Caroline case did not create a precise evaluative framework for States to determine whether an imminent threat justified a use of force response for two reasons. First, each coercive State act must undergo a fact-specific evaluation of whether it amounts to an “imminent threat.”30 Second, each State individually interprets what type of threat it believes amounts to “instant, overwhelming, leaving no choice of means, and no moment for deliberation.”31 One of the challenges for any jus ad bellum analysis is that there is “no single adjudicator ex post.”32
Certainly, the vast majority of the international community considers specific types of coercive acts to justify a use of force response.33 However, such a justified use of force response often depends on whether the coercive act amounted to a “use of force” or an “armed attack.”34 So, the concepts of “use of force” and “armed attack” within jus ad bellum must be distinguished to understand the challenges posed in evaluating AWS actions.
Distinguishing “Use of Force” and “Armed Attack”
The inherent right of self-defense relies on an evaluation of whether a State action amounts to an “armed attack” or a “use of force.”35 Generally, an armed attack is the “physical or kinetic force applied by conventional weaponry.”36 However, international law does not clearly define an “armed attack.”37 Instead, within the exception of Article 51’s inherent right of self-defense, States have independently determined different thresholds for what each considers an “armed attack.”38 Attempts to distinguish a “use of force”39 from an “armed attack” require a subjective evaluation of the coercive acts.40 The more similar the use of force is to an armed attack, the more likely it is that a use-of-force response to that original coercive act will be lawful.41
Does jus ad bellum change when States employ AWS? Before attempting to address that question, it is important to review some basic concepts of autonomy and appreciate the spectrum of possible AWS.
Autonomous Weapon Systems
Autonomous systems may be categorized based on their level of autonomy.42 To facilitate conceptualizing the vignettes and framework proposed later in the article, the following subsections present the three most common sources for definitions of “autonomy.”
Debating the Definition of Autonomy
The military, academia, and nongovernmental organizations are shaping the ongoing debate over the definition of “autonomy” as applied to AWS.43 First, DoD Directive (DoDD) 3000.09, Autonomy in Weapon Systems, defines an autonomous weapon system as one that, “once activated, can select and engage targets without further intervention by a human operator.”44
The second, and arguably the most commonly referenced, description for autonomy relates the system’s level of automation to the decision-making cycle or the “observe, orient, decide, act (OODA) Loop.”45 When relating to the OODA Loop, semi-autonomous weapon systems are “human in the loop” systems because a human “makes the decision whether to engage a target.”46 Supervised autonomous weapon systems are “human on the loop” systems because humans are “supervising [the AWS’s] operation in real time.”47 Finally, fully autonomous weapon systems are “human out of the loop” systems because “once activated, fully autonomous weapons can search for, detect, decide to engage, and engage targets all on their own and the human cannot intervene.”48
Third, the International Committee of the Red Cross’s (ICRC) report of an expert meeting on AWS provided a much broader definition of “autonomy” than DoDD 3000.09.49 The ICRC defined “autonomy” as systems “which can act without external control and define their own actions albeit within the broad constraints or bounds of their programming and software.”50 The ICRC focused its analysis for AWS on critical functions of the weapon system, to include: “target acquisition, tracking, selection, and attack.”51
For the remainder of this article, the OODA Loop description of autonomous systems is used to relate the jus ad bellum framework to the technology.
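For readers who think in terms of data structures, the sketch below simply restates the three OODA Loop categories as an enumeration. It is purely illustrative; the class name, constant names, and descriptions are hypothetical and are not drawn from DoDD 3000.09 or any other doctrinal source.

```python
from enum import Enum


class AutonomyLevel(Enum):
    """Hypothetical labels for the OODA Loop categories described above."""
    HUMAN_IN_THE_LOOP = "semi-autonomous: a human decides whether to engage a target"
    HUMAN_ON_THE_LOOP = "supervised autonomous: a human supervises operation in real time"
    HUMAN_OUT_OF_THE_LOOP = "fully autonomous: once activated, the system can search for, detect, decide to engage, and engage targets on its own"
```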
Autonomy Impacting Jus ad Bellum
The concept of autonomy is fundamental to a jus ad bellum consequence-based evaluation because the factors evaluated include State involvement and military character.52 Fully autonomous (human out of the loop) AWS degrade the direct causal link between AWS coercive acts and the State decision-maker.53 For example, a fully autonomous AWS that crosses into the territory of another State based on flight navigation programmed by artificial intelligence may be less attributable to the State than a semi-autonomous (human in the loop) AWS that was directed by a State agent to cross into the territory of another State.54 The section Simultaneous Threat Evaluations of AWS, below, further analyzes these factors and the incorporation of autonomy.
Simultaneous Threat Evaluations of AWS
Autonomous weapon systems are capable of employing both traditional weapons and cutting-edge technologies.55
Therefore, a jus ad bellum threat analysis of AWS must undergo simultaneous evaluations for both traditional and cutting-edge threats. These simultaneous evaluations should combine the principles of both instrument-based evaluations and consequence-based evaluations.56
Traditional Instrument-Based Evaluations of AWS
The traditional determination of whether a State coercive act was an armed attack depends on an objective evaluation of the “type of coercive instrument . . . selected to attain the national objectives.”57 The prohibitive language of the U.N. Charter created an objective instrument-based evaluation for coercive acts rather than a more subjective consequence-based evaluation.58
The instrument-based evaluation “eases the evaluative process by simply
asking whether force has been used, rather than requiring a far more
difficult assessment of consequences that have resulted.”59
The traditional instrument-based evaluation attempts to create a binary
decision for the international community—either the State’s coercive act
used an instrument that constituted an armed attack or it did not.60
However, at the time of the U.N. Charter’s creation, it was impossible
for States to anticipate the technological revolution of autonomous
systems and artificial intelligence.61
For military practitioners, the DoD outlines traditional evaluation
criteria for AWS.62 Still, they will also have to contend with cutting-edge technologies employed by AWS.63 Many of these new technologies will be designed to carry out coercive acts that remain below the traditional thresholds required to justify a use of force response under the instrument-based evaluation of jus ad bellum.64 In these circumstances, States must shift from the objective instrument-based evaluation to a subjective consequence-based evaluation of AWS coercive acts.
Progressive Consequence-Based Evaluations of AWS
The consequence-based evaluation of whether a State’s coercive act constituted an armed attack depends on a subjective evaluation of the threats posed by new technologies that “focuses on both the level of harm inflicted and certain qualitative elements” of a coercive act.65 The Tallinn Manual 2.0’s consequence-based evaluation factors are: 1) severity,66 2) immediacy,67 3) directness,68 4) invasiveness,69 5) measurability of effects,70 6) military character,71 7) state involvement,72 and 8) presumptive legality.73 These factors address the spectrum of potential coercive acts.74
The Tallinn Manual 2.0’s consequence-based evaluation provides greater subjectivity and flexibility when analyzing coercive acts because it evaluates significantly more factors than the instrument-based evaluation.75 No individual factor is dispositive.76
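Because the consequence-based evaluation is a multi-factor judgment rather than a formula, it can help to see the eight Tallinn Manual 2.0 factors laid out as a structured record. The minimal Python sketch below does only that: it captures a qualitative rating for each factor and prints the ratings side by side. The class and method names are hypothetical, and the sketch deliberately computes no score, because no individual factor is dispositive and the weighing remains a legal judgment.

```python
from dataclasses import dataclass, fields
from enum import Enum


class Rating(Enum):
    LOW = "Low"
    MEDIUM = "Medium"
    HIGH = "High"


@dataclass
class ConsequenceAssessment:
    """Qualitative rating for each Tallinn Manual 2.0 factor.

    The factors are not a formula: no single factor is dispositive,
    and weighing them remains a legal judgment for the evaluating State.
    """
    severity: Rating
    immediacy: Rating
    directness: Rating
    invasiveness: Rating
    measurability: Rating
    military_character: Rating
    state_involvement: Rating
    presumptive_legality: Rating

    def summary(self) -> str:
        # List the ratings side by side to support, not replace,
        # the holistic judgment described in the text.
        return "; ".join(
            f"{f.name.replace('_', ' ')}: {getattr(self, f.name).value}"
            for f in fields(self)
        )
```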
These subjective consequence-based evaluation factors were developed in response to the emergence of computer network attacks (CNA) in the 1990s.77 Just as it addressed the challenges posed by CNA, this evaluative framework should also be used to assess emerging AWS technologies whose coercive acts fall below the traditional thresholds of armed attack. As with cyber operations,78 for AWS it “is not the instrument used that determines whether the use of force threshold has been crossed, but rather . . . the consequences of the operation and its surrounding circumstances.”79 To determine whether a CNA “fell within the more flexible consequence-based understanding of force . . ., the nature of the act’s reasonably foreseeable consequences would be assessed to determine whether they resemble those of armed coercion.”80
The application of multi-part tests (MPTs) in the jus ad bellum is not unique to CNA.81 Multi-part tests, like the consequence-based evaluative framework proposed above, attempt to “clarify vague or indefinite baseline texts.”82 Professor Ashley Deeks demonstrates in Multi-Part Tests in the Jus ad Bellum that MPTs are “the best worst option”83 where “States and scholars confront a highly contentious area of international law where the texts and customary rules offer only limited guidance to navigate recurring factual situations”84 and where “reaching consensus on formal amendments or supplements to the Charter would be extremely costly and very challenging.”85
However, Professor Deeks also acknowledges the common critiques of
MPTs.86 First and foremost, proposed MPTs “lack formal status in international law.”87 Second, MPTs are often criticized for the following reasons: 1) MPTs are too indeterminate to offer real guidance;88
2) application of MPTs may facilitate unequal application of the law to
similarly situated States;89 3) MPTs bind no particular actor other than, possibly, the States that propose them;90 and 4) MPTs are “often difficult to apply and can obscure as much as they reveal.”91 In addition to these general critiques of MPTs, a specific concern exists regarding the jus ad bellum concept of imminence: that “decoupling the right to self-defense from the trigger of a concrete armed attack or imminent threat thereof could open a Pandora’s box of forcible actions.”92
Therefore, applying both the instrument-based evaluations for traditional weapon systems and consequence-based evaluations for non-traditional weapons technologies should “structure and defend state uses of force in nontraditional contexts while preserving the relevance of the U.N. Charter.”93
Applying the Simultaneous Threat Evaluations
To conceptualize the application of these two evaluative frameworks, the following three vignettes 1) briefly describe a hypothetical scenario; 2) apply both the instrument-based and consequence-based evaluations to those hypothetical scenarios; and 3) discuss the impact of autonomy on those evaluations. This exercise allows the reader to explore the challenges in applying both the text of the U.N. Charter and evaluating the many factors of the consequence-based evaluative framework.
Vignette #1—Third-Party Actors
State Green and State Red are engaged in an international armed conflict (IAC) over a disputed international border.94
States Green and Red exchange cross-border artillery fire.95
State Yellow is sympathetic to State Red. But, State Yellow declared its
neutrality in the IAC between States Green and Red.96
State Red authorizes State Yellow to use State Yellow AWS inside of
State Red territory to deliver humanitarian aid to civilians affected by
the IAC.
While delivering humanitarian aid, the State Yellow AWS also
electronically jams the electromagnetic spectrum.97
The jamming degrades all electronic systems within a 5-kilometer radius of the State Yellow AWS.98 State Yellow declares in a press release that it is only jamming the electromagnetic spectrum to protect its AWS from attack during the humanitarian aid delivery operations. However, State Green determines that State Yellow AWS are only conducting these “humanitarian aid delivery operations” in areas less than 5 kilometers from the disputed international border. State Yellow’s AWS jamming adversely affects both States Green and Red: civilian and military communication equipment is temporarily disabled; internet and Bluetooth technologies are degraded; and commercial and military power grids are overloaded to the point where breakers are tripped and systems reset. However, there is no permanent damage from any of the State Yellow jamming. Over the last two days, State Green has also observed State Red military forces maneuvering along the disputed international border within State Red territory while State Yellow AWS were jamming within those areas.
Instrument-Based Evaluation of Vignette #1
If State Green applies the traditional instrument-based evaluation of coercive acts by State Yellow, then State Green will likely not be able to justify a use-of-force response.99
There is no indication that State Yellow conducted an armed attack
against State Green.100 And, under the customary international law principle of necessity, there does not appear to be a threat that is “instant, overwhelming, leaving no choice of means, and no moment for deliberation.”101
State Yellow AWS have not physically breached the territorial
sovereignty of State Green.102 Traditionally, interference with the electromagnetic spectrum does not fall in line with customary notions of imminent threats.103 Further, State Yellow publicly declared self-defense as the purpose of its electromagnetic jamming. At this stage of the IAC between States Green and Red, State Yellow’s AWS actions fly below the traditional threshold of coercive acts that an objective instrument-based evaluation would consider sufficient to justify a use-of-force response.104 Therefore, State Green should also evaluate State Yellow AWS actions under the more subjective consequence-based evaluation.
Consequence-Based Evaluation of Vignette #1
If State Green applies a consequence-based evaluation of the State Yellow AWS actions, then State Green should be justified in a use-of-force response to the electromagnetic jamming. To reach that conclusion, State Green should evaluate every factor to determine whether a use-of-force response would be justified under jus ad bellum.105
- Severity: Low—State Yellow AWS does not pose a physical threat to individuals or property. State Yellow’s jamming only temporarily disrupts the performance of State Green (and State Red) electronics and systems.106
- Immediacy: High—State Yellow AWS jamming immediately affects State Green electronics and systems in the vicinity of the disputed international border.107
- Directness: High—There is direct causation between State Yellow’s jamming and the adverse impact on State Green.108
- Invasiveness: Medium—State Yellow AWS jamming affects areas inside of State Green. However, merely breaching the territorial sovereignty of a targeted State does not “per se rise to the level of a use of force.”109
- Measurability: High—Like a battle-damage assessment after an armed attack, State Green should attempt to quantify the impact of State Yellow AWS jamming.110
- Military character: Low—Typically, a nexus between a coercive action and a military operation heightens the likelihood of characterization as a use of force.111 However, State Yellow is not a participant in the IAC between States Green and Red. And, State Yellow claims that the purpose of its jamming is solely for self-defense while delivering humanitarian aid.112
- State involvement: High—State Yellow publicly declared its involvement, and its actions are observable by the international community.
- Presumptive legality: Low—Most forms of coercion are presumptively lawful, absent a prohibition to the contrary.113 Generally, there are no prohibitions against electromagnetic jamming.114
Next, State Green must weigh the relative importance of each of the consequence-based factors.115 No formula determines the threshold of coercive acts that justify a use of force response.116 So, State Green assumes risk in reaching its conclusion.117
State Green should justify a use of force response against State Yellow
AWS by demonstrating that the factors of immediacy, directness,
invasiveness, measurability, and state involvement create consequences
that “resemble those of armed coercion.”118
In this hypothetical situation, the electromagnetic impact on State
Green would likely be sufficient to justify a proportionate119
use-of-force response to State Yellow AWS, especially if State Green is
able to tie State Yellow AWS actions to a State Red armed attack within
the IAC.
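To illustrate how an analyst might record this assessment, the short sketch below simply lists the ratings assigned in the bullet points above, using hypothetical names consistent with the earlier sketch. It flags which factors were rated Medium or High; it does not, and cannot, answer the legal question of whether the consequences “resemble those of armed coercion.”

```python
# Ratings assigned to Vignette #1 in the bullet points above (illustrative only;
# the factor names mirror the hypothetical sketch offered earlier).
vignette_1 = {
    "severity": "Low",
    "immediacy": "High",
    "directness": "High",
    "invasiveness": "Medium",
    "measurability": "High",
    "military character": "Low",
    "state involvement": "High",
    "presumptive legality": "Low",
}

# No formula converts these ratings into a legal conclusion; State Green must
# still weigh them holistically and ask whether the consequences resemble
# those of armed coercion.
elevated = [factor for factor, rating in vignette_1.items() if rating in ("Medium", "High")]
print("Factors rated Medium or High:", ", ".join(elevated))
```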
Impact of Autonomy on Vignette #1
Another layer of complexity for either evaluation arises from the level of autonomy of the State Yellow AWS. For a traditional weapons
system, a State could attribute the effects of the weapon system to the
State operator of that system.120 However, for AWS, the level of autonomy directly impacts the ability of one State to attribute the effects of coercive AWS actions. For example, if State Yellow AWS is a semi-autonomous system,121 then the consequence-based evaluation factors of “state involvement” and “military character” should increase to account for the direct State Yellow control of the AWS.122 But, if the “humanitarian aid delivery operations” were conducted by a semi-autonomous system owned and operated by an independent nongovernmental organization, then the factors of “state involvement” and “military character” should decrease to account for the lack of State Yellow control of the AWS. Unfortunately for State Green, this likely decrease in attribution does not change the actual electromagnetic impacts on State Green.
Taken a step further, if the State Yellow AWS were a fully autonomous system,123 then the factors of “state involvement” and “military character” should also decrease.124 Even though State Yellow deployed the AWS, State Yellow would be able to demonstrate that the employment decisions were determined by the fully autonomous system’s artificial intelligence (AI).125 For example, the AI in a fully autonomous system would determine the time, location, duration, and frequency of electromagnetic jamming in support of the humanitarian aid delivery operations.126 All of these employment decisions would undermine direct attribution back to State Yellow.127 Vignette #1 demonstrated the challenges associated with evaluating AWS coercive acts under the direct control and acknowledgment of State actors. Next, Vignette #2 moves further down the spectrum of potential AWS actions, which complicate direct attribution back to a State and do not clearly meet thresholds that justify a use of force response under jus ad bellum.
Vignette #2—Nonconsensual AWS
State Green and State Red are still in an IAC over their disputed international border. State Yellow is still neutral. However, in this hypothetical scenario, State Red remotely takes control of a State Yellow-owned business’s autonomous delivery system (ADS) through a complex cyber operation. The State Yellow-owned business did not consent to this State Red cyber operation.
The State Yellow-owned business operates an extensive network of ADS that move products across the international borders of States Green, Red, and Yellow. State Green intelligence reports indicate that, from the cyber operation, State Red can now access all of State Yellow’s ADS video feeds and global positioning system (GPS) coordinates. State Red denies any involvement in the cyber operation. Based on the travel history of the State Yellow ADS, State Red has likely gained comprehensive graphic and positional data on all major State Green cities, highways, and infrastructure. Now, despite the compromised ADS, the State Yellow-owned business refuses to voluntarily stop using its ADS in State Green.
Instrument-Based Evaluation of Vignette #2
Again, if State Green applies the traditional objective instrument-based evaluation of State Yellow ADS actions, then State Green will be unable to justify a use-of-force response. There is no armed attack and no indication of a use of force by State Yellow.128 State Yellow is also a victim of State Red’s cyber operation. The State Yellow ADS are operating within State Green territory through previous consent granted by State Green. Therefore, State Green should shift to evaluate the impacts of the cyber operation on the ADS by using the subjective consequence-based evaluation to determine if a use-of-force response against the State Yellow ADS is lawful.129
Consequence-Based Evaluation of Vignette #2
Even if State Green applies a consequence-based evaluation of State Red’s actions using State Yellow ADS, State Green will likely not be able to justify a use-of-force response against State Yellow ADS. Again, State Green should evaluate every factor before making its determination.130
- Severity: Low—There is no indication that State Yellow’s ADS pose any threat of physical injury or destruction of property.131
- Immediacy: Low—State Green cannot articulate the “immediate consequences” posed by this cyber operation.132
- Directness: Low—There is significant attenuation between the State Red cyber operation and any known consequences for State Green.133
- Invasiveness: Medium—The use of cyber operations to commandeer devices in the physical domain complicates the invasiveness analysis.134 However, this cyber operation may also constitute a physical harm that “crosses into the target State.”135
- Measurability: Low—State Green cannot determine the measurability of the effects.136
- Military character: Low—State Green must determine whether there is a nexus between a coercive action and a military operation.137 Also, State Yellow ADS’s status as a fellow victim of the same State Red cyber operation attenuates the connection to military operations that would justify an attack on State Yellow ADS, despite the fact that the State Yellow ADS are the platforms collecting the data.138
- State involvement: Low—State Green must demonstrate a nexus between the coercive action and state involvement. State Red denies responsibility for the cyber operation. State Yellow’s ADS are merely the mechanisms that State Red used to accomplish its cyber operation.
- Presumptive legality: Low—There is no prohibition against cyber espionage.139
State Green will likely be unable to justify a use-of-force response against State Yellow ADS because even the subjective consequence-based evaluation factors do not create consequences that “resemble those of armed coercion.”140 Though State Green would be unable to justify a use-of-force response, State Green could still pursue other legal and diplomatic options to prevent future State Yellow ADS overflights.141
Impact of Autonomy on Vignette #2
Once again, the level of autonomy of the State Yellow ADS will impact
the consequence-based evaluation. On one side of the spectrum, if State
Red’s cyber operation took control of the State Yellow semi-autonomous
systems, then State Green should be able to attribute the State Yellow
ADS actions directly to State Red. This type of active control over
State Yellow ADS would be akin to commandeering a traditional military
platform or hijacking an aircraft and therefore increase the
“directness” factor of the consequence-based evaluative framework.142
At the other end of the spectrum, if State Red’s cyber operation merely
received graphic and positional data from State Yellow’s fully
autonomous systems, then State Green would have difficulty attributing
any of the State Yellow ADS actions to State Red. This type of passive
receipt of information gained by State Red from a fully autonomous
system would be more akin to information received during peacetime cyber
espionage.143 In addition to the complexity of attribution, Vignette #2 attempted to demonstrate the uncertainty in threat evaluations of AWS when their end result mimics espionage instead of a use of force. Finally, Vignette #3 presents another jus ad bellum threat analysis, one complicated by the location of the coercive acts and the use of fully autonomous AWS.
Vignette #3—Unmanned Underwater Vehicles
For this hypothetical scenario, State Green and State Red are not in an IAC. However, both States Green and Red disagree about overlapping claims to their territorial waters and exclusive economic zones (EEZ)144 in the Purple Sea. Maritime commercial fishing fleets from both States Green and Red operate year-round in the Purple Sea. State Green is a capitalist democracy. State Green-flagged fishing vessels are owned and operated by private individuals or corporations. State Red is a communist dictatorship. State Red claims that its flagged fishing fleet is privatized; however, State Red maintains overall control of the licensing, operations, and employment of all State Red-flagged fishing vessels. State Red also uses a number of shell corporations to structure its control over all State Red-flagged fishing vessels and their support ships. State Green, and much of the international community, consider the State Red-flagged fishing vessels to be the proxy-navy145
for State Red.
In the last five years, a State Red fishing shell corporation invested
heavily in the research and development of unmanned underwater vehicles
(UUVs).146 State Red UUVs are fully autonomous systems deployable from both land and fishing fleet support ships. Once released into the Purple Sea, the State Red UUVs use AI to navigate underwater for periods of up to three months.147 State Red kept its AI programming top secret. The State Red fishing shell corporation declared that these UUVs were “only for the commercial purpose of tracking fish within Purple Sea.” However, these State Red UUVs are built to specifications similar to those of conventional military torpedoes, allowing them to travel underwater at up to 40 knots.148 Also, their reinforced hulls allow them to remain operational even after colliding with objects.
Unfortunately, in the last three weeks, tensions have continued to flare between States Green and Red over their disputed territorial waters and EEZs. The number of collisions between State Red UUVs and State Green-flagged fishing vessels has increased from two in the previous five years to nine in the past twenty-one days. All collisions occurred within the high seas149
of Purple Sea. So far, none of the State Green fishing vessels have been
seriously damaged.150
Yesterday, State Green recovered a State Red UUV inside its own EEZ. The State Red UUV appears to have malfunctioned and sunk to the shallow seafloor. Upon further examination, State Green discovered that this State Red UUV carried an explosive payload within its hull large enough to sink a State Green warship.
Instrument-Based Evaluation of Vignette #3
If State Green applies the traditional objective instrument-based evaluation of actions by State Red, then State Green will likely be unable to justify a use-of-force response. Though there appears to be a use of force by State Red UUVs (the collisions151 between State Red UUVs and State Green-flagged fishing vessels), these actions have not risen to the level of an “armed attack” by State Red.152 The State Red UUVs are operating in the high seas of the Purple Sea and within the disputed EEZ of State Green. Therefore, State Green should shift to evaluate the impacts of the State Red UUVs by using the more subjective consequence-based evaluation to determine if a use-of-force response against the State Red UUVs is lawful.
Consequence-Based Evaluation of Vignette #3
If State Green applies a consequence-based evaluation of the State Red UUV actions, then State Green will likely be able to justify a use-of-force response. State Green should evaluate every factor before making its determination.153
- Severity: High—State Red UUVs pose a threat to the physical safety of State Green mariners and fishing vessels.154 The intensity of the collisions and the potential for catastrophic loss of a fishing vessel weigh against the State Red UUV actions.155
- Immediacy: High—State Green can articulate the “immediate consequences” posed by these State Red UUV collisions.156
- Directness: Medium—There is some attenuation between the State Red UUV actions and known consequences for State Green.157 For example, State Red UUVs’ collisions impacted commercial fishing vessels of State Green, but not State Green warships.158 The mere presence of a State Red UUV inside the EEZ of State Green does not amount to an “armed attack.”159
- Invasiveness: Low—State Red UUVs are operating as instigators on the high seas and within a disputed EEZ, neither of which intrudes into State Green.160
- Measurability: Medium—State Green is able to measure the effects of the specific collisions.161 However, it is more difficult for State Green to measure the overall impact of State Red UUVs operating within its EEZ and territorial waters.162
- Military character: Medium—State Green must determine whether there is a nexus between the State Red UUV collisions and a military operation.163 Also, State Red’s deliberate use of shell corporations obfuscates direct attribution to the State Red military.164
- State involvement: Medium—State Green must demonstrate a nexus between the coercive action and state involvement. Like military character, State Red’s use of shell corporations intentionally degrades State Green’s ability to attribute State Red UUV actions to State Red.
- Presumptive legality: Medium—The U.N. Convention on the Law of the Sea (UNCLOS) regulates most actions demonstrated by this hypothetical scenario.165 However, like the U.N. Charter, UNCLOS never anticipated the employment of UUVs.166 So, there is still uncertainty as to whether UNCLOS applies to UUVs based on the definitions agreed to by the State parties.
Overall, State Green should be able to justify a proportionate167 use-of-force response against State Red UUVs because the subjective consequence-based evaluation factors created consequences that “resemble those of armed coercion.”168
Impact of Autonomy on Vignette #3
State Red UUV autonomy will significantly impact the consequence-based evaluation of State Red actions. State Green will be wading into complicated waters as it attempts to attribute the actions of State Red’s fully autonomous systems back to State Red.169
There is significant debate as to who is “accountable” for the actions
of fully autonomous systems—is it the politicians who decide to use
them; the commander who deploys them in the physical environment; or the
computer programmer who coded the AI software?170
Also, without access to State Red UUVs and the algorithms the AI used to
“learn”171 these actions, it will be nearly impossible for State Green to demonstrate the intent behind the collisions. Notably, the unique dimensions of AI were not considered in either test for attributing military or paramilitary activities to a State:172 the “effective control” test set forth by the International Court of Justice in Nicaragua173 or the “overall control” test set forth by the International Criminal Tribunal for the Former Yugoslavia in Tadić.174 For example, State Red could downplay any attribution by claiming that the State Red UUVs collided because of navigational errors in programming rather than intentionally colliding with State Green-flagged fishing vessels. Or, State Red could blame the AI software—positing that the State Red UUVs “learned”175 on their own to collide with State Green-flagged fishing vessels in an attempt to gain access to the schools of fish.
Once again, the consequence-based evaluative factors would allow State Green to consider these effects of autonomy on attribution in ways not previously conceived by the traditional instrument-based evaluative framework. Though the subjective consequence-based evaluative framework does not provide a simple answer, it at least offers a structured way to account for autonomy.
Conclusion
Combining traditional instrument-based use-of-force evaluations with progressive consequence-based use-of-force evaluations provides a flexible and comprehensive lens for States to evaluate jus ad bellum threats posed by AWS. In many ways, AWS are merely a new means to deliver both traditional weapons and cutting-edge technologies.176
However, their present development and likely future growth mean that States will take advantage of AWS’ unique capabilities in ways never envisioned by the drafters of the U.N. Charter.177
Likewise, customary international law and State practice will take time
to develop.178 Therefore, like the international community’s response to the emerging threat of computer network attacks, States, academia, and nongovernmental organizations must develop and adopt a new jus ad bellum framework for evaluating AWS actions.179
Under the current instrument-based evaluative framework, States will be unable to justify a use of force response to AWS actions that do not resemble the effects of an armed attack. Adopting the dual-lens view of both 1) the objective instrument-based evaluation and 2) the subjective consequence-based evaluation for AWS actions will provide States with greater adaptability and flexibility in lawfully countering AWS actions.
Though more subjective, the consequence-based evaluations ultimately allow States to relate the impacts of AWS to a non-exhaustive list of factors.180 This article posed three vignettes in an attempt to evaluate examples of possible AWS coercive acts that are below traditional armed attack thresholds. States should build off of the lessons learned from the emerging threat of cyber operations and develop an evaluative framework for AWS threats. The Tallinn Manual 2.0’s consequence-based evaluation will be more responsive to new technologies and more flexible at assessing whether AWS actions resemble armed coercive acts.181
Fortunately, a combination of these evaluations for traditional threats and future technologies creates a dual framework for States to apply to AWS. As more and more AWS fly, drive, crawl, swim, and hover into future conflict zones, this combination of both instrument-based and consequence-based evaluations will arm States with the ability to determine whether a use-of-force response to AWS is justifiable under jus ad bellum.182 TAL
Maj Collins is the Deputy Staff Judge Advocate for Special Operations Command Central.
Notes
1. Remarks at National Defense University, 2013 Daily Comp. Pres. Docs. 00361 (May 23, 2013) (commenting on the use of “lethal, targeted action against al Qaeda and its associated forces, including with remotely piloted aircraft commonly referred to as drones”). See also Heather M. Roff, Lethal Autonomous Weapons and Jus Ad Bellum Proportionality, 47 Case W. Res. J. Int’l L. 37, 37 (2015); Rajesh Uppai, Russia Deployed Family of Killer Robots, for Combat and Demining in Syria and for Counter Terrorism Operations, Int’l Def., Sec. & Tech. (June 26, 2019), https://idstch.com/military/army/russia-developing-family-of-killer-robots-conduct-war-games/ (quoting Dimitry Rogozin: “We have to conduct battles without any contact, so that our boys do not die, and for that it is necessary to use war robots.”).
2. This hypothetical situation illustrated in the introduction is fictional. Names, characters, places, events, and incidents are products of the author’s imagination or used in a fictitious manner. Any resemblance to actual persons or actual events is purely coincidental.
3. See Rebecca Crootof, Regulating New Weapons Technology: The Impact of Emerging Technologies on the Law of Armed
Conflict, in The Impact of Emerging Technologies on the Law of Armed
Conflict
9 (Eric Talbot Jensen & Ronald T.P. Alcala eds. 2019).
4. See generally Melissa K. Chan, China and the U.S. Are Fighting a Major Battle Over Killer Robots and
the Future of AI, Time (Sept. 13, 2019, 9:45 AM), https://time.com/5673240/china-killer-robots-weapons (noting that the world’s superpowers are moving forward with the development of autonomous weapon systems (AWS) while simultaneously undermining attempts by the international community to regulate future development). An evaluation of non-use of force responses to coercive acts, to include countermeasures, is outside the scope of this article.
5. See Crootof, supra note 3, at 9. See also Ashley Deeks et al., Machine Learning, Artificial Intelligence, and the Use of Force by
States, 10 J. Nat’l Sec. L. & Pol’y 1, 14 (2019) (“Another significant challenge—one faced by anyone creating an algorithm that makes recommendations driven by underlying bodies of law—is the difficulty of translating broad legal rules into very precise code . . . . Efforts to transform this into code would therefore require constant debate and the ability to continuously edit and change fundamental sections of the algorithm.”) (citation omitted). But see Christopher M. Ford, Autonomous Weapons and International Law, 69 S.C. L. Rev. 413, 414 (2017) (“While autonomy may give rise to circumstances in which the application of the law is rendered uncertain or difficult, the current normative legal framework is sufficient to regulate the new technology.”); Charles P. Trumbull IV, Autonomous Weapons: How Existing Law can Regulate Future
Weapons, 34 Emory Int’l L. Rev. 533, 538 (2020) (“The pace of technological advancement and its effects on the conduct of hostilities, however, is rapidly outpacing the more glacial evolution of [international humanitarian law].”) (citation omitted).
6. Acknowledging the fact that non-state actors may also develop and deploy AWS, further analysis of non-state actors is outside the scope of this article.
7. Nathalie Weizmann, Autonomous Weapon Systems Under International Law, Geneva Acad. Int’l Humanitarian L. & Hum. Rts., Academy Briefing No. 8, at 4-5 (2014) (commenting on the international debate regarding the number of threats and asymmetric advantages posed by the development of AWS).
8. Michael N. Schmitt, Computer Network Attack and the Use of Force in International Law:
Thoughts on a Normative Framework, 37 Colum. J. Transnat’l L. 885, 897 (1999).
9. Id. at 909.
10. Id. at 915.
11. U.S. Dep’t of Def., DoD Law of War Manual para. 1.11.1 (June 2015) (C2, Dec. 2016) [hereinafter Law of War Manual].
12. Chiara Giorgetti et al., International Litigation in Practice: The
Rules, Practice, and Jurisprudence of International Courts and
Tribunals 32
(2013).
13.
Laurie R. Blank & Gregory P. Noone, International Law and Armed
Conflict: Fundamental Principles and Contemporary Challenges in the
Law of War
15 (2013) (“Jus ad bellum is the Latin term for the law governing the resort to force.”).
14. Law of War Manual, supra note 11, para. 1.11.1. See also Roff, supra note 1, at 40 (describing the six principles of jus ad bellum as just cause, right intention, proper authority, last resort, the probability of success, and proportionality).
15. See U.N. Charter pmbl.
16. U.N. Charter art. 2, ¶ 4 (“All Members shall refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state, or in any other manner inconsistent with the Purposes of the United Nations.”).
17. U.N. Charter art. 51 (“Nothing in the present Charter shall impair the inherent right of individual or collective self-defence if an armed attack occurs against a Member of the United Nations, until the Security Council has taken measures necessary to maintain international peace and security.”).
18. Ashley Deeks, Multi-Part Tests in the Jus ad Bellum, 53 Hous. L. Rev. 1035, 1045 (2016) (“A core struggle in the jus ad bellum is between crafting a system that allows states to resort to force too readily, on the one hand, and creating a system that prohibits the use of force too comprehensively on the other.”). To complicate matters further, the “inherent right of self-defense” is not defined within the U.N. Charter. See Roff,
supra note 1, at 51 (“Often both sides to a conflict view their causes as just and both often invoke their rights of self-defense.”). See also Alan L. Schuller, Inimical Inceptions of Imminence—A New Approach to Anticipatory
Self-Defense Under the Law of Armed Conflict, 18
UCLA J. Int’l L. & Foreign Affs.
161, 166 (2014) (commenting that self-defense has been the most common justification for the use of force by States).
19. Schmitt, supra note 8, at 913. See also U.N. Charter ch. VII (the U.N. Security Council (UNSC) may authorize use of force; however, situations that receive a UNSC Resolution are outside the scope of this article).
20. Schmitt, supra note 8, at 908 (commenting that the concept of use of force is generally understood to mean armed force).
21. Schuller, supra note 18, at 168, 170 (describing the different concepts of “anticipatory,” “preventative,” and “pre-emptive” self-defense and commenting on the “several schools of thought regarding just how imminent a threat must be for a state to lawfully act first.”) (citation omitted).
22. Id. at 172. Though judge advocates should evaluate all principles of jus ad bellum, an in-depth analysis of the other principles is outside the scope of this article. See Geoffrey S. Corn, Self-defense Targeting: Blurring the Line Between the Jus ad Bellum and the Jus in Bello, 88 Int’l L. Stud. 57 (2014). See also Roff, supra note 1, at 40.
23. Schuller, supra note 18, at 167 (citing Michael N. Schmitt, Preemptive Strategies in International Law, 24 Mich. J. Int’l L. 513, 530-31 (2003) (“The principle of necessity requires that all reasonable alternatives to the use of force be exhausted . . . [and] no viable option to the use of force exists.”); U.S. Army Judge Advocate General’s Legal Ctr. & Sch., Operational Law Handbook 4 n.9 (2012) (“To comply with the necessity criterion, States must consider the exhaustion or ineffectiveness of peaceful means of resolution, the nature of coercion applied by the aggressor State, the objectives of each party, and the likelihood of effective community intervention. In other words, force should be viewed as a ‘last resort.’”)). See also Roff, supra note 1, at 40.
24. U.N. Charter art. 51. See also Roff, supra note 1, at 44 (describing where a State “loses its right not to be harmed by threatening an imminent violation of [another State’s] rights . . . [and one State] may inflict harm on [another State] to thwart an attack against it and to potentially restore [the State’s] rights”). See also Schuller, supra note 18, at 174-85 (describing recent examples in history where States justified their actions under Article 51). The concept of anticipatory self-defense in jus ad bellum is outside the scope of this article.
25. Schuller, supra note 18, at 168.
26. Id.
27. Letter from Daniel Webster, U.S. Secretary of State, to Lord Ashburton, British Plenipotentiary (Aug. 6, 1842), https://avalon.law.yale.edu/19th_century/br-1842d.asp (“It will be for that Government to show a necessity of self-defence, instant, overwhelming, leaving no choice of means, and no moment for
deliberation
. . . since the act justified by the necessity of self-defence, must be
limited by that necessity, and kept clearly within it.”) (emphasis
added) [hereinafter Webster].
28. Deeks, supra note 18, at 1052.
29. Schuller, supra note 18, at 169 (quoting Michael N. Schmitt, Cyber Operations and the Jus Ad Bellum Revisited, 56 Vill. L. Rev. 569, 590 (2011)) (citations omitted).
30. Id. at 170.
31. Id. See also Webster, supra note 27.
32. Deeks, supra note 18, at 1049-50 (“In the international context, where there is no single adjudicator ex post, those states and other actors that assess whether a particular set of facts meet a multi-factor test may possess different facts, may take different views of those facts, and may be unable to assess the facts they have objectively because of their strong political interests in the outcome.”).
33. See generally U.N. Charter art. 51 (for example, an armed attack against a State military force is a type of coercive act explicitly listed in the U.N. Charter). See also Schuller, supra note 18, at 167-68 (“In the narrow context of a discussion regarding the concept of imminence, one must focus on an armed attack that has not yet taken place, for once an attack has already occurred imminence ad bellum is irrelevant because the right to respond in self-defense is clear.”).
34. See Schmitt, supra note 8, at 904-08. See also Roff, supra note 1, at 44 (“In conventional war, we look to the three loci of imminent harm: the state, the human combatants, and the people. . . . In practice, however, there is not a clean or clear distinction between imminent harm to a state’s interests and the people’s because, on most accounts, imminent harm is always bootstrapped to human combatants and the civilian population (if the defending military fails).”).
35. See Schmitt, supra note 8, at 904-08. See also Schuller, supra note 18, at 174.
36. Schmitt, supra note 8, at 908.
37. Schuller, supra note 18, at 174.
38. Id. at 161.
39. Tallinn Manual 2.0 on the International Law Applicable to Cyber
Operations
330-37 (Michael N. Schmitt ed., 2d ed. 2017) [hereinafter
Tallinn Manual 2.0] (“Rule 69—Definition of use of force” provides an in-depth analysis for examining what acts constitute a use of force for a cyber operation. The group of experts acknowledge outright that “[t]he United Nations Charter offers no criteria by which to determine when an act amounts to a use of force.”).
40. Schmitt, supra note 8, at 909 (“determination of whether or not the standard [for use of force] has been breached depends on the type of the coercive instrument—diplomatic, economic, or military—selected to attain the national objectives in question.”).
41. Id. at 935 (“If it is not armed force, is the CNA nevertheless a use of force as contemplated in the U.N. Charter? It is if the nature of its consequences track those consequence commonalities which characterize armed force.”). See also Tallinn Manual 2.0, supra note 39, at 333-36 (“States are likely to consider and place great weight on the following factors, inter alia, when deciding whether to characterize any operation, including a cyber operation, as a use of force. It must be emphasized that [the factors articulated in Tallinn Manual 2.0] are merely factors that influence States making use of force assessments; they are not formal legal criteria.”). The consequence-based evaluative framework discussed in this article will be drawn directly from the Tallinn Manual 2.0 factors.
42. Paul Scharre, Army of None: Autonomous Weapons and the Future of War
27-28
(2018).
43. See generally Paul Scharre & Michael Horowitz, An Introduction to Autonomy in Weapon Systems
2 (Ctr. For a New Am. Sec., Working Paper, Feb. 2015), https://www.cnas.org/publications/reports/an-introduction-to-autonomy-in-weapon-systems. See also Weizmann, supra note 7, at 4-5.
44. U.S. Dep’t of Def., Dir. 3000.09, Autonomy in Weapon Systems 13-14 (21 Nov. 2012) (C1, 8 May 2017) (creates both a general definition for autonomy and further definitions of semi-autonomous weapon systems and human-supervised autonomous weapon systems).
45. Scharre, supra note 42, at 23 (describing Air Force strategist John Boyd’s framework for “the cognitive process pilots go through when engaging enemy aircraft” as the “OODA Loop,” which stands for observe, orient, decide, and act).
46. Scharre, supra note 42, at 44.
47. Id. at 45.
48. Id. at 47.
49. Int’l Comm. of the Red Cross, Expert Meeting on Autonomous Weapon Systems: Technical, Military,
Legal and
Humanitarian Aspects 62 (2014) (the International Committee of the Red Cross and other nongovernmental organizations are attempting to shape the development of international law and policies as they relate to the regulation of AWS).
50. Id. (commenting on the continuum of possible systems—from “remote controlled to automated and then to autonomous”).
51. Id. (“Therefore, for a discussion of autonomous weapon systems, it may be useful to focus on autonomy in critical functions rather than autonomy in the overall weapon system. Here the key factor will be the level of autonomy in functions required to select and attack targets (i.e. critical functions), namely the process of target acquisition, tracking, selection, and attack by a given weapon system.”).
52. Tallinn Manual 2.0, supra note 39, at 336.
53. See generally Scharre, supra note 42, at 46-47.
54. Alan L. Schuller, At the Crossroads of Control: The Intersection of Artificial
Intelligence in Autonomous Weapons Systems with International
Humanitarian Law, 8 Harv. Nat’l Sec. J. 379, 388-89 (2017). A full discussion regarding the attribution of artificial intelligence to the government leaders, military units, or computer programmers is outside the scope of this article.
55. See Schmitt, supra note 8, at 913 (describing how cutting-edge weapons technologies like cyber operations were not generally contemplated by the U.N. Charter: “There was no need to look beyond armed force because intermediate forms of coercion such as CNA were not generally contemplated.”).
56. Id. at 911-17.
57. Id. at 909.
58. Id. at 911 (“Undesirable consequences fall along a continuum, but how could the criteria for placement along it be clearly expressed? In terms of severity? Severity measured by what standard of calculation? Harm to whom or what?”).
59. Id.
60. Id. at 912 (“[T]he use of force standard serves as a logical break point in categorizing the asperity of particular coercive acts. Any imprecision in this prescriptive short-hand is more than outweighed by its clarity and ease of application.”).
61. Id. (Like CNA, AWS “challenges the prevailing paradigm, for its consequences cannot easily be placed in a particular area along the community values threat continuum. . . . Its effects freely range from mere inconvenience . . . to physical destruction . . . to death.”).
62. See U.S. Dep’t of Army, Techniques Pub. 3-01.15, Multi-Service Tactics, Techniques, and Procedures for Air and Missile Defense 68-69 (14 Mar. 2019).
63. See generally Norway: GPS Jamming During NATO Drills in 2018 a Big Concern, Associated Press (Feb. 11, 2019), https://apnews.com/eb300e709dfa4c6fa9d7d65a161d698b. See generally Bryan Ripple, Enemy Drone Operators May Soon Face the Power of THOR, Wright-Patterson A.F. Base (Sept. 24, 2019), https://www.wpafb.af.mil/News/Article-Display/Article/1969142/enemy-drone-operators-may-soon-face-the-power-of-thor/.
64. See generally TOC, Non-Lethal Weapons for Unmanned Aerial Vehicles (UAVs) and Robots Are Being Developed by the Russian State Corporation Rostec, Bulgarian Mil. (Jan. 15, 2019), https://bulgarianmilitary.com/2019/01/15/non-lethal-weapons-for-uavs-and-robots-are-being-developed-by-the-russian-state-corporation-rostec/.
65. Tallinn Manual 2.0, supra note 39, at 333. The Tallinn Manual 2.0 was created by the International Group of Experts at the invitation of the NATO Cooperative Cyber Defense Centre of Excellence to address cyber operations. In doing so, the International Group of Experts also created a comprehensive consequence-based evaluative framework for States to use when attempting to characterize an operation as a use of force.
66. Id. at 334.
67. Id.
68. Id.
69. Id. at 334-35.
70. Id. at 335-36.
71. Id. at 336.
72. Id.
73. Id. at 336-37. See infra app. A (a graphical depiction of the spectrum of coercive acts based on the author’s interpretation of the spectrum of coercion described in Michael N. Schmitt, Computer Network Attack and the Use of Force in International Law: Thoughts on a Normative Framework, 37 Colum. J. Transnat’l L. 885 (1999)). See also Nat’l Sec. L. Dep’t, The Judge Advocate Gen.’s Legal Ctr. & Sch., U.S. Army, Operational Law Handbook 219-20 (2020) [hereinafter Operational Law Handbook]. See also Schmitt, supra note 8, at 911-17.
74. Tallinn Manual 2.0, supra note 39, at 333-37.
75. Id. at 333-37. See also Operational Law Handbook, supra note 73, at 219-20.
76. Tallinn Manual 2.0, supra note 39, at 337.
77. Schmitt, supra note 8, at 885. See infra app. B (a graphical depiction of the proposed normative framework for CNA based on the author’s interpretation of the spectrum of coercion described in Michael N. Schmitt, Computer Network Attack and the Use of Force in International Law: Thoughts on a Normative Framework, 37 Colum. J. Transnat’l L. 885 (1999)).
78. Tallinn Manual 2.0, supra note 39, at 564 (defining “Cyber Operation” as “The employment of cyber capabilities to achieve objectives in or through cyberspace”).
79. Id. at 328.
80. Schmitt, supra note 8, at 916.
81. Deeks, supra note 18, at 1038-39 (“This goal of preserving the traditional jus ad bellum framework while ensuring the Charter retains contemporary relevance explains why [multi-part tests] proliferate in the use of force area. States and scholars have proposed MPTs to guide decision-making about when it is permissible to use force in anticipation of an armed attack; when a state may use force inside another state to rescue its nationals; when a given cyber activity rises to the level of a use of force; when a state may use force inside another state against an organized armed group of nonstate actors; and [. . .] when a state may use nonconsensual force inside another state to suppress ongoing genocide or crimes against humanity.”).
82. Id. at 1044-47 (commenting that multi-part tests in the jus ad bellum provide (1) law specification and (2) law development (avoiding formal U.N. Charter amendments); (3) reduce the likelihood of interstate conflict; and (4) reduce transaction costs for states).
83. Id. at 1051.
84. Id. at 1047-48.
85. Id. at 1048.
86. Id. at 1048-51.
87. Id. at 1039.
88. Id. at 1048.
89. Id. at 1049.
90. Id. at 1050.
91. Id.
92. Id. at 1052-53.
93. Id. at 1040.
94. See generally U.N. Charter art. 2(4), art. 51.
95. Assume for the purpose of this hypothetical scenario that States Green and Red are engaged in an IAC where both State military forces are operating within their own territories. These armed attacks were initiated within each State’s own territory and affected the opposing State. The simplification of armed attack to merely cross-border artillery strikes avoids further complexities posed by the occupation of sovereign territories, which is outside the scope of this hypothetical scenario.
96. Consideration of the possible loss of neutral status for State Yellow is outside the scope of this article.
97. Joint Chiefs of Staff, Joint Pub. 3-13.1, Electronic Warfare para. I.4.h. (8 Feb. 2012) [hereinafter JP 3-13.1]. See also John R. Hoehn, Cong. Rsch. Serv., IF11118, Defense Primer: Electronic Warfare (Sept. 18, 2019).
98. The jamming does not specifically target State Green; it equally degrades the electronics of both States Green and Red.
99. Because States Green and Red are already in an IAC, any use of force evaluation between those States should be done under the jus in bello framework. However, jus ad bellum continues to apply to actions between States Green and Yellow. This hypothetical scenario attempts to demonstrate the challenges of evaluating third-party State actions within IACs between two States.
100. See U.N. Charter art. 51. See also Schuller, supra note 18, at 168. If State Yellow AWS conducted an armed attack against State Green military forces, then State Green military forces would be able to respond under their inherent right of self-defense with options that include lethal force.
101. Webster, supra note 27.
102. State Yellow AWS are physically operating within the territory of State Red and with State Red’s consent.
103. Crootof, supra note 3, at 24 (commenting that interference with the electromagnetic spectrum does “not meet the armed attack threshold justifying recourse to physical use of force”).
104. Tallinn Manual 2.0, supra note 39, at 333.
105. See Tallinn Manual 2.0, supra note 39, at 337. When referring to these factors, the use of “high”/“medium”/“low” in the analysis is shorthand for the likelihood that the international community and customary international law would support a justification for a use of force response. For example, if all factors were “high,” then there is an extremely high likelihood that a State Green use of force response would be justifiable under Articles 2(4) and 51 of the U.N. Charter.
106. See id. at 334 (“[C]onsequences involving physical harm to individuals or property will in and of themselves qualify a cyber operation as a use of force. Those generating mere inconvenience or irritation will never do so.”). If the electronic jamming permanently destroyed State Green electronic systems or was the proximate cause of a fatality or destruction of property (e.g., if State Yellow jamming caused a State Green fatal aviation mishap or an explosion at an industrial plant), then the severity factor would increase dramatically.
107. See id. (“The sooner consequences manifest, the less opportunity States have to seek peaceful accommodation of a dispute or to otherwise forestall their harmful effects.”). State Yellow AWS jamming is manifesting along the disputed international border and within States Green and Red.
108. See generally JP 3-13.1, supra note 97, para. I.4.h. (describing the deliberate nature of electromagnetic jamming).
109. See Tallinn Manual 2.0, supra note 39, at 334-35 (commenting that espionage breaches territorial sovereignty, but the acts may not rise to the level of a use of force).
110. State Green should attempt to quantify the total impact, for example: “State Green was unable to use 10,000 civilian cell phones, 16 hospital life-support systems, and 2 civilian airport radar systems. The total degradation lasted 48 hours over a 3-day period.” Depending on how State Green quantifies the effects, the measurability factor may increase. However, if State Green is not sophisticated enough to quantify the effects or is unwilling to acknowledge the effects (e.g., State Green may not want to admit that State Red degraded all State Green counter-battery radars), then the measurability level may decrease.
111. See Tallinn Manual 2.0, supra note 39, at 336.
112. State Green should attempt to prove a military nexus. For example, if the jamming preceded an armed attack by State Red artillery, then State Green should be able to overcome State Yellow’s assertion that its jamming is only for self-defense, and the military character level may increase.
113. See Operational Law Handbook, supra note 73, at 220. See also Tallinn Manual 2.0, supra note 39, at 336-37 (commenting that outside of an armed attack, actions that are not expressly forbidden are permitted).
114. See generally David Bosco, When Can States Jam Radio Broadcasts?, Foreign Pol’y (Oct. 5, 2012, 4:16 PM), https://foreignpolicy.com/2012/10/05/when-can-states-jam-radio-broadcasts/.
115. Tallinn Manual 2.0, supra note 39, at 337.
116. Schmitt, supra note 8, at 914. See also Schuller, supra note 18, at 168.
117. Tallinn Manual 2.0, supra note 39, at 337. For example, State Green may be subject to condemnation from the international community and possible adverse action if it was determined that its response violated the prohibition against use of force in Article 2(4) of the U.N. Charter.
118. Schmitt, supra note 8, at 916 (explaining that any use of force response would still be subject to evaluation under the principle of proportionality).
119. Schuller, supra note 18, at 167 (The jus ad bellum principle of proportionality “limits any defensive action to that necessary to defeat an ongoing attack or deter or preempt a future attack”) (citation omitted). Proportionality is the next most significant principle to evaluate the lawfulness of a State Green response to AWS actions. However, the jus ad bellum proportionality analysis is outside the scope of this article.
120. Tallinn Manual 2.0, supra note 39, at 334 (The factor of “directness” assesses the “attenuation between the initial act and its consequences . . . directness examines the chain of causation”).
121. A State Yellow human decides where and when to conduct aid delivery/jamming missions.
122. Tallinn Manual 2.0, supra note 39, at 336.
123. State Yellow allows artificial intelligence to determine where and when to deliver aid based on independent algorithms.
124. Tallinn Manual 2.0, supra note 39, at 336.
125. Deeks, supra note 5, at 24 (“If a state makes a decision on the basis of machine learning, the state may not be able to identify or explain the reasoning underpinning that decision. As a result, that state may attempt to deflect scrutiny by pointing to the nature of algorithmic decision-making. A state in this position would effectively be arguing that, if it cannot foresee what an algorithm will do, it cannot be held responsible.”).
126. Scharre, supra note 42, at 46 (describing that fully autonomous systems can “search for, decide to engage, and engage targets on their own and no human can intervene.”). See also Kelley M. Sayler, Cong. Rsch. Serv., R45178, Artificial Intelligence and National Security 12-13 (2020) (explaining how the military will use AI applications “similar to those for commercial semiautonomous vehicles, which use AI technologies to perceive the environment, recognize obstacles, fuse sensor data, plan navigation, and even communicate with other vehicles”) (citation omitted).
127. Schuller, supra note 54, at 396 (in discussing the “decide” phase of autonomous weapon systems, practitioners must “consider whether the link between programming and lethal kinetic action might become so diluted that we cannot reasonably say a human decided to kill”). See also Crootof, supra note 3, at 24 (discussing the possibility that “autonomous weapon systems may malfunction and commit an action that appears to be a serious violation of international humanitarian law without anyone being able to be held accountable under existing law”). For example, a number of factors influence the ability to attribute AI actions to a State: whether the AI programming was created by the government or a civilian corporation; whether the State updated the AI programming in response to other AI actions; and what level of government or industry holds itself responsible (the civilian government leadership authorizing procurement of the AI, the military leadership deploying it on the battlefield, or the computer programmers who created the AI). However, an in-depth analysis of the ability to attribute AI decision-making to States is outside the scope of this article.
128. Like Vignette #1, this hypothetical scenario considers States Green and Red to be in an IAC (subject to jus in bello). States Green and Yellow are not engaged in an armed conflict (subject to jus ad bellum). This hypothetical scenario poses a unique challenge whereby a State Red cyber operation impacts the physical systems of a neutral third-party State in coercive acts short of an “armed attack.”
129. Because of the ongoing IAC between States Green and Red, any response by State Green to State Red cyber warriors would be subject to jus in bello.
130. Tallinn Manual 2.0, supra note 39, at 337.
131. Assume for this hypothetical scenario that State Green cannot tie State Red’s use of information gathered from the State Yellow ADS to a State Red armed attack against State Green. For example, there is no evidence that State Red is using imagery and GPS coordinates from the State Yellow ADS to direct an armed attack. However, if State Green were able to demonstrate that the State Red cyber operation resulted in the destruction of State Green property by State Yellow ADS, then the severity factor would increase based on the physical damage.
132. See Tallinn Manual 2.0, supra note 39, at 334.
133. See id.
134. See id. at 335 (“[T]hough highly invasive, cyber espionage does not per se rise to the level of a use of force.”).
135. See generally Operational Law Handbook, supra note 73, at 219-20.
136. Without access to the State Red computer servers and military plans, State Green will not be able to determine which information, if any, State Red used from the cyber operation to its advantage in the IAC.
137. See Tallinn Manual 2.0, supra note 39, at 336. If State Green can tie military actions to the cyber operation, then the military factor will be high. For example, if State Red is using the video feeds and GPS coordinates from State Yellow ADS to dynamically target State Green military objectives, then the temporal proximity between the State Yellow ADS’s observation of the effects of the indirect fires and State Red’s employment of those indirect fires would support characterizing the operation as an armed attack. Therefore, a use of force response to this type of armed attack would likely be justified. See also Schuller, supra note 18, at 168.
138. However, if State Green is able to demonstrate that State Red is using information from its cyber operation to dynamically target State Green in real time with conventional weapons (e.g., using the State Yellow ADS as observers for indirect fire), then the military character would increase.
139. See Operational Law Handbook, supra note 73, at 209-10.
140. Schmitt, supra note 8, at 916.
141. The non-use-of-force alternatives for State Green to prevent State Yellow ADS overflights are outside the scope of this article.
142. Tallinn Manual 2.0, supra note 39, at 334 (“[D]irectness examines the chain of causation . . . Cyber operations in which cause and effect are clearly linked are more likely to be characterized as uses of force than those which are highly attenuated.”).
143. Id. at 168-74 (providing a full description of “Rule 32—Peacetime cyber espionage”).
144. United Nations Convention on the Law of the Sea, pt. V, Dec. 10, 1982, 1833 U.N.T.S. 397 [hereinafter UNCLOS].
145. See generally Ryan Goodman, Legal Limits on Military Assistance to Proxy Forces: Pathways for State and Official Responsibilities, Just Sec. (May 14, 2018), https://www.justsecurity.org/56272/legal-limits-military-assistance-proxy-forces-pathways-state-official-responsibility/.
146. Michael N. Schmitt & David S. Goddard, International Law and the Military Use of Unmanned Maritime Systems, 98 Int’l Rev. Red Cross 567, 571 (2016) (providing a comprehensive evaluation of the unique challenges that unmanned maritime systems, which include UUVs, pose to existing law). An in-depth evaluation of UUVs is outside the scope of this article.
147. See Sayler, supra note 126, at 14 (discussing the U.S. Navy’s “Sea Hunter” program which is designed to “provide the Navy with the ability to autonomously navigate the open seas, swap out modular payloads, and coordinate missions with other unmanned vessels—all while providing continuous submarine-hunting coverage for months at a time”) (citation omitted).
148. MK 48 Mod 7 Common Broadband Advanced Sonar System (CBASS) Heavyweight Torpedo, Lockheed Martin, https://www.lockheedmartin.com/en-us/products/mk-48-mod-7-common-broadband-advanced-sonar-system-cbass-heavyweight-torpedo.html (last visited Nov. 17, 2020) (describing the physical characteristics of the Mk 48 torpedo with a speed of 28 knots).
149. UNCLOS, supra note 144, pt. VII.
150. Within the same timeframe, there have been no documented collisions between State Red UUVs and State Red-flagged fishing vessels.
151. See generally Convention on the International Regulations for Preventing Collisions at Sea, pt. A.1.(a), Nov. 20, 1972, 1050 U.N.T.S. 16 [hereinafter COLREGS] (this hypothetical assumes that both States Green and Red are signatories to the COLREGS, and therefore these collisions will be subject to the COLREGS because “[t]hese Rules shall apply to all vessels upon the high seas and in all waters connected therewith navigable by seagoing vessels”). But see Schmitt & Goddard, supra note 146, at 577 (noting that unmanned maritime systems may not be subject to UNCLOS or COLREGS).
152. Schmitt, supra note 8, at 908.
153. Tallinn Manual 2.0, supra note 39, at 337.
154. See COLREGS, supra note 151, at 22-25 (for example, State Green should emphasize that State Red UUVs are in violation of the COLREGS regarding “Rule 2—Responsibility” of due regard, “Rule 6—Safe Speed,” “Rule 7—Risk of Collision,” and “Rule 8—Action to Avoid Collision”).
155. See Tallinn Manual 2.0, supra note 39, at 334. The scope, duration, and intensity of the consequences of State actions will most significantly impact the severity evaluation.
156. See id. at 334.
157. See id.
158. See UNCLOS, supra note 144, art. 29 (“‘[W]arship’ means a ship belonging to the armed forces of a State bearing the external marks distinguishing such ships of its nationality, under the command of an officer duly commissioned by the government of the State and whose name appears in the appropriate service list or its equivalent, and manned by a crew which is under regular armed forces discipline.”).
159. See id. art. 38, art. 45 (the rights of “Transit Passage” and “Innocent Passage” may allow authorized vessels to travel through the territorial waters of other States, subject to specific limitations).
160. See Tallinn Manual 2.0, supra note 39, at 335.
161. State Green should be able to demonstrate the damage from the collisions and may be able to record a collision with State Red UUVs.
162. Without sufficient counter-UUV operations, it will be difficult for State Green to demonstrate the overall number of State Red UUVs deployed within its EEZ and territorial waters. Likewise, without evaluating each individual State Red UUV, it will be impossible for State Green to determine how many State Red UUVs are carrying explosive payloads and whether their AI programming is for peaceful or nefarious purposes.
163. See Tallinn Manual 2.0, supra note 39, at 336 (commenting that “the use of force has traditionally been understood to imply force employed by the military or other armed forces”). If State Green can tie the actions of State Red UUVs to the State Red military, then the military factor will increase.
164. Therefore, State Green should attempt to clarify the relationship between State Red and its fishing shell corporations.
165. See Tallinn Manual 2.0, supra note 39, at 335.
166. Schmitt & Goddard, supra note 146, at 577.
167. Schuller, supra note 18, at 167.
168. Schmitt, supra note 8, at 916.
169. Deeks, supra note 5, at 24. See also Schuller, supra note 54, at 396.
170. See generally Losing Humanity: The Case against Killer Robots, Hum. Rts. Watch (Nov. 19, 2012), https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots (“If the killing were done by a fully autonomous weapon, however, the question would become: whom to hold responsible. Options include the military commander, the programmer, the manufacturer, and even the robot itself, but none of these options is satisfactory.”). See also Crootof, supra note 3, at 20.
171. Schuller, supra note 54, at 404-08 (“Agents learn by being provided data sets from which an onboard algorithm can be programmed to attain rational goals.”). A further explanation of how AI “learns” is outside the scope of this article.
172. Goodman, supra note 145.
173. Military and Paramilitary Activities in and Against Nicaragua (Nicar. v. U.S.), Judgment, 1986 I.C.J. 14, ¶¶ 115-16 (June 27) (The International Court of Justice evaluated the United States’ participation in Nicaragua using what has been referred to as the “effective control” test, which determined that “the financing, organizing, training, supplying and equipping of the contras, the selection of its military or paramilitary targets, and the planning of the whole of its operation is still insufficient in itself, on the basis of the evidence in the possession of the Court, for the purpose of attributing to the United States the acts committed by the contras in the course of their military or paramilitary operations in Nicaragua.”).
174. Prosecutor v. Tadić, Case No. IT-94-1-A, Judgment, ¶ 131 (July 15, 1999), https://www.icty.org/x/cases/tadic/acjug/en/tad-aj990715e.pdf (“In order to attribute the acts of a military or paramilitary group to a State, it must be proved that the State wields overall control over the group, not only by equipping and financing the group, but also by coordinating or helping in the general planning of its military activity.”).
175. Schuller, supra note 54, at 404-08.
176. Ford, supra note 5, at 453 (“The mechanism of control can be exercised through physical or technological means. Historically, weapons were controlled through physical means. . . . Control can be manifest across either or both vectors.”) (citations omitted).
177. Schmitt, supra note 8, at 897 (commenting on the use of CNA, that “a lesser-advantaged state hoping to seriously harm a dominant adversary must inevitably compete asymmetrically”).
178. Trumbull, supra note 5.
179. Tallinn Manual 2.0, supra note 39, at 333-36.
180. Id. at 337.
181. Tallinn Manual 2.0, supra note 39, at 330-37. See also Schmitt, supra note 8, at 916.
182. Major Micah Smith, Cyber Operations I, slide 43 (Nov. 5, 2019) (unpublished PowerPoint presentation) (on file with author).