War torts: accountability for autonomous weapons.

Author: Crootof, Rebecca
Position:II. The Accountability Gap B. Individual Criminal Liability through Conclusion, with footnotes, p. 1375-1402
  B. No Individual Criminal Liability

    Autonomous weapon systems will inevitably commit a serious violation of international humanitarian law without any human being acting intentionally or recklessly. Absent such willful human action, no one can--or should--be held criminally liable.

    1. The Willful Action Requirement

      Under international law and most domestic legal regimes, war crimes must be committed "willfully." (155) Depending on the type of violation, a prosecutor must demonstrate that the accused acted with the intent to commit the violation or acted recklessly. (156) In its Commentary on the Additional Protocols, the ICRC states that acting willfully includes acting with "wrongful intent" or "recklessness," which it describes as "the attitude of an agent who, without being certain of a particular result, accepts the possibility of it happening." (157) The ICRC distinguishes this from "ordinary negligence or lack of foresight," which occurs "when a man acts without having his mind on the act or its consequences (although failing to take necessary precautions, particularly failing to seek precise information, constitutes culpable negligence punishable at least by disciplinary sanctions)." (158)

      Some treaties specify the required mental element for particular war crimes. Article 130 of the Third Geneva Convention of 1949 includes in its list of grave breaches the "wilful killing [of prisoners of war], torture or inhuman treatment, including biological experiments" and "wilfully depriving a prisoner of war of the rights of fair and regular trial prescribed in [the] Convention." (159) Article 85(3) of the First Additional Protocol of 1977 similarly criminalizes certain actions, such as launching indiscriminate attacks against civilians, provided that they are committed willfully. (160) Where the requisite mental element is not codified, international courts and tribunals have often imputed a mental element based on the nature of the violation. In such circumstances, individuals have been held criminally liable only if they acted intentionally or recklessly. (161)

      Article 30 of the Rome Statute extends the willful standard to all serious violations of international humanitarian law: "Unless otherwise provided, a person shall be criminally responsible and liable for punishment for a crime within the jurisdiction of the Court only if the material elements are committed with intent and knowledge." (162) Intent to commit an action requires evidence that the accused "means to engage in the conduct"; (163) intent to produce a consequence requires evidence that the accused "means to cause that consequence or is aware that it will occur in the ordinary course of events." (164) "Knowledge," meanwhile, entails "awareness that a circumstance exists or a consequence will occur in the ordinary course of events." (165)

    2. No Direct Individual Liability

      Under international criminal law, individuals are responsible for war crimes they commit or are directly involved in committing, which might include planning or ordering the criminal act. Given the willfulness requirement, no one can currently be held directly liable for the independent and sometimes unpredictable actions of an autonomous weapon system.

      Certainly, an individual who intentionally programmed an autonomous weapon system to commit a serious violation of international humanitarian law could be prosecuted for a war crime, as could one who recklessly deployed an autonomous weapon system incapable of discriminating between lawful and unlawful targets in an urban area. (166) A commander who ordered that an autonomous weapon system be used inappropriately would also be directly liable for its actions, and a commander who became aware that an autonomous weapon system had been used or was about to be used to commit a war crime but who took no action to prevent or punish the violation would be indirectly criminally liable. (167) Those are the easy cases.

      This Article focuses instead on the hard case: whether anyone might be held accountable in the more complicated situation where no individual acts intentionally or recklessly, but an autonomous weapon system nonetheless takes action that constitutes a serious violation of international humanitarian law.

      At present, there is little sense in attempting to hold autonomous weapon systems themselves liable. Artificial intelligence has not advanced to a point where a robotic system could be said to act intentionally or recklessly. If a violation of international humanitarian law is not a war crime absent some willful action, autonomous weapon systems are currently incapable of committing war crimes. (168) Additionally, traditional justifications for individual liability in criminal law--deterrence, retribution, restoration, incapacitation, and rehabilitation--do not map well from human beings to robots. (169)

      Some analogize autonomous weapon systems to more conventional weapons, others to human soldiers. Either way, if no person acts willfully, no person can be held directly criminally liable. If an autonomous weapon system is merely another weapon in a state's arsenal, its deployer will be liable only if she intended or foresaw the reasonable likelihood of civilian harm and nonetheless used the weapon system. If instead it is analogized to a soldier going rogue, the deployer could be held directly liable only for actions that resulted in serious violations if she ordered or otherwise directly contributed to the execution of that unlawful action. Similarly, regardless of the analogy, the commanding officer and the weapon system's programmers, designers, or manufacturers could be held directly responsible only to the extent they willfully contributed to the crime's commission. (170) Assuming no one intended the violation or acted recklessly, no one can be held directly liable. (171)

    3. No Indirect Individual Liability

      In certain circumstances, military commanders and civilian superiors may be held indirectly criminally liable for a subordinate's crime. The customary doctrine of "superior responsibility" or "command responsibility" has been codified differently in different international agreements, but it is generally understood that a superior may be liable if she exercises effective control over a subordinate, knows of or has reason to know of the subordinate's actual or intended criminal acts, and fails to take necessary and reasonable measures to prevent or punish them. (172)

      This doctrine grew from the desire to address a particular kind of guilt: the failure to act to prevent a war crime or the failure to deter others from acting similarly by punishing those who do commit war crimes. Given this purpose, the main elements of this indirect form of liability--a subordinate who commits a criminal and chargeable offense, effective control, actual or constructive knowledge, and failure to act--are sensible. If there is no underlying crime, there is nothing to prevent. If a superior does not exercise effective control over another or does not know or have reason to know of that person's potential unlawful conduct, she could not have prevented that individual from acting. Finally, if the superior could have averted or discouraged unlawful actions, either by preventing or punishing the subordinate, and chooses not to, she implicitly condones and thus perhaps even surreptitiously encourages others to act similarly.

      Because the doctrine of indirect liability is premised on an individual's omissions or failure to fulfill a duty, it has an unusual mental element, in that a showing of something less than intentional or reckless action may be sufficient to establish guilt. (173) In one extreme situation, a commander was held strictly liable for his subordinates' actions, (174) but this approach has since been rejected. (175) Today, tribunals tend to apply something akin to a gross or culpable negligence standard when evaluating if superiors are indirectly liable for their subordinates' crimes. (176)

      This doctrine was never meant to create a fully independent source of liability; it rests on the assumption that there can only be indirect liability for failure to prevent or punish a criminal action for which someone else is directly liable. As a result, the elements currently required for indirect liability do not map well onto a situation where no human being acts intentionally or recklessly.

      First, superiors are responsible for a failure to prevent or punish only those actions that constitute chargeable criminal offenses (regardless of whether the subordinate is charged). (177) As autonomous weapon systems do not act willfully, they cannot be charged with a war crime. (178) They are incapable of committing a chargeable offense.

      Second, it is not clear what would constitute "effective control" over autonomous weapon systems. When a commander gives a subordinate an order, the commander remains responsible for taking necessary precautions against that subordinate committing an unlawful act; he oversees the subordinate and can punish any violation the subordinate commits. This de facto control is necessary for indirect liability: de jure control alone is insufficient if the commander cannot prevent and punish a subordinate's criminal acts. (179) But it is impossible to punish an autonomous weapon system and difficult to prevent its unforeseeable actions. Even if a commander is monitoring the system in real time (which defeats an aim of developing weapons autonomy in the first place), she will be unable to call off an unlawful attack in situations where the system employs faster-than-human reaction times in response to surprising environmental conditions. Accordingly, some have concluded that because commanders could never exercise effective control over autonomous weapon systems, their usage creates a legal loophole, allowing commanders to authorize uses of force without having to take responsibility for them. (180)

      Third, the...