AWS and Individual Responsibility
Throughout much of the history of organizational responsibility, it was the individuals fighting or guarding the front lines--but almost never their superiors--who were held accountable for their actions, even when their participation was only part of a larger system. This was often true for military operations as well as for police action. A recurring problem in these instances was that the direct participants in criminal acts often held positions with relatively little power, while those who designed the system were regularly able to evade criminal responsibility. This changed somewhat with the Nuremberg and Tokyo tribunals after World War II, when a number of high-level officials were prosecuted. (125) Yet even in modern times, this phenomenon persists. There are many reasons why holding individuals responsible for crimes committed in the battle space is important. While a foundational idea of armed conflict is that--unlike in times of peace--wounding or killing other individuals is permissible, egregious behavior is nevertheless barred on the grounds of reciprocity, deterrence, and morality. (126) Some commentators have argued that, because of the difficulty of justifiably holding an individual criminally responsible, it would be unethical to use AWS in warfare. (127)
The introduction of AWS creates at least two paradoxes. First, those who plan a military operation are further removed from actual combat and have less influence than they previously had. (128) Additionally, one must consider the code upon which AWS base their decisions. It is unclear how this challenge will play out in future combat operations in which it can be presumed that breaches of IHL will continue to take place. (129) A second paradox lies in the inherent tension between increased levels of autonomy--one of the hallmarks of AWS that distinguishes them from remotely operated or automated systems--on the one hand, and the difficulty of assigning responsibility on the other. Further distancing human combatants from the battle space--not only physically, but also psychologically and temporally--only exacerbates the problem. (130)
While military planners insist that a human will remain in the loop, (131) it is apparent that the current mode of remotely operating vehicles will be replaced by less direct oversight mechanisms. Several models project that a team of operators will no longer command individual combat vehicles but will instead be responsible for a much larger force. These plans are driven by projected budget constraints and by their technical feasibility. (132)
However, a problem arises where an operator does not have time to directly oversee a particular action that an AWS has determined to take. In such instances, the question of criminal responsibility is especially acute. Similarly, an issue arises where the actions of AWS lead to the commission of what would otherwise be considered a war crime. This is not to say that AWS will "go rogue" in the sense of contravening specific directions. (133) If a weapon system with autonomous capability were to contravene specific directions, its use would be illegal under the rules of IHL. Imagine, for example, an autonomous weapon system firing at a target despite its civilian nature, or in a situation in which soldiers have been wounded to such an extent that they are no longer capable of fighting. The system may have been programmed to act in such situations for a variety of reasons: because the cost of watching over the soldiers was too high compared to the utility of the system in other parts of the battle space, or to instill fear in onlookers. (134) Moreover, it is possible that AWS will malfunction, as any weapon or weapon system can. Likewise, AWS are open to tampering and other interference. The focus here, however, is on those complex situations where the determinations of AWS are made on the basis of the underlying code and result in unforeseen consequences. (135) The Report of the Special Rapporteur on extrajudicial, summary, or arbitrary executions correctly stated that "[a]rmed conflict and IHL often require human judgement, common sense, appreciation of the larger picture, understanding of the intentions behind people's actions, and understanding of values and anticipation of the direction in which events are unfolding." (136)
As mentioned earlier, there are marked differences between AWS on the one hand, and remotely operated and automated systems on the other. In the latter, human input remains a crucial element, which preserves the possibility of assigning individual responsibility. AWS, by contrast, operate in an autonomous manner: they are built on self-selecting and self-determining systems, whereas the "premise underpinning automation is that the operation of the relevant device is capable of being accurately predicted based on the programming and commands inputted." (137)
One of the most important challenges posed by the introduction of AWS into the modern battle space is how to establish individual responsibility. Most legal systems require a showing of intent, (138) and some add a requirement of showing individual guilt. With that in mind, a range of actors should be assessed to determine whether command responsibility applies. Importantly, the "composite nature of [lethal autonomous robots] technology and the many levels likely to be involved in decisions about deployment" could "result in a potential accountability gap or vacuum." (139) In order to avoid what may also be called a system of organized irresponsibility, it will be necessary to determine--before or concomitant with further development, and certainly before any potential deployment--who can be held responsible under what circumstances. A DoD directive attempts to avoid organized irresponsibility by requiring that "[p]ersons who authorize the use of, direct the use of, or operate autonomous and semi-autonomous weapon systems must do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement." (140) While this approach is laudable, it does not clearly address a fundamental aspect of fully autonomous systems--namely, that a system's course of action is not necessarily completely predictable to the operator. If a system is designed in a manner that is fully predictable, then it is arguably not autonomous as the term is properly understood and defined by the DoD directive itself. (141)
Setting aside the very reasonable objection against holding nonhumans responsible, one could try to hold the program or the AWS itself responsible. At least some commentators foresee this "distant future once robots become more sophisticated and intelligent." (142) In this context, AWS may be the cause of harm, (143) yet it is questionable to attribute blame to an entity that does not possess moral agency. (144) For most legal systems, criminal culpability requires some form of moral agency, which does not exist in the case of AWS. (145) Additionally, a traditional deterrence rationale does not apply to AWS, as they can neither be punished nor possess any form of moral agency. (146) As one author puts it, attempting to impute moral agency to non-humans "offends not only the notion of the rule of law, but also the more visceral human desire to find an individual accountable." (147)
There is also the matter of how to hold AWS accountable. Even if AWS possessed intellectual abilities--beyond using algorithms to act in a discretionary manner--it is questionable whether a machine will ever be able to "suffer" from any form of punishment, whatever form that may take. One alternative may be to shut off the individual autonomous weapon system. This does not solve the problem, however, as whatever caused the individual system to malfunction would be present in all AWS based on the same code. And the high cost of shutting down an entire fleet of AWS based on the same code makes that remedy improbable. These considerations illustrate that holding AWS responsible is a theoretical possibility, but not a useful or feasible option.
Turning to more realistic attempts to designate responsibility, it may be possible to hold accountable the scientist or programmer who developed the software upon which the AWS relied. After all, software is the ultimate foundation of the AWS's determinations. Absent a programmer acting with mens rea, the programmer's liability would have to rest on negligence. However, holding a programmer responsible for negligence may be a contentious premise given a core characteristic of autonomy: if AWS are supposed to act according to their code and in a truly autonomous fashion, they must be able to make discretionary decisions. It may not be possible to predict the behavior of the AWS software in all its manifestations given the changing nature of the battle space. (148) Any potential AWS architecture will be complex in nature. The AWS code may have different origins and react differently than expected in concert, or may act differently depending on the sensor from which it receives data. Given this complexity, it may be difficult--although not impossible--to attribute responsibility to a programmer or a number of programmers. (149) Responsibility for negligence could be established only where the system is not designed to learn independently from past behavior, or where designers acted negligently in supervising the development of AWS software with respect to discretionary decision making. (150)
Given that AWS are military tools, a natural starting point for responsibility could be the military officers who set parameters for a given engagement. (151) It is important to distinguish between direct responsibility, which arises from acts or omissions supporting the commission of IHL violations, and command responsibility, which involves the failure of...
The dehumanization of international humanitarian law: legal, ethical, and political implications of autonomous weapon systems.
Position: III. Legal Challenges to Autonomous Weapon Systems, D. AWS and Individual Responsibility through VI. Conclusion, with footnotes, pp. 1399-1424
COPYRIGHT TV Trade Media, Inc.
COPYRIGHT GALE, Cengage Learning. All rights reserved.