I. INTRODUCTION
II. ARTIFICIAL INTELLIGENCE
   A. Intelligent Machines
   B. Rational Agents
   C. Assessing AI Agents
III. INTERNATIONAL LAW AND CYBER OPERATIONS
   A. Cyberspace
   B. International Law
   C. Applying International Law to Cyber Operations
   D. Attribution and Ambiguity
IV. ARTIFICIAL INTELLIGENCE IN CYBER OPERATIONS
   A. Unique Considerations for AI Softbots
      1. Untethered
      2. Task Variance
      3. Typically Non-Violent
      4. Machine Learning
      5. Cyberspace
   B. Weapons Law
   C. Targeting Law
   D. Martens Clause
V. CONCLUSION

I. INTRODUCTION
Quietly perched in the forested areas of the demilitarized zone between North and South Korea sits a steel killing machine, capable of dispensing automated, lethal force when necessary. This is the SGR-1, a sentry robot. (1) The SGR-1, which looks like a security camera mounted on top of an automatic rifle, can detect North Korean soldiers and, in its automatic mode, engage targets with lethal force without a human operator. (2) The SGR-1 is just one of several examples of what are known as lethal autonomous weapon systems (AWS). Concerns over how lethal AWS, or "killer robots," would be used in armed conflicts have spurred an extensive debate about how the autonomy of a weapon system affects its legality under international law. (3)
Subsumed in the debate surrounding lethal AWS is the issue of nation-states using artificial intelligence (AI) in and outside of state conflicts. The mere mention of AI seems to conjure images of "slaughterbots" run amok in a dystopian future. (4) For the purpose of this discussion, a better starting point is Google Assistant. (5) Google Assistant is a software application that takes advantage of AI to perform its tasks more intelligently than its competitors. (6) It is a software application or "app" that can be downloaded to your phone. Military AI programs could, like Google Assistant, be used to accomplish tasks without requiring advanced, weaponized architecture like the SGR-1. Instead, the AI software agent or "softbot" could exist entirely in an artificial environment defined by the physical architecture underlying the relevant cyberspace. (7) This paper will focus on this aspect of AI agents--specifically the legality of using AI to automate and accomplish tasks associated with hostile state cyber activities.
Google Assistant is a software application that uses AI to accomplish tasks, albeit benign ones, in cyberspace. But what is cyberspace? The answer to that question is neither intuitive nor consistent. Cyberspace has been described as a "fifth domain" of warfare, alongside the "natural" domains of air, land, maritime, and space. However, this definition of cyberspace is far from uniform. (8) This paper will examine different definitions of cyberspace and offer some foundational principles that help distinguish cyberspace from the other, natural domains. From these principles we can better conceptualize how AI could be used to automate certain state cyber operations in compliance with international law.
Assessing the legal issues associated with the use of AI in hostile state cyber activities requires an appreciation of what AI is and what it is not. In writing about a rapidly developing technology, the discussion is inherently limited to what exists at the time of writing and the foreseeable future. Thus, this paper focuses on legal issues dealing with applications using task-specific or "narrow" AI. (9) General AI, commonly thought of as AI "with the scale and fluidity of a human brain," is not addressed, as it is uncertain if or when this capability will be achieved. (10) However, even software applications using "narrow AI" (referred to as AI-enhanced software agents or "softbots") have already demonstrated the capacity to automatically defend and respond to hacking attempts. (11) While this technology is still in its infancy, nations have already expressed an interest in adapting the technology for future military use. (12)
This paper takes the position that AI softbots could comply with international law and be used in cyber operations occurring between states, without human intervention, under certain conditions. This proposition is explored in three steps. Section II begins by establishing a common understanding of AI and introduces the key concepts of design, task environment, and transparency. Section III introduces foundational principles associated with cyberspace and explores how basic international legal principles apply to hostile state cyber activities. This section concludes by recognizing that cyber activities between states exist in a "gray zone" due to factual and legal ambiguities associated with cyber operations. (13) Finally, Section IV uses these principles to discuss the challenges of employing AI softbots under international law and how AI softbots can legally be used in a variety of hostile state activities.
This section introduces the reader to the first of the two conceptual constructs that are the subjects of this paper's international legal analysis: AI (the second, cyberspace, is taken up in Section III). Both of these constructs are complicated, multifaceted subjects that could easily extend beyond the present discussion. For the purposes of this legal analysis, a brief treatment of both subjects is presented in order to establish a lingua franca with the reader.
As a scientific discipline, artificial intelligence (AI) is a branch of computer science that "studies the properties of intelligence by synthesizing intelligence." (14) AI, as a discipline, attempts to understand and then replicate intelligent behavior in machines. In this endeavor, AI has benefited immensely from advances in a host of other disciplines, including psychology, linguistics, economics, neuroscience, biology, and engineering, to name a few. (15) The recent boom in AI advances has been the result of increases in computing power, the use of graphics processors capable of running parallel tasks, and the rise of large data sets available to enhance machine learning. (16)
The field of artificial intelligence consists of several subfields that contribute to the larger goal of getting a machine to behave intelligently. Of principal importance to this discussion, machine learning is a "subset of AI that includes abstruse statistical techniques that enable machines to improve at tasks with experience." (17) Within the subfield of machine learning is the even more specific "deep learning" field of study that focuses on techniques loosely modeled after the human brain. (18) Significant advances in deep learning have also contributed to many of the recent improvements in AI. (19)
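The idea that a machine "improves at tasks with experience" can be made concrete with a minimal, hypothetical sketch. The model, data, and learning rate below are illustrative assumptions, not anything drawn from the article: a one-parameter model repeatedly adjusts itself to reduce its prediction error on the examples it has seen.

```python
# Toy illustration of machine learning: a one-parameter model improves at a
# prediction task as it processes more examples (its "experience").
# All names, data, and constants here are illustrative assumptions.

def train(examples, steps=1000, lr=0.01):
    """Learn the weight w for the model y = w * x by gradient descent
    on squared prediction error."""
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y      # how wrong the current model is
            w -= lr * error * x    # nudge w in the direction that reduces error
    return w

# Experience: examples generated by a rule (y = 3x) the learner never sees directly.
data = [(1, 3), (2, 6), (3, 9)]
w = train(data)
print(round(w, 2))  # → 3.0
```

The learner is never told the underlying rule; it recovers the weight purely from repeated exposure to examples, which is the essence of the statistical techniques the quoted definition describes.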
The term "artificial intelligence" was coined in 1956 when it was first used at a Dartmouth conference of scientists and mathematicians. (20) However, the roots of AI, or the concept of a machine "thinking" like a human, go back much further in time, with some scholars tracing AI's history as far back as Greek myths of Hephaestus, the blacksmith god who built mechanized men. (21) That being said, there is currently no universally accepted definition of AI. (22) Generally, AI is thought to be a "computerized system that exhibits behavior that is commonly thought of as requiring intelligence." (23) However, several competing definitions exist, some of which include the requirement of robotics.
One operational definition for AI is found in the well-regarded Turing Test, proposed by Alan Turing in 1950. (24) The Turing Test views the achievements of AI in terms of how "humanly" the computer acts--whether an interrogator is unable to tell if the answers to her questions came from a person or a computer. (25) The Turing Test identified four areas necessary to provide a "satisfactory operational definition of intelligence" for AI, specifically: (1) natural language processing, (2) knowledge representation (storing information), (3) automated reasoning (using stored information to make decisions), and (4) machine learning (ability to adapt). (26) The "Total Turing Test" adds two additional areas to further mimic the capabilities of humans, specifically: (1) computer vision (ability to perceive objects) and (2) robotics (to manipulate objects and move about). (27) The Total Turing Test, however, has been criticized as too narrow a conception, with some scientists pointing out that the goal of aeronautical engineering was never to craft "machines that fly so exactly like pigeons that they can fool even other pigeons." (28)
A competing conception of artificial intelligence is offered by Professor Nils John Nilsson, a founding researcher in the field of AI, who suggests that intelligence lies on a multi-dimensional spectrum. (29) In Professor Nilsson's view, the factors to be considered are "scale, speed, degree of autonomy, and generality." (30) For example, while a simple calculator may exist on this spectrum, it exhibits substantially less autonomy than more advanced AI programs.
Associated with the broad concept of intelligence is "rationality." (31) For something to behave rationally it must have some criterion to assess the consequences of its actions. In the field of AI, this is referred to as a performance measure. (32) A performance measure is an element of the AI's programming that conveys a notion of desirability based on a comparison between the state of the previous environment and the state of the current environment. (33) A rational actor "should select an action that is expected to maximize its performance measure" given its knowledge of the environment, its history of perceptions, and any prior knowledge. (34) Performance measures can vary in complexity and can be used to express simple goals that take the form of achieved vs. not achieved or more complex models that rely on the concept of utility from economics. (35)
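The relationship between a performance measure and rational action can be sketched in a few lines of hypothetical code. Everything below (the environment, the actions, the vulnerability-counting measure) is an invented toy example, not anything described in the article: the agent scores the predicted outcome of each available action with its performance measure and selects the action expected to maximize that score.

```python
# Hypothetical sketch of a rational software agent. The agent compares the
# predicted environment state after each action using a performance measure
# and picks the action that maximizes it. All names and states are invented.

def performance_measure(state):
    """Convey a notion of desirability for an environment state.
    Toy measure: fewer open vulnerabilities is better."""
    return -state["open_vulnerabilities"]

def simulate(state, action):
    """Predict the environment state that would result from an action."""
    new_state = dict(state)
    if action == "patch":
        new_state["open_vulnerabilities"] = max(0, state["open_vulnerabilities"] - 1)
    return new_state

def rational_agent(state, actions):
    """Select the action whose predicted outcome maximizes the measure."""
    return max(actions, key=lambda a: performance_measure(simulate(state, a)))

env = {"open_vulnerabilities": 3}
print(rational_agent(env, ["wait", "patch"]))  # → patch
```

Note that the agent is "rational" only relative to its programmed measure: swapping in a different `performance_measure` changes what counts as the desirable outcome, which is why the design of the measure matters as much as the selection rule itself.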
Apart from the performance measure and the environment, one must also consider how the agent can interact with its...
ARTIFICIAL INTELLIGENCE AND THE FIFTH DOMAIN
Author: Kirk, Aaron D.
COPYRIGHT GALE, Cengage Learning. All rights reserved.