MACHINE LEARNING WEAPONS AND
INTERNATIONAL HUMANITARIAN LAW:
RETHINKING MEANINGFUL HUMAN CONTROL
SHIN-SHIN HUA*
ABSTRACT
AI’s revolutionizing of warfare has been compared to the advent of the nuclear bomb. Machine learning technology, in particular, is paving the way for future automation of life-or-death decisions in armed conflict.
But because these systems are constantly “learning,” it is difficult to predict what they will do or understand why they do it. Many therefore argue that they should be prohibited under international humanitarian law (IHL) because they cannot be subject to meaningful human control.
But in a machine learning paradigm, human control may become unnecessary or even detrimental to IHL compliance. In order to leverage the potential of this technology to minimize casualties in conflict, an unthinking adherence to the principle of “the more control, the better” should be abandoned.
Instead, this Article seeks to define prophylactic measures that ensure machine learning weapons can comply with IHL rules. Further, it explains how the unique capabilities of machine learning weapons can facilitate a more robust application of the fundamental IHL principle of military necessity.
I. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
II. OVERVIEW OF THE TECHNOLOGY . . . . . . . . . . . . . . . . . . . . . . . . 121
A. Defining Autonomous Weapons Systems . . . . . . . . . . . . . . . 121
1. Human-Machine Interactions . . . . . . . . . . . . . . . . 122
2. The Task Performed . . . . . . . . . . . . . . . . . . . . . . . 122
B. Machine Learning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
1. Deep Learning. . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
2. Reinforcement Learning . . . . . . . . . . . . . . . . . . . . 125
3. Legally Relevant Attributes of Machine Learning Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
* Advanced Master of Laws (Leiden University, Netherlands), JD-equivalent (University of Cambridge); Research Affiliate, Centre for the Study of Existential Risk, University of Cambridge; Attorney, BT Group; former Attorney at Cleary Gottlieb Steen & Hamilton. With thanks to Horst Fischer, Professor of International Humanitarian Law at Leiden University and Adjunct Professor of International and Public Affairs at Columbia University, and Jacob Turner, barrister at Fountain Court Chambers in London, for their valuable guidance and feedback. All opinions are those of the author and do not reflect the views of any organization. © 2020, Shin-Shin Hua.
117
C. Current Military Uses of Machine Learning Systems. . . . . . . 126
III. ARE LEARNING AUTONOMOUS WEAPONS SYSTEMS “UNLAWFULLY AUTONOMOUS”? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
A. Unpredictability and IHL. . . . . . . . . . . . . . . . . . . . . . . . . 127
B. Precautionary Obligations Under IHL . . . . . . . . . . . . . . . . 128
C. Autonomous Weapons Systems and the Duty of Constant Care 129
D. Meaningful Human Control and the Duty of Constant Care. 131
1. The “Meaningful Human Control” (MHC) Doctrine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
2. The Requirement of Ex Ante Human Approval . . . 132
3. Learning Autonomous Weapons Systems May Comply Better With IHL Without Ex Ante Human Approval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
a. Big Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
b. Inscrutability . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
c. Margulies’s Dynamic Diligence Theory . . . . . . . . . 136
d. Schuller’s Reasonable Predictability Theory . . . . . . 139
e. Beyond Meaningful Human Control . . . . . . . . . . 140
f. Beyond Reasonable Predictability . . . . . . . . . . . . . 142
i. “Optimal Predictability” and Necessity . . . . 143
ii. “Optimal Predictability” and Feasibility . . . 143
IV. CONCLUSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
I. INTRODUCTION
Machine learning is the buzzword of our age. Instead of relying on pre-programming, these systems can “learn” how to do a task through training, use, and user feedback.¹ Having revolutionized fields from medicine to finance, machine learning is propelling a new artificial intelligence (AI) arms race among the world’s major military powers to deploy these technologies in warfare.² Indeed, the rise of military AI has been compared to the advent of the nuclear bomb.³
1. STEPHAN DE SPIEGELEIRE, MATTHIJS MAAS & TIM SWEIJS, ARTIFICIAL INTELLIGENCE AND THE FUTURE OF DEFENSE: STRATEGIC IMPLICATIONS FOR SMALL- AND MEDIUM-SIZED FORCE PROVIDERS 35–39 (2017).
2. America v China: The Battle for Digital Supremacy, THE ECONOMIST (Mar. 15, 2018), https://www.economist.com/leaders/2018/03/15/the-battle-for-digital-supremacy; Karla Lant, China, Russia and the US Are in an Artificial Intelligence Arms Race, FUTURISM (Sept. 12, 2017), https://futurism.com/china-russia-and-the-us-are-in-an-artificial-intelligence-arms-race.
3. Tom Simonite, AI Could Revolutionize War as Much as Nukes, WIRED (July 19, 2017), https://www.wired.com/story/ai-could-revolutionize-war-as-much-as-nukes/.
GEORGETOWN JOURNAL OF INTERNATIONAL LAW
118 [Vol. 51

To continue reading

Request your trial

VLEX uses login cookies to provide you with a better browsing experience. If you click on 'Accept' or continue browsing this site we consider that you accept our cookie policy. ACCEPT