
ARTICLES
“EMBODIED AI” AND THE DIRECT PARTICIPATION
IN HOSTILITIES: A LEGAL ANALYSIS
FRANCIS GRIMAL AND MICHAEL J. POLLARD*
“These laws are sufficiently ambiguous so that I can write story after
story in which something strange happens, in which the robots don’t
behave properly, in which the robots become positively dangerous . . . .”1
ABSTRACT
This Article questions whether, under International Humanitarian Law
(IHL), the concept of a “civilian” should be limited to humans. Prevailing
debate within IHL scholarship has largely focused on the lawfulness (or not) of
the recourse to autonomous weapons systems (AWS). However, the utilization
of embodied artificial intelligence (EAI) in armed conflict has yet to feature
with any degree of prominence within the literature. An EAI is an “intelligent”
robot capable of independent decision-making and action, without any human
supervision. Predominantly, the approach within the existing AWS/AI debate
remains preoccupied with ascertaining whether the military “system” is capable of
determining/distinguishing between civilians and combatants. Furthermore,
the built-in protection mechanisms within IHL are inherently “loaded” in favor
of protecting humans from AWS, rather than vice versa.
IHL makes a clear distinction between civilians and civilian objects.
However, increasingly advanced EAIs will make such a distinction highly
problematic. The novel approach of this Article is twofold: to address the “EAI
lacuna” in the broader sense, and to consider the application of EAI within a
specific area of IHL: “Direct Participation in Hostilities” (DPH). In short, can
a robot “participate”? DPH is firmly grounded in the cardinal principle of
* Francis Grimal is a Reader in Public International Law, University of Buckingham, UK, and
Michael J. Pollard is a PhD Candidate in Public International Law, University of Buckingham, UK.
The authors would like to extend their sincerest thanks to Professor Christopher Waters, Dean of
Law, University of Windsor, Ontario, for all his considerable advice and invaluable feedback
throughout the preparation of this Article. The authors would also like to extend their gratitude
to Alexander Keyser, Rachel Finn, and all at GJIL for their help, input and editorial suggestions,
and also to Thomas Spiegler, Editor-in-Chief of GJIL. © 2020, Francis Grimal
& Michael J. Pollard.
1. Prolific science-fiction writer Isaac Asimov discussed his much-referenced three laws of
robotics at Rise of the Robots: More Human than Human, BBC RADIO 4 (Feb. 7, 2017),
https://www.bbc.co.uk/sounds/play/b08dnr3r.
distinction and in proportionality assessments, which together afford protection to the
civilian population during hostilities. Fundamentally, this Article challenges
the International Committee of the Red Cross’s (ICRC) influential guidance on
DPH. The authors controversially submit that by continuing to follow that
guidance, civilian objects will, under some circumstances, be afforded greater
protection than human combatants.
To highlight this deficiency, the authors challenge the ICRC’s assertion that
civilian status must be presumed where there is doubt, and instead subscribe to
the prevailing alternative interpretation that DPH assessments need to be made
on a case-by-case basis. To address the deficiency, the authors propose the novel
inclusion of a “Turing-like test” within the DPH assessment.
A concrete example of EAI is that of a robot medic. The robot medic’s
Hippocratic duty is to protect its patient’s life. In doing so (and given a suitable
set of circumstances), the robot medic may wish to return fire against an
attacker (here, the authors envisage a scenario during urbanized warfare).
Would such an action constitute DPH, and what would the legal parameters
look like in practice? Consequently, how would the attacker compute collateral
damage in light of neutralizing the potentially “DPHing” robot? Implicit
within such a discussion is the removal of emotional attachments that, for
many, are innate in DPH assessments. Indeed, does the ICRC’s tripartite test
for “DPHing” contain understandable bias in favor of humanitarian
considerations?
I. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
II. CIVILIAN PARTICIPATION IN ARMED CONFLICT . . . . . . . . . . . . . . . 523
A. Distinguishing the Civilian Population: How Does DPH Fit
into IHL? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
B. Is the ICRC Interpretive Guidance a Suitable Mechanism for
Establishing DPH?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
1. The First Cumulative Requirement: A Threshold of
Harm Likely to Result from the Act. . . . . . . . . . . . 529
2. The Second Cumulative Requirement: A
Relationship of Direct Causation Between the Act
and the Expected Harm . . . . . . . . . . . . . . . . . . . . 531
3. The Third Cumulative Requirement: A Belligerent
Nexus Between the Act and the Hostilities
Conducted Between the Parties to an Armed
Conflict . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
III. APPLYING THE TESTS TO EAIS: CAN ROBOTS PLAY A DIRECT PART
IN HOSTILITIES?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
A. The Requirement for an Additional Test . . . . . . . . . . . . . . . 537
B. Existing AI Tech . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
C. Near-Term Future Tech: Driverless Vehicle Technology . . . . . 540
D. Mid-Term Future Tech: Advanced Life Support Systems . . . . 543
E. Long-Term Future Tech: Advanced Personal Assistants . . . . 548
IV. THE WIDER CONSEQUENCES OF RECOGNIZING EAI PARTICIPATION
IN ARMED CONFLICT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
A. Robot PMCs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
B. Robot Spies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
C. Perfidy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
D. Levée en Masse: Lawful Combatancy and POW Status. . . . 559
V. CONCLUSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
I. INTRODUCTION
The recourse to embodied artificial intelligence (EAI), and its lawfulness
(or not), remains an area in need of closer forensic analysis.2 While there is
increasing literature surrounding the use of artificially intelligent robots in
armed conflict, its focus centers on whether machines will be capable of
identifying human participation.3 The present authors
2. EAIs have been introduced into contemporary literature surrounding the lawfulness of
military-owned and operated Autonomous Weapons Systems (AWS). However, such discussions
repeatedly fail to extend the analysis to consider how international legal principles might be
affected by the introduction of civilian EAIs. See, e.g., Bonnie Docherty et al., Heed the Call: A Moral
and Legal Imperative to Ban Killer Robots, HUMAN RIGHTS WATCH (2018); Heather M. Roff & David
Danks, “Trust but Verify”: The Difficulty of Trusting Autonomous Weapons Systems, 17 J. MIL. ETHICS 2
(2018); NEHAL BHUTA ET AL., AUTONOMOUS WEAPONS SYSTEMS: LAW, ETHICS, POLICY (2016);
ARMIN KRISHNAN, KILLER ROBOTS: LEGALITY AND ETHICALITY OF AUTONOMOUS WEAPONS (2016).
For a lighter but in-depth investigation into AWS, see PAUL SCHARRE, ARMY OF NONE:
AUTONOMOUS WEAPONS AND THE FUTURE OF WAR (2018). In a recent recorded debate, one leading
expert on AWS even refers to the fact that AWS are essentially EAIs, but nevertheless refrains
from expanding the discussion further. For Peter Asaro’s discussion, see Ariel Conn, Podcast: Six
Experts Explain the Killer Robots Debate, FUTURE OF LIFE INSTITUTE (July 31, 2018),
https://futureoflife.org/2018/07/31/podcast-six-experts-explain-the-killer-robots-debate/. The term
EAI has nevertheless been in use for a number of years in the general discussion surrounding AI.
See generally Hubert L. Dreyfus, Why Computers Must Have Bodies in Order to Be Intelligent, 21 REV.
METAPHYSICS 13 (1967). In contrast, Kenneth Payne flips the conversation on its head in order to
distinguish (non-embodied) AI. He notes “AI is not an embodied and intensely social animal, and
does not have biologically and environmentally evolved emotions and motivations.” Kenneth
Payne, Artificial Intelligence: A Revolution in Strategic Affairs?, 60 SURVIVAL: GLOBAL POL. & STRATEGY
7, 27 (2018).
3. See, e.g., Docherty, supra note 2, which repeats a number of the arguments raised in the first
Human Rights Watch Report, Bonnie Docherty et al., Losing Humanity: The Case against Killer
Robots, HUMAN RIGHTS WATCH (2012). The 2012 report was largely responsible for bringing “killer
robots” to the attention of the wider public, and in it, the authors question whether a machine
would ever be able to recognize the difference between a lawfully targetable combatant and a
child armed with only a toy gun. For a discussion in opposition to this, which forwards, for