ARTICLES
AI IN THE COURTROOM: A COMPARATIVE ANALYSIS
OF MACHINE EVIDENCE IN CRIMINAL TRIALS
SABINE GLESS*
ABSTRACT
As artificial intelligence (AI) has become more commonplace, the monitoring of human behavior by machines and software bots has created so-called machine evidence. This new type of evidence poses procedural challenges in criminal justice systems across the world because those systems have traditionally been tailored to human testimony. This Article focuses on information proffered as evidence in criminal trials that has been generated by AI-driven systems which observe and evaluate the behavior of human users to predict future behavior in an attempt to enhance safety.
A poignant example of this type of evidence stemming from data generated by a consumer product is automated driving, where driving assistants, as safety features, observe and evaluate a driver’s ability to retake control of
a vehicle where necessary. In Europe, for instance, new intelligent devices,
including drowsiness detection and distraction warning systems, will
become mandatory in new cars beginning in 2022. In the event that
human-machine interactions cause harm (e.g., an accident involving an
automated vehicle), there is likely to be a plethora of machine evidence, or
data generated by AI-driven systems, potentially available for use in a crimi-
nal trial.
It is not yet clear if and how this data can be used as evidence in criminal fact-finding, and adversarial and inquisitorial systems approach
this issue very differently. Adversarial proceedings have the advantage of partisan vetting, which gives both sides the opportunity to challenge consumer products offered as witnesses. By contrast, inquisitorial systems have specific mechanisms in place for introducing expert evidence recorded outside the courtroom to establish facts, mechanisms that will be necessary to test AI thoroughly.

* Sabine Gless is a Professor of Criminal Law and Procedure at the University of Basel School of Law (Switzerland), where she holds a Chair in Criminal Law and Criminal Procedure. She may be reached at Sabine.Gless@unibas.ch. This Article is the result of her participation in NYU’s Hauser Global Scholarship Program during the Spring of 2019. Special thanks go to all NYU School of Law faculty and staff for their support and to the Swiss National Research Foundation for their ongoing support and funding, in particular the NFP75 Big Data grant. Furthermore, the author wishes to thank Gráinne de Búrca, Sara Beale, Eric Hilgendorf, Suzanne Kim, Erin Murphy, Richard Myers, Catherine Sharkey, Kathrin Strandburg, Thomas Weigend, Sarah Wood, and the participants of the Emile Noëlle & Hauser Workshop in March 2019, as well as the participants of the Data, Technology and Criminal Law Workshop at Duke Law School in April 2019, for their comments. © 2020, Sabine Gless.
Using the German and the U.S. federal systems as examples, this Article highlights the challenges posed by machine evidence in criminal proceedings. The primary area of comparison is the maintenance of trust in fact-finding as the law evolves to accommodate the use of machine evidence. This comparative perspective illustrates the enigma of AI in the courtroom and foreshadows what will become inevitable problems in the not-too-distant future. The Article concludes that, at present, criminal justice systems are not sufficiently equipped to deal with the novel and varied types of information generated by AI embedded in consumer products. It is suggested that we merge the adversarial system’s tools for bipartisan vetting of evidence with the inquisitorial system’s inclusion of out-of-court statements under specific conditions to establish adequate means of testing machine evidence.
I. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
A. Legally Defining AI . . . . . . . . . . . . . . . . . . . . . . . . . 200
B. Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
II. MACHINE EVIDENCE GENERATED BY AI . . . . . . . . . . . . . . . . . . . . 202
A. Automated Driving . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
B. Substantive Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
C. AI and the Evidentiary Cycle in Criminal Trials . . . . . . . . . 207
1. From Silent Witnesses to Digital Analytical Tools . 208
2. Digital Layers and Trustworthy Fact-Finding . . . . . 211
a. The Black Box Problem and Expert Evidence . . . . . 211
b. AI-Generated Machine Evidence. . . . . . . . . . . . . . 212
c. Consumer Products Generating Machine Evidence. 213
d. Meeting the Evidentiary Challenge . . . . . . . . . . . . 214
3. The Evidentiary Cycle and Consumer Products
Assisting Law Enforcement . . . . . . . . . . . . . . . . . . 215
III. A COMPARATIVE PERSPECTIVE OF AI IN THE COURTROOM . . . . . . . 218
A. In Pursuit of the Truth . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
B. AI in Adversarial and Inquisitorial Courtrooms. . . . . . . . . . 219
1. Machine Evidence in Modern Day Courtrooms . . 219
2. Testing Machine Evidence for Relevance and
Reliability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
3. Use of Written Reports to Introduce Machine
Evidence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
a. Germany . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
b. United States . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
GEORGETOWN JOURNAL OF INTERNATIONAL LAW
196 [Vol. 51
4. Machine Evidence and the Need for
Contextualization and Confrontation . . . . . . . . . . 232
a. Germany . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
b. United States . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
5. Machine Evidence’s Need for Translation Through
Expert Testimony. . . . . . . . . . . . . . . . . . . . . . . . . . 239
a. Germany . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
b. United States . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
6. New Mechanisms of Contextualization and
Credibility Testing . . . . . . . . . . . . . . . . . . . . . . . . . 247
IV. CONCLUSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
I. INTRODUCTION
Automated systems capable of handling a particular task, like driving a car, are currently defined as narrow Artificial Intelligence (AI). This should be distinguished from general AI, which possesses human-like cognitive abilities and an experiential understanding of its environments, coupled with the ability to process larger quantities of information at much greater speeds than the human mind.¹ This Article focuses on AI-driven systems that observe and evaluate the behavior of human users in order to predict future behavior, such as safety-enhancing driving systems that react automatically and autonomously to the actions and reactions of human drivers, i.e., external information. The potential to use data generated by such AI technology in courtrooms poses novel challenges to both substantive criminal law and criminal procedure.²

AI has the capacity to observe and assess humans’ fitness to contribute to a wide range of cooperative actions. Will this result in another digital evidence-gathering tool? Is such data sufficiently reliable to be used in criminal proceedings? Could such observations amount to a type of “machine testimony”³ in the event of an accident? To address these questions, one must first acknowledge that robots and software bots—i.e., standalone machines or programs that interact with

1. For a detailed discussion of definitional problems around AI, see Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 HARV. J.L. & TECH. 353, 358–69 (2016).
2. See generally Mireille Hildebrandt, Ambient Intelligence, Criminal Liability and Democracy, 2 CRIM. L. & PHIL. 163 (2008) (discussing the impact of ambient intelligence on criminal law).
3. Andrea Roth, Machine Testimony, 126 YALE L.J. 1972, 1979 (2017).