Individualized Suspicion in the Age of Big Data

Emily Berman*
Associate Professor of Law, University of Houston Law Center
105 Iowa Law Review 463–506 (2020)
ABSTRACT: Imagine that an algorithmic computer model known to be 80
percent accurate predicts that a particular car is likely to be transporting
drugs. Does that prediction provide law enforcement probable cause to search
the car? Courts have consistently regarded such evidence of statistical
likelihood, when generated by humans, as insufficiently individualized to
satisfy even the most permissive legal standards—a position that has generated
decades of debate among commentators. The proliferation of artificial-
intelligence-generated predictions—predictions that will be more accurate
than humans’ and therefore more tempting to employ—requires us to revisit
this debate over use of probabilistic evidence with renewed urgency, and to
consider its implications for the use of predictive algorithms. This Article
argues that reliance on probabilistic evidence to establish the individualized
suspicion required by the Fourth Amendment, regardless of that evidence’s
statistical accuracy—i.e., how likely it is that the predictions of criminal
activity are correct—disregards fundamental interests that individualized
suspicion is meant to protect, namely respect for human dignity, preservation
of individual autonomy, and guarantees of procedural justice. So while
accuracy is a necessary element of individualized suspicion findings, this
Article contends that no level of statistical likelihood is sufficient. Further, it
argues that careful consideration of these issues has become critically
important in today’s big data world, because the shortcomings that “analog”
probabilistic evidence presents are even more pronounced in the context of
predictive algorithms.
I.INTRODUCTION ............................................................................. 464
II.INDIVIDUALIZED SUSPICION IN THEORY ........................................ 471
A.THE INDIVIDUALIZED SUSPICION REQUIREMENT ........................ 471
* Associate Professor of Law, University of Houston Law Center. The author would like to
thank Dave Fagundes, Aziz Huq, James Nelson, and D. Theodore Rave, as well as participants in
the University of Houston Law Center’s faculty workshop, the Chapman University School of Law
Junior Faculty Works-in-Progress Conference, and the Boston College Law School Junior Faculty
Roundtable for helpful comments.
   B. INDIVIDUALIZED SUSPICION’S INCOHERENCE ............................. 474
III. INDIVIDUALIZED SUSPICION AND PREDICTIVE ACCURACY: WHY
PREDICTIVE ACCURACY IS NECESSARY BUT NOT SUFFICIENT ........ 478
   A. THE PURPOSE OF THE INDIVIDUALIZED SUSPICION
      REQUIREMENT ........................................................................ 478
   B. PREDICTIVE ACCURACY AS A PROXY FOR INDIVIDUALIZED
      SUSPICION ............................................................................... 482
   C. THE ILLUSORY CONSTRAINING POWER OF NUMERICAL
      THRESHOLDS .......................................................................... 485
   D. THE CRUCIAL ROLE OF NON-PROBABILISTIC EVIDENCE ............. 487
IV. INDIVIDUALIZED SUSPICION AND ALGORITHMIC
DECISION-MAKING ........................................................................ 495
   A. ALGORITHMIC DECISION-MAKING ............................................ 496
   B. ALGORITHMS & INDIVIDUALIZED SUSPICION ............................ 500
V. CONCLUSION ................................................................................ 505
I. INTRODUCTION
Imagine that a law enforcement officer stops a vehicle because it rolled
through a stoplight. When the officer runs the license plate through the
computer in her squad car, it informs her that an algorithmic computer
model predicts that the car is likely to be transporting illicit drugs. The
predictive model is known to be accurate 80 percent of the time. Based on
this information, the officer looks in the car’s trunk—a search for which the
Fourth Amendment requires probable cause. The probable cause standard is
met when, based on “the factual and practical considerations of everyday life
on which reasonable and prudent men . . . act,” there is “a ‘substantial basis
for . . . conclud[ing]’ that a search would uncover evidence of wrongdoing.”1
The search reveals the predicted drugs. Has the officer violated the Fourth
Amendment, or does the highly accurate computer model’s prediction satisfy
the probable cause requirement?
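As an illustrative aside (the figures and the reading of "accurate" below are the editor's assumptions, not the Article's), the force of "accurate 80 percent of the time" depends on what the figure measures. If it means the model classifies 80 percent of all cars correctly (equal sensitivity and specificity of 0.8), then the probability that a flagged car actually carries drugs turns on the base rate of drug transport among stopped cars:

```python
def p_drugs_given_flag(base_rate, sensitivity=0.8, specificity=0.8):
    """Bayes' rule: probability a flagged car carries drugs,
    given the base rate of drug-carrying cars and the model's
    true-positive and true-negative rates (illustrative values)."""
    true_pos = sensitivity * base_rate                  # flagged and guilty
    false_pos = (1 - specificity) * (1 - base_rate)     # flagged but innocent
    return true_pos / (true_pos + false_pos)

# With a 1-in-100 base rate, a flag implies under a 4% chance of drugs;
# with a 1-in-2 base rate, the same flag implies an 80% chance.
print(round(p_drugs_given_flag(0.01), 3))  # 0.039
print(round(p_drugs_given_flag(0.50), 3))  # 0.8
```

On the alternative reading, that 80 percent of the model's *flags* are correct, the flag itself would convey an 80 percent probability regardless of base rate.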
This hypothetical presents a modern twist on an old debate about the
role of statistical evidence in the legal system, which asks whether and how the
law should treat purely probabilistic evidence.2 Some scholars have long
1. Illinois v. Gates, 462 U.S. 213, 231, 236 (1983) (alteration in original) (quoting Jones
v. United States, 362 U.S. 257, 271 (1960)). Probable cause requires “only the probability, and
not a prima facie showing, of criminal activity.” Id. at 235 (quoting Spinelli v. United States, 393
U.S. 410, 419 (1969)).
2. By probabilistic or statistical evidence, I refer to evidence that provides a statistical
likelihood that some fact is true. The literature on questions related to statistical evidence is
extensive, addresses both criminal-law and civil-law topics, and spans five decades. See generally,
argued that such evidence should be more widely employed.3 Courts,
however, have consistently characterized such information as too
“generalized,” insisting that “case-specific” or “individualized” evidence is
required to satisfy even the most permissive legal standards,4 notwithstanding
the fact that exactly what it means for evidence to be “individualized,” as
opposed to generalized, has proven exceedingly difficult to articulate.5
This decades-old debate intersects with the contemporary discussion
regarding the role of artificial intelligence in the legal arena, and, in
particular, the question of whether and when it is appropriate to entrust legal
decision-making to algorithms and computer models. Because such models
have been implemented in numerous decision-making contexts already,6 this
e.g., L. JONATHAN COHEN, THE PROBABLE AND THE PROVABLE (1977) (discussing the use of
probability in the judicial process); FREDERICK SCHAUER, PROFILES, PROBABILITIES, AND
STEREOTYPES (2003) (arguing that statistical evidence should be considered in the legal system);
David Kaye, The Paradox of the Gatecrasher and Other Stories, 1979 ARIZ. ST. L.J. 101 (discussing the
value of probability estimates); Charles Nesson, The Evidence or the Event? On Judicial Proof and the
Acceptability of Verdicts, 98 HARV. L. REV. 1357 (1985) (exploring how probability affects public
acceptance of jury verdicts); Charles R. Nesson, Reasonable Doubt and Permissive Inferences: The Value
of Complexity, 92 HARV. L. REV. 1187 (1979) [hereinafter Nesson, Reasonable Doubt] (discussing
the ability of statistical evidence to assist in quantifying reasonable doubt); Michael S. Pardo, The
Paradoxes of Legal Proof: A Critical Guide, 99 B.U. L. REV. 233 (2019) (challenging the view that
standards of proof represent probabilistic thresholds and exploring the implications of that
argument for civil litigation and criminal procedure); Mike Redmayne, Exploring the Proof
Paradoxes, 14 LEGAL THEORY 281 (2008) (discussing the paradoxes that emerge if standards of
proof are conceived of merely in terms of probability); Laurence H. Tribe, Trial by Mathematics:
Precision and Ritual in the Legal Process, 84 HARV. L. REV. 1329 (1971) (identifying dangers of using
mathematical models in the legal process).
3. See generally, e.g., SCHAUER, supra note 2 (arguing for using statistically sound
generalizations); Ronald J. Bacigal, Making the Right Gamble: The Odds on Probable Cause, 74 MISS. L.J.
279, 295–304 (2004) (arguing that statistical evidence is just as valid as any other form of evidence).
4. See Jane Bambauer, Hassle, 113 MICH. L. REV. 461, 462 (2015) (conceding that a judge
would reject a warrant application to search a home based on a statistical study indicating that 60
percent of the homes in that neighborhood have illicit drugs in them).
5. For example, is a reliable study showing that 60 percent of State College dorm rooms
contain drugs individualized, because it specifically applies only to State College dorm rooms, or
is it generalized, because it refers to all of the dorm rooms on the State College campus? See id.
at 462–63 (positing a version of this hypothetical); see also infra Section III.B for a full discussion
of the difficulties of distinguishing between generalized and individualized evidence.
6. Models currently decide whether arrestees should be granted bail, see Lauryn P.
Gouldin, Disentangling Flight Risk from Dangerousness, 2016 BYU L. REV. 837, 867–71; determine
who is eligible for a loan, see Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process
for Automated Predictions, 89 WASH. L. REV. 1, 18–30 (2014); identify credit card fraud, see Steven
Melendez, Algorithms Honed on Stock Trades Are Fighting Credit Card Fraud, FAST COMPANY (Sept. 14,
2017), https://www.fastcompany.com/40465736/how-machine-learning-is-helping-cut-credit-
card-fraud-cashshield [https://perma.cc/7V2Z-Y3PC]; suggest products to consumers, see
Shauna Mei, A.I. Can Help Us Make Quicker, Better Consumer Choices, N.Y. TIMES (Dec. 5, 2016, 3:21
AM), https://www.nytimes.com/roomfordebate/2016/12/05/is-artificial-intelligence-taking-
over-our-lives/ai-can-help-us-make-quicker-better-consumer-choices [https://perma.cc/Q8T9-
87FA]; make medical diagnoses, see Cade Metz, A.I. Shows Promise Assisting Physicians, N.Y. TIMES
(Feb. 11, 2019), https://www.nytimes.com/2019/02/11/health/artificial-intelligence-medical-
