ABSTRACT. Machines play increasingly crucial roles in establishing facts in legal disputes. Some machines convey information--the images of cameras, the measurements of thermometers, the opinions of expert systems. When a litigant offers a human assertion for its truth, the law subjects it to testimonial safeguards--such as impeachment and the hearsay rule--to give juries the context necessary to assess the source's credibility. But the law on machine conveyance is confused: courts shoehorn them into existing rules by treating them as "hearsay," as "real evidence," or as "methods" underlying human expert opinions. These attempts have not been wholly unsuccessful, but they are intellectually incoherent and fail to fully empower juries to assess machine credibility. This Article seeks to resolve this confusion and offer a coherent framework for conceptualizing and regulating machine evidence. First, it explains that some machine evidence, like human testimony, depends on the credibility of a source. Just as so-called "hearsay dangers" lurk in human assertions, "black box dangers"--human and machine errors causing a machine to be false by design, inarticulate, or analytically unsound--potentially lurk in machine conveyances. Second, it offers a taxonomy of machine evidence, explaining which types implicate credibility and how courts have attempted to regulate them through existing law. Third, it offers a new vision of testimonial safeguards for machines. It explores credibility testing in the form of front-end design, input, and operation protocols; pretrial disclosure and access rules; authentication and reliability rules; impeachment and courtroom testing mechanisms; jury instructions; and corroboration rules. And it explains why machine sources can be "witnesses" under the Sixth Amendment, refocusing the right of confrontation on meaningful impeachment. 
The Article concludes by suggesting how the decoupling of credibility testing from the prevailing courtroom-centered hearsay model could benefit the law of testimony more broadly.
ARTICLE CONTENTS

INTRODUCTION
I. A FRAMEWORK FOR IDENTIFYING CREDIBILITY-DEPENDENT MACHINE EVIDENCE
   A. Machines as Sources Potentially in Need of Credibility Testing
   B. Black Box Dangers: Causes of Inferential Error from Machine Sources
      1. Human and Machine Causes of Falsehood by Design
      2. Human and Machine Causes of Inarticulateness
      3. Human and Machine Causes of Analytical Error
II. A TAXONOMY OF MACHINE EVIDENCE
   A. Machine Evidence Not Dependent on Credibility
      1. Machines as Conduits for the Assertions of Others
      2. Machines as Tools
      3. Machine Conveyances Offered for a Purpose Other Than Proving the Truth of the Matter Conveyed
   B. Machine Evidence Dependent on Credibility
      1. "Silent Witnesses" Conveying Images
      2. Basic Scientific Instruments
      3. Computerized Business Records
      4. Litigation-Related Gadgetry and Software
      5. Other Complex Algorithms, Robots, and Advanced Artificial Intelligence
III. TESTIMONIAL SAFEGUARDS FOR MACHINES
   A. Machine Credibility Testing
      1. Front-End Design, Input, and Operation Protocols
      2. Pretrial Disclosure and Access
      3. Authentication and Reliability Requirements for Admissibility
      4. Impeachment and Live Testimony
      5. Jury Instructions and Corroboration Requirements
   B. Machine Confrontation
      1. Machines as "Witnesses Against" a Criminal Defendant
      2. Rediscovering the Right of Meaningful Impeachment
CONCLUSION

INTRODUCTION
In 2003, Paciano Lizarraga-Tirado was arrested and charged with illegally reentering the United States after having been deported. (1) He admitted that he was arrested in a remote area near the United States-Mexico border, but claimed he was arrested in Mexico while awaiting instructions from a smuggler. To prove the arrest occurred in the United States, the prosecution offered the testimony of the arresting officers that they were familiar with the area and believed they were north of the border, in the United States, when they made the arrest. An officer also testified that she used a Global Positioning System (GPS) device to determine their location by satellite, and then inputted the coordinates into Google Earth. Google Earth then placed a digital "tack" on a map, labeled with the coordinates, indicating that the location lay north of the border. (2) Mr. Lizarraga-Tirado insisted that these mechanical accusations were "hearsay," out-of-court assertions offered for their truth, and thus inadmissible. The Ninth Circuit rejected his argument, even while acknowledging that the digital "tack" was a "clear assertion," such that if the tack had been manually placed on the map by a person, it would be "classic hearsay." (3) In the court's view, machine assertions--although raising reliability concerns (4)--are simply the products of mechanical processes and, therefore, akin to physical evidence. As such, they are adequately "addressed by the rules of authentication," requiring the proponent to prove "that the evidence 'is what the proponent claims it is,'" (5) or by "judicial notice," (6) allowing judges to declare the accuracy of certain evidence by fiat.
Mr. Lizarraga-Tirado's case is emblematic of litigants' increasing reliance on information conveyed by machines. (7) While scientific instruments and cameras have been a mainstay in courtrooms for well over a century, the twentieth century witnessed a noteworthy rise in the "'silent testimony' of instruments." (8) By the 1940s, courts had grappled with "scientific gadgets" such as blood tests and the "Drunk-O-Meter," (9) and by the 1960s, the output of commercially used tabulating machines. (10) Courts now routinely admit the conveyances (11) of complex proprietary algorithms, some created specifically for litigation, from infrared breath-alcohol-testing software to expert systems diagnosing illness or interpreting DNA mixtures. Even discussions of the potential for robot witnesses have begun in earnest. (12)
This shift from human- to machine-generated proof has, on the whole, enhanced accuracy and objectivity in fact finding. (13) But as machines extend their reach and expertise, to the point where competing expert systems have reached different "opinions" related to the same scientific evidence, (14) a new sense of urgency surrounds basic questions about what machine conveyances are and what problems they pose for the law of evidence. While a handful of scholars have suggested in passing that "the reports of a mechanical observer" might be assertive claims implicating credibility, (15) legal scholars have not yet explored machine conveyances in depth. (16)
This Article seeks to resolve this doctrinal and conceptual confusion about machine evidence by making three contributions. First, the Article contends that some types of machine evidence merit treatment as credibility-dependent conveyances of information. Accordingly, the Article offers a framework for understanding machine credibility by describing the potential infirmities of machine sources. Just as human sources potentially suffer the so-called "hearsay dangers" of insincerity, ambiguity, memory loss, and misperception, (17) machine sources potentially suffer "black box" dangers (18) that could lead a factfinder to draw the wrong inference from information conveyed by a machine source. A machine does not exhibit a character for dishonesty or suffer from memory loss. But a machine's programming, whether the result of human coding or machine learning, (19) could cause it to utter a falsehood by design. A machine's output could be imprecise or ambiguous because of human error at the programming, input, or operation stage, or because of machine error due to degradation and environmental forces. And human and machine errors at any of these stages could also lead a machine to misanalyze an event. Just as the "hearsay dangers" are believed more likely to arise and remain undetected when the human source is not subject to the oath, physical confrontation, and cross-examination, (20) black box dangers are more likely to arise and remain undetected when a machine utterance is the output of an "inscrutable black box." (21)
Because human design, input, and operation are integral to a machine's credibility, some courts and scholars have reasoned that a human is the true "declarant" (22) of any machine conveyance. (23) But while a designer or operator might be partially epistemically or morally responsible for a machine's statements, the human is not the sole source of the claim. Just as the opinion of a human expert is the result of "distributed cognition" (24) between the expert and her many lay and expert influences, (25) the conveyance of a machine is the result of "distribut[ed] cognition between technology and humans." (26) The machine is influenced by others, but is still a source whose credibility is at issue. Thus, any rule requiring a designer, inputter, or operator to take the stand as a condition of admitting a machine conveyance should be justified based on the inability of jurors, without such testimony, to assess the black box dangers. In some cases, human testimony might be unnecessary or, depending on the machine, insufficient to provide the jury with enough context to draw the right inference. Human experts often act as "mere scrivener[s]" (27) on the witness stand, regurgitating the conveyances of machines. Their testimony might create a veneer of scrutiny when in fact the actual source of the information, the machine, remains largely unscrutinized.
Second, the Article offers a taxonomy of machine evidence that explains which types implicate credibility and explores how courts have attempted to regulate them. Not all machine evidence implicates black box dangers. Some machines are simply conduits for the assertions of others, tools facilitating testing, or...