Introduction
I. Background on Predictive Policing
   A. A Short History of Predictive Policing
   B. Critiques of Predictive Policing and "Actuarial Justice"
      i. Racial Biases
      ii. Unchecked Error: Data, Social Science, and Cognitive Biases
         1. Data Entry Errors
         2. Flawed Social Science
         3. Cognitive Biases
      iii. Fourth Amendment Concerns
II. Patternizr: The NYPD's Newest Predictive Tool
   A. Background on Patternizr
      i. Examples of How Patternizr Works
      ii. Patternizr's Design
   B. Is Patternizr "Fair"?
      i. Patternizr and Racial Bias
      ii. Patternizr and the Potential for Unchecked Errors
      iii. Patternizr and Fourth Amendment Issues
III. Recommendations for Advocates and Policymakers
   A. Considerations for Banning the Use of Predictive Policing Tools Such as Patternizr
   B. Regulating the Use of Patternizr to Minimize Harm
      i. Ensure Democratic Accountability and Transparency
      ii. Require Disclosure of Predictive Policing Tools in Criminal Cases
      iii. Implement Training and Procedures to Reduce the Impact of Cognitive Biases
      iv. Acknowledge and Address Racial Disparities in Underlying Crime Data
Conclusion

INTRODUCTION
In December 2016, the New York City Police Department (NYPD) began using a new predictive policing tool called "Patternizr" to assist investigators in recognizing potential crime patterns. (1) The algorithm, built on past crime data, is currently used to spot patterns of robberies, burglaries, and grand larcenies that may have been committed by the same person or group of people. (2) The NYPD shared news of this development in February 2019 with the publication of an academic article by Patternizr's developers, Alex Chohlas-Wood, former Director of Analytics at the NYPD, and Evan Levine, Assistant Commissioner of Data Analytics at the NYPD. (3) While acknowledging the "growing concern that predictive policing tools may perpetuate disparate impact," Chohlas-Wood and Levine explain how they designed Patternizr to minimize bias. (4) They claim to have accomplished this goal by blinding the models to "sensitive suspect information" including race and gender, as well as by "keep[ing] potential proxy variables for sensitive information--particularly location--extremely coarse" in order to avoid correlation of crime patterns with "sensitive attributes." (5) They assert that "Patternizr is a new, effective, and fair recommendation engine... [that] when used properly, encourage[s] precision policing approaches instead of widespread, heavy-handed enforcement techniques." (6) This Article considers whether the developers' goal of building a bias-free predictive policing tool is actually achievable given the limitations of its inputs--racially-biased historic criminal justice data--and of its users--humans prone to errors and cognitive biases.
This Article further considers the problems that may arise from the NYPD's use of Patternizr and evaluates whether it is the "fair," unbiased tool that the NYPD and Patternizr's developers claim it to be, based on the information disclosed in the Chohlas-Wood and Levine paper. It identifies specific areas where more information and independent review are needed to fully interrogate this claim.
To evaluate Patternizr, Part I reviews the extensive literature on the use of algorithms in the criminal legal system and then draws from these insights to evaluate the potential issues Patternizr raises. This Part also provides a brief background on predictive policing, tracking its evolution from computer-generated "heat maps" to increasingly sophisticated predictive models. Following this background, Part I provides an overview of the racial justice and civil liberties issues raised by predictive policing in general.
Part II focuses on Patternizr, first providing background on its development and the capabilities of the software. Part II then considers whether and how Patternizr could be used in ways that run afoul of the rights of those accused of crimes, specifically looking at issues of potential for error and due process concerns, racial bias, and Fourth Amendment rights.
Part III provides recommendations for advocates to help curb the potential harms from this new predictive policing tool. It considers potential policy solutions, ranging from an outright ban of predictive policing algorithms to regulations that would increase transparency and accountability in the use of predictive policing. Further, Part III recommends methods for criminal defense attorneys to seek disclosure of the use of Patternizr in criminal cases under New York's new discovery statute, set to go into effect in January 2020.
This Article does not focus on Patternizr's potential efficacy in reducing crime or in identifying individuals suspected of committing crimes. As such, traditional crime-solving efficacy measures are not used to evaluate the algorithm. Instead, this Article focuses on how the NYPD's use of Patternizr raises serious civil rights and civil liberties issues for those accused of crimes, and on how tools such as Patternizr contribute to racially-biased mass incarceration and mass surveillance of New York's communities of color.
BACKGROUND ON PREDICTIVE POLICING
Predictive policing is an umbrella term that encompasses "the application of analytical techniques... to identify likely targets for police intervention and prevent crime or solve past crime" by making statistical predictions. (7) It is "based on directed, information-based patrol; rapid response supported by fact-based prepositioning of assets; and proactive, intelligence-based tactics, strategy, and policy." (8) Its proponents argue that predictive policing can revolutionize policing, help cash-strapped departments do more with less, and drastically increase public safety. (9) Its critics, including academics and leading criminal justice reform advocacy groups, caution that "[p]redictive policing tools threaten to provide a misleading and undeserved imprimatur of impartiality for an institution that desperately needs fundamental change." (10) The following Section traces a brief history of predictive policing followed by racial justice and civil liberties concerns raised by the use of algorithms in the criminal legal system.
A Short History of Predictive Policing
Professor Andrew G. Ferguson divides predictive policing technology into three distinct generations. (11) First, police departments developed algorithms to predict the locations of property crimes. (12) Second, this evolved into a focus on predicting the locations of violent crimes, including robberies, shootings, and gang-related violence. (13) The most recent evolution Ferguson identifies is a shift to predictive policing tools that forecast which specific individuals are likely to be involved in crimes, either as perpetrators or victims. (14) Professor Ferguson cautions that each generation of predictive policing tools "may be based on historical data with statistically significant correlations, but the analyses and civil liberties concerns differ." (15) Although his generational model does not include a category into which Patternizr can easily be placed--Patternizr predicts neither locations nor people who may be involved in future crimes--Ferguson's approach of differentiating the various generations of predictive policing tools and evaluating each for the specific concerns it raises is instructive. For example, the concerns raised by place-based policing programs differ from those raised by a new pattern-based tool like Patternizr. It is nonetheless important to understand concerns about predictive policing more broadly in order to effectively analyze this new generation of tools.
Ferguson and others note that the NYPD has long sought to increase policing efficiency through the use of data and technology. (16) In 1994, the NYPD developed Compstat--computer comparison statistics--to "compile information on crimes, victims, times of day crimes took place, and other details that enable precinct officials to spot emerging crime patterns." (17) Following New York City's lead, other cities implemented various data-driven systems to better allocate policing resources, often using a form of hotspot policing where analysts plotted crime reports on a map and sent officers to the areas where crime was most concentrated. (18)
Taking hotspot policing to the next level beyond Compstat, the Los Angeles Police Department (LAPD) collaborated with academics to develop an algorithm to predict likely areas where property crimes would occur. (19) Starting in 2010, the LAPD used these predictions to deploy officers to specific areas where crimes were anticipated in the hopes of having a deterrent effect. (20) In an influential article directed at policing insiders, the LAPD Chief of Detectives and collaborating data scientist urged the law enforcement community to adopt lessons from business analytics and touted the success of the LAPD's early experiments with predictive policing. (21)
As advances in predictive policing gained national attention, the academics who developed the algorithm that predicted areas where crime was likely to occur formed PredPol, Inc., a company that sells predictive policing software to law enforcement agencies across the country. (22) The early experiments in using algorithms to predict and deter crime morphed into "a multi-million dollar business, and large-scale marketing campaign to sell predictive policing programs." (23) Other companies, such as Palantir, HunchLabs, and IBM, also sell technologies similar to PredPol's software to help police departments identify crime trends and forecast locations and offenders of future crimes. (24) Now that predictive policing is a profitable industry, developers of new predictive policing technologies may have financial incentives to trumpet claims of efficacy and...