What Works in Crime Prevention?

Published: 01 August 2016
Authors: Abigail A. Fagan, Molly Buchanan
DOI: http://doi.org/10.1111/1745-9133.12228
RESEARCH ARTICLE
CRIME PREVENTION REGISTRIES
What Works in Crime Prevention?
Comparison and Critical Review of Three Crime Prevention
Registries
Abigail A. Fagan
Molly Buchanan
University of Florida
Research Summary
This article compares the criteria and methods applied by three registries designed to
identify what works in crime and delinquency prevention. We discuss and demonstrate
how variation in the methodological rigor of these processes affects the number of
interventions identified as “evidence based” and provide recommendations for future
list-making to help increase the dissemination of effective crime prevention programs
and practices.
Policy Implications
As support for evidence-based crime prevention grows, so too will reliance on what works
registries. We contend that these lists must employ scientifically rigorous review criteria
and systematic review processes to protect public resources and ensure interventions
recommended for dissemination do not risk harming participants. Lists must also be
constructed and findings communicated in ways that are responsive to community
needs. Ensuring this balance will help increase public confidence in scientific methods
and ensure greater diffusion of evidence-based interventions.
Keywords
crime prevention, evidence-based interventions, program evaluation, systematic
reviews, what works
Direct correspondence to Abigail A. Fagan, Department of Sociology and Criminology & Law, University of
Florida, 3219 Turlington Hall, P.O. Box 117330, Gainesville, FL 32611-7330 (e-mail: afagan@ufl.edu).
DOI: 10.1111/1745-9133.12228 © 2016 American Society of Criminology
Criminology & Public Policy, Volume 15, Issue 3
During the past decade, researchers in the field of criminology have made
increasing calls for the use of evidence-based and effective programs, practices,
and policies to improve public health and prevent crime (Clear, 2010; Fagan
and Eisenberg, 2012; Welsh and Farrington, 2012). For example, the National Research
Council and Institute of Medicine has recommended that to prevent mental, emotional, and
behavioral disorders, “federal and state agencies should prioritize the use of evidence-based
programs and promote the rigorous evaluation of prevention and promotion programs”
(O’Connell, Boat, and Warner, 2009: 373). Likewise, a National Academy of Sciences
review of the U.S. juvenile justice systems concluded that “if implemented well, evidence-
based interventions . . . reduce reoffending” and recommended the discontinuation of
“interventions that rigorous evaluation research has shown to be ineffective or harmful”
(National Research Council, 2012: 11).
To facilitate the use of evidence-based interventions (EBIs), public and private agencies
have created lists identifying what works and online databases to provide information about
EBIs to policy makers and practitioners. Probably the most well known of these registries in
criminology and criminal justice are the Office of Justice Programs and National Institute
of Justice’s Crime Solutions database (crimesolutions.gov/) and the Blueprints for Healthy
Youth Development list (blueprintsprograms.com). In addition, the National Registry of
Evidence-based Programs and Practices (NREPP; nrepp.samhsa.gov/Index.aspx), managed by
the Substance Abuse and Mental Health Services Administration (SAMHSA), rates the
effectiveness of programs intended to reduce youth substance use. Importantly, to promote
fiscal accountability of public funds, state and federal financial aid is increasingly being
tied to applicants’ use of interventions that appear on such lists (Burkhardt, Schroter,
Magura, Means, and Coryn, 2015; Mears and Barnes, 2010).
Despite strong support for the use of EBIs, considerable debate remains regarding how,
specifically, to determine the effectiveness of crime prevention programs, practices, and
policies (Blomberg, Mestre, and Mann, 2013). Views regarding the nature and level of
evidence needed to establish effectiveness vary across list-makers, policy makers, scientists,
and practitioners. Reflecting this disparity, registries use different criteria and review
processes to determine what works; as a result, what is considered effective on one list is
often not rated as effective on a second list (Burkhardt et al., 2015; Means, Magura,
Burkhardt, Schroter, and Coryn, 2015; Petrosino, 2003). These discrepancies are confusing
to the public and to policy makers and may undermine public confidence in scientific
standards and recommendations. In addition, some lists do not have particularly high
standards regarding the rigor of the evaluation research needed to designate an intervention
as effective (Petrosino, 2003; Wright, Zhang, and Farabee, 2012); as a result, they may
recommend interventions that are unlikely to produce significant reductions in crime if
disseminated (Mihalic and Elliott, 2015).
The goal of this article is to highlight these inconsistencies and provide recommendations
for reducing them. We compare the standards and processes used by Blueprints,
Crime Solutions, and NREPP to make determinations regarding the effectiveness of crime
prevention programs and practices. Although each registry provides some details regarding
its evaluation procedures, it can be challenging to understand exactly how interventions
are reviewed and to identify similarities and differences in methods and standards used
across the three sites. Our review is intended to provide clarity regarding these processes
and to evaluate the degree to which each site makes valid determinations of intervention
effectiveness. We also compare the three registries in terms of their ability to communicate
their findings clearly to the public. The objective of this part of the review is to consider
each list’s practical value to practitioners and policy makers who are trying to determine
which intervention, if any, to implement to reduce crime. In the final sections of the
article, we provide recommendations for advancing the EBI movement in criminology,
including suggestions for increasing consistency and rigor in standards used by the what
works registries to nominate EBIs for dissemination.
Origins of the What Works Movement in the United States
To provide some context as to why and how lists vary in their criteria for determining
effectiveness, we begin with a short history of the what works movement in the United States
(see also Welsh and Pfeffer, 2013). Recognition of the value in using EBIs to reduce crime
slowly emerged in the 1980s and early 1990s in part as a reaction to the highly publicized
Martinson (1974) report that “nothing works” to prevent offender recidivism. Subsequent
reviews of correctional evaluations (e.g., Gendreau and Ross, 1979; see also Andrews et al.,
1990; Lipsey, 2014), including an additional publication by Martinson (1979), countered
that many correctional interventions did reduce recidivism and emphasized that it was
premature to dismiss correctional treatment as ineffective. However, these publications also
acknowledged significant variation in effects across studies and that many criminal justice
interventions had not been well evaluated with rigorous scientific methods.
A much stronger push for rigorous evaluation and use of EBIs was made in the
middle-to-late 1990s beginning with the publication of the Guide for Implementing the
Comprehensive Strategy for Serious, Violent, and Chronic Juvenile Offenders (Office of
Juvenile Justice and Delinquency Prevention, 1995) and Preventing Crime: What Works,
What Doesn’t, What’s Promising (Sherman et al., 1997). The OJJDP Guide identified
programs that were effective and promising in reducing juvenile delinquency. It also offered
one of the first reviews of developmental prevention programs, interventions implemented
outside of correctional facilities and primarily intended to prevent the onset of crime. The
Preventing Crime report (Sherman et al., 1997) provided an even more comprehensive
evaluation of crime prevention programs, including developmental, community-based,
situational, law enforcement, and correctional interventions. In addition to listing effective,
promising, and ineffective crime prevention strategies, a notable feature of the Preventing
Crime review was the authors’ use of explicit criteria to rate systematically the methodological
rigor of evaluations, particularly threats to the internal validity of the research. The Maryland
Scale of Scientific Methods was used to rate evaluations on a “0” to “5” scale, with a score of
“5” indicating the most methodologically rigorous designs, such as randomized controlled
experiments.