A Process Evaluation of San Francisco’s Law Enforcement Assisted Diversion Program

Erica Jovanna Magaña1, Dina Perrone2, and Aili Malm2

Criminal Justice Policy Review, 2022, Vol. 33(2), 148–176
Published: 1 March 2022
© The Author(s) 2021
DOI: 10.1177/08874034211033328

Abstract
In 2016, San Francisco (SF) implemented the Law Enforcement Assisted Diversion
(LEAD) program, a harm reduction–based pre-booking diversion system for people
who violate drug laws and/or are engaged in sex work. LEAD is set apart from existing
diversion programs, as it uses police as point of entry. Prior LEAD studies indicate
some success in reducing recidivism and improving life outcomes. However, less is
known about program implementation, including barriers and facilitators. Relying on
policy documents, interviews, and focus groups, this study describes the LEAD SF’s
development, operations, adaptations, and challenges. It also identifies the unique
context of LEAD SF that led to implementation barriers and facilitators. Results
show that SF experienced success in collaboration, relationship building, and client
connections to services but experienced challenges in securing and maintaining police
officer buy-in and keeping clear and open lines of communication regarding LEAD
goals, objectives, policies, and procedures. This led to the termination of LEAD SF
in 2020.
Keywords: drug offenders, process evaluation, realistic evaluation, street-level bureaucrats, pre-arrest diversion
1Washington State University, Pullman, USA
2California State University, Long Beach, USA
Corresponding Author:
Dina Perrone, California State University, Long Beach, 1250 Bellflower Blvd., ET 246, Long Beach,
CA 90840, USA.
Email: dina.perrone@csulb.edu
Historically, effective policing was considered a craft, in which experience equated
with success. More recently, however, a new framework—Evidence-Based Policing
(EBP)—emerged that is shaping policing and shifting the understanding of effective
policing. Under an EBP approach, “police officers and staff create, review and use the
best available evidence to inform and challenge policies, practices and decisions”
(Ratcliffe, 2018, p. 185). Strong empirical evidence helps departments persuade politicians to fund programs that work and to avoid spending money on activities shown to be ineffective. But how do we assess the strength of evidence?
The Maryland Scientific Methods Scale (SMS) is one tool used to evaluate the methodological robustness of outcome, impact, and cost–benefit evaluations (National Institute of Justice [NIJ], 1998). The SMS ranks evaluations on a scale from 1 to 5, with higher scores indicating methodologically stronger evidence. Using the SMS, the NIJ designates programs as effective when at least two studies rated Level 3 or higher show effectiveness (Justice Research and Statistics Association [JRSA], 2014; e.g., CrimeSolutions.gov).
Police departments thus tend to rely on quantitative methods to assess whether a program is evidence based. This approach has significant value but is limited: most such evaluations have weak external validity (JRSA, 2014; NIJ, 1998; Pawson & Tilley, 1997), which makes generalizing to other contexts difficult and poses replication challenges (Campbell et al., 2019; JRSA, 2014; Pawson & Tilley, 1997). Outcome and impact evaluations also do not explain how or why a program worked or failed (Mears, 2010; Miller & Miller, 2015; Pawson & Tilley, 1997).
To overcome the shortcomings of experimental research and to best understand
why programs are effective or ineffective, Pawson and Tilley (1997) put forth realistic
evaluation. Realistic evaluation moves away from experimental approaches and
instead seeks to understand the underlying mechanism and context that produced the
results. A realistic evaluation offers more than a description of what happened; it explains how it happened. Simply stated, it seeks to identify how a program works (or does not) and under what conditions.
One underutilized evaluation method that can help understand mechanism–context
relationships is the process evaluation (Mears, 2010; Miller & Miller, 2015). Process
evaluations provide detailed information about a program’s underlying theory, model
design, goals, objectives, operations, service delivery, quality of services, and implementation barriers and facilitators (Krisberg, 1980; Mears, 2010). They also contextualize impact and outcome findings by describing how and why an intervention experienced certain results, which is helpful for replication purposes. This article provides a process evaluation to understand implementation of the now widely adopted Law Enforcement Assisted Diversion (LEAD) program in San Francisco (SF).
LEAD is one of the first U.S. pre-booking diversion programs that redirects individuals from criminal justice system involvement into community-based social, health, and behavioral health services (LEAD National Support Bureau, n.d.-b).
