[CRIME]?”) with a Likert-type response scale such as 1 = very unlikely to 7 = very likely and
numeric questions (e.g., “What is the percent chance that you will be caught if you commit
[CRIME]?”) with a numerical scale (e.g., 0%–100%) and responses on the ratio level.
The measures employed in the extant literature vary extensively and often appear without justification. Some studies have used verbal measures, others numeric scales ranging from 0 to 100, and still others a mixed 11-point scale in which each category signals a 10% increase in probability from 0 to 100, despite verbal labels. Some of the literature’s diversity of measurement may
stem from its variety of data sources. Many studies of apprehension risk have examined American
college students (Kamerdze et al., 2014; Loughran et al., 2014; McGloin & Thomas, 2016; Paternoster et al., 2017; Thomas et al., 2018), while another substantial portion has used more general
samples from European nations, particularly Russia and Ukraine (Averdijk et al., 2016; Kroneberg
et al., 2010; Tittle et al., 2011). Nevertheless, the single most influential source of data in this
literature is the Pathways to Desistance study, a longitudinal study of serious adolescent offenders
transitioning from adolescence into early adulthood in Maricopa County, Arizona, and Philadelphia
County, Pennsylvania. Table 1 provides a summary of recent measures of apprehension risk in the
criminological literature. Nearly one third of the studies use the Pathways data.
Studies that have employed numeric measures (e.g., Kamerdze et al., 2014; Paternoster et al.,
2017; Pogarsky et al., 2017) typically assume respondents’ reported perceived probabilities of
apprehension are precise, literal, and durable numeric estimates (see Thomas et al., 2018, p. 60).
If this assumption holds true, these estimates would have several beneficial properties. They would
be situated on a well-defined absolute scale (0–100), allow a respondent’s answer on Event A to be
compared to their answer to Event B (i.e., internal consistency), and also allow comparison to other
respondents’ answers on the probability of Events A and B (i.e., interpersonal comparability;
Manski, 2004, p. 1339; see also Thomas et al., 2018). Since most perceptual deterrence research
is fundamentally concerned with comparisons within and between individuals, it is difficult to overstate the importance of this assumption.
If numeric responses are not the natural decision-making metric for many individuals—that is,
the metric that people naturally use when thinking about apprehension risk—then respondents may
be forced into more complex response processes, such as intensity matching (Kahneman, 2011). This
may in turn increase measurement error (see Holbrook et al., 2000; Sweitzer & Shulman, 2018).
Such a process might vary across respondents in accordance with individual differences in intelligence or numeracy, creating systematic biases, so that measurement errors are larger for some
groups of respondents than others. It may also vary within respondents over time, if situational
factors influence how respondents think about and estimate sanction risk. If systematic measurement
errors occur, they may help explain why perceived risk of apprehension appears to be a weak predictor of intentions to offend overall (Nagin, 1998; Paternoster, 1987, 2010;
Pratt et al., 2006) and why perceived sanction risk is weakly correlated with objective risk (Kleck
et al., 2005).
Several recent experiments in our field have examined the substantive qualities of subjective
beliefs about apprehension risk, finding that these beliefs are intuitive but still coherent within individuals (Pickett et al., 2018; Pogarsky et al., 2017; Thomas et al., 2018). A separate, and heretofore
overlooked, research question is how best to elicit those beliefs in order to minimize measurement
error and artificiality. This line of inquiry is important because research in other fields suggests that
laypeople may have only a tenuous grasp of probability (Alberini et al., 2004; Hacking, 1990, 2006)
and may struggle to express their perceptions of likelihood in surveys (Bruine de Bruin et al., 2000;
Fischhoff & Bruine de Bruin, 1999). Unfortunately, there is little research in our field that assesses
verbal and numerical risk estimates simultaneously, nor is there work exploring whether respondents
naturally think about risk in verbal or numerical terms.
78 Criminal Justice Review 47(1)