Assessment Practices and Expert Judgment Methods in Forensic Psychology and Psychiatry

Published: December 1, 2014
DOI: 10.1177/0093854814548449
Authors: Tess M.S. Neal, Thomas Grisso
An International Snapshot
TESS M.S. NEAL
University of Nebraska Public Policy Center
THOMAS GRISSO
University of Massachusetts Medical School
We conducted an international survey in which forensic examiners who were members of professional associations described
their two most recent forensic evaluations (N = 434 experts, 868 cases), focusing on the use of structured assessment tools to
aid expert judgment. This study describes (a) the relative frequency of various forensic referrals, (b) what tools are used
globally, (c) frequency and type of structured tools used, and (d) practitioners’ rationales for using/not using tools. We provide
general descriptive information for various referrals. We found most evaluations used tools (74.2%) and used several (four,
on average). We noted the extreme variety in tools used (286 different tools). We discuss the implications of these findings
and provide suggestions for improving the reliability and validity of forensic expert judgment methods. We conclude with a
call for an assessment approach that seeks structured decision methods to advance greater efficiency in the use and integration
of case-relevant information.
Keywords: judgment; decision; forensic; structure; actuarial
Forensic psychologists and psychiatrists are expected to be experts in their subject areas
and to make good use of the cumulative knowledge developed in their fields over time.
How might experts use the body of knowledge in their fields to minimize decision errors?
Systematic approaches have been developed to help experts harness field-based knowledge and to remember everything one needs to know or do for a given task. The field of forensic
mental health assessment has developed many structured assessment tools to aid forensic clinicians in making decisions related to forensic referral questions. Many of these tools are actuarial (i.e., mechanical, formula-based), whereas others are checklist-based methods frequently referred to as Structured Professional Judgment (SPJ) tools. In the SPJ approach, the expert is presented with evidence-based factors to consider with specific guidelines (Guy, Packer, & Warnken, 2012). This approach does not rely on fixed decision rules as there is no algorithm to combine the data to arrive at a decision, so this approach operates somewhere between actuarial and unaided clinical judgment methods (Douglas, Ogloff, Nicholls, & Grant, 1999).

Author's note: Portions of these results were presented at the 2014 annual conference of the American Psychology–Law Society (AP-LS) in New Orleans, Louisiana. The first author was supported in part by an NSF Interdisciplinary Postdoctoral Fellowship (SES1228559) during the writing of this manuscript. Any opinions, findings, conclusions, or recommendations expressed in this article are those of the authors and do not necessarily reflect those of NSF. Correspondence concerning this article should be addressed to Tess M. S. Neal, University of Nebraska Public Policy Center, 215 Centennial Mall South, Suite 401 (P.O. Box 880228), Lincoln, NE 68588; e-mail: tneal2@nebraska.edu.

CRIMINAL JUSTICE AND BEHAVIOR, 2014, Vol. 41, No. 12, December 2014, 1406–1421.
© 2014 International Association for Correctional and Forensic Psychology
The development of structured tools in the forensic mental health field has not been without controversy. Some argue that an unstructured intuitive approach can lead to better decisions at times, or that clinical judgment is more flexible and can take into account novel or powerful information that might not be included in existing formulas or checklists (e.g., Litwack, 2001; Montgomery, 2005; Skeem et al., 2005). However, the weight of evidence indicates that the structured approaches perform better than unaided clinical judgment when sound tools are available to assist decision tasks (e.g., Ægisdóttir et al., 2006; Dawes, Faust, & Meehl, 1989; Dolan & Doyle, 2000; Faust & Ziskin, 1988; Grove, Zald, Lebow, Snitz, & Nelson, 2000; Guy, 2008; Haynes et al., 2009).
The Current Study
Despite the development of many structured tools to assist professional judgment in the past few decades, little is known about the degree to which these tools have become standard practice in the forensic mental health field. Little information is available about the conditions under which they are used and with what perceived strengths and weaknesses.
Our study explored forensic mental health professionals’ self-reported use of structured
tools in their forensic evaluations in civil and criminal contexts. We also wanted to know
when forensic mental health professionals see the use of these tools as more or less
justified.
Previous surveys of forensic mental health professionals have typically asked what clinical diagnostic tools are used in various kinds of forensic evaluations (such as multi-scale symptom inventories, clinical scales, cognitive and achievement tests, unstructured personality tests, and neuropsychological tests: Archer, Buffington-Vollum, Stredny, & Handel, 2006; Boccaccini & Brodsky, 1999; Keilin & Bloom, 1986; Lees-Haley, Smith, Williams, & Dunn, 1996; McLaughlin & Kan, 2014). Typically, they have asked respondents to express how frequently they use such tools in their forensic evaluations (e.g., never, sometimes, almost always; or percentage of time). In contrast, in this study we asked forensic
clinicians to describe their use of tools in their two most recent forensic cases. Our intent
was to obtain an estimate based on “sampling” of cases rather than relying on respondents
to characterize the frequency of their use of tools. Moreover, this method allowed us to
sample from the full range of forensic evaluations that forensic clinicians perform, whereas
previous surveys typically asked about tools used in one or two particular kinds of forensic
evaluations (and usually, by American psychologists).
None of the earlier studies inquired about the practicalities of using these instruments or
the reasons that clinicians might not use them. It appears that only one study to date has
examined the practicalities of routinely using structured tools in forensic assessments.
Focusing on competence (fitness) to stand trial (CST) evaluations, Pinals, Tillbrook, and Mumley's (2006) qualitative study suggested that there may be several reasons why structured tools might not be adopted in routine practice by forensic evaluators. The present study sought to address the potential gap between research and practice by exploring the degree to which forensic evaluators use tools to aid their clinical judgment as well as exploring reasons why they might not.
Method
Procedure and Materials
After obtaining institutional review board approval, we designed our survey online using
REDCap software.1 Professionals (described below) received an email inviting them to
participate in the survey and were sent a reminder invitation after 2 weeks. In the survey, we
asked participants to answer questions about the two most recent forensic evaluations they
had completed. We defined a forensic mental health evaluation as:
a psychological or psychiatric assessment of a person involved in a legal proceeding, conducted
by the mental health professional in service to the legal system. Some examples include
evaluations of civil and criminal competencies, criminal responsibility, mental disability, child
custody and protection, violence and sexual offending risk assessments, and psychic injury,
among others.
We requested that participants retrieve their reports (i.e., pull the hard-copy from their
file cabinet or open an electronic version of the report) and refer to them as they answered
the survey questions. We estimated that the survey required about 15 min.
Our questions inquired about the referral question, sources of information used, whether
or not any standardized tools were used (which we defined as “any tests, instruments,
checklists, or rating systems”), what tools were used if applicable, reasons tools were used
(or not), length of the report (in pages), how long the evaluation took from the time of referral until completion (in days), and demographic questions about the evaluator. Responses
were provided in menus when possible, usually with an “other” category that allowed for
typed responses.
Participants
Psychologist and psychiatrist members of professional forensic mental health associations in the United States,2 Canada,3 Australia and New Zealand,4 and Europe5 were invited
to complete the online survey. There were 434 respondents, reporting on 868 cases.6 Most
of the sample comprised doctoral-level (91%) and master’s-level clinicians (7.4%).
Regarding profession, more psychologists (51%) than psychiatrists (6%) responded.7 This
was an experienced sample, with an average of 16.56 years (SD = 12.01 years) of forensic
evaluation experience. Overall, 16.4% of the sample was board-certified. Certifying boards
included the American Board of Forensic Psychology (6.7%) and other specialties of the
American Board of Professional Psychology (4.8%), the Royal College of Physicians and
Surgeons (2.8%), and other boards (e.g., American Board of Psychiatry and Neurology,
American Board of Sleep Medicine). Most of the participants...
