Journal of Contemporary Criminal Justice, 2017, Vol. 33(3), 273–291
© The Author(s) 2017
DOI: 10.1177/1043986217697873
journals.sagepub.com/home/ccj
Studying Terrorism Empirically: What We Know About What We Don’t Know
Aaron Safer-Lichtenstein, Gary LaFree, and Thomas Loughran
Abstract
Although the empirical and analytical study of terrorism has grown dramatically in the
past decade and a half to incorporate more sophisticated statistical and econometric
methods, data validity is still an open, first-order question. Specifically, methods for
treating missing data often rely on strong, untestable, and frequently implicit assumptions
about the nature of the missing values. We draw on Manski’s idea of no-assumption
bounds to demonstrate the vulnerability of empirical results to different tactics for
treating missing cases. Using a recently available open-source database on political
extremists who radicalized in the United States, we show how point estimates of
basic conditional probabilities can vary dramatically depending on the amount of
missing data in certain variables and the methods used to address this issue. We
conclude by advocating that researchers building analytical models be transparent about the
assumptions they make about the nature of the data and the implications of those
assumptions for the analysis and its interpretation.
Keywords
point identification, no-assumption bounds, data validity, missing data, domestic
extremism, terrorism data
It is always better to have no ideas than false ones; to believe nothing, than to believe
what is wrong.
—Thomas Jefferson
University of Maryland, College Park, MD, USA

Corresponding Author:
Aaron Safer-Lichtenstein, University of Maryland, 2163 Lefrak Hall, College Park, MD 20740, USA.
Email: asafer@umd.edu
The empirical and analytical study of terrorism has grown dramatically in the decade and a half since the devastating attacks of September 11, 2001 (LaFree & Freilich, 2017; Ranstorp, 2009; Silke, 2007, 2009; Young & Findley, 2011). The surge in research has led to an assortment of increasingly sophisticated empirical approaches to studying correlates of terrorist activity. For instance, a variety of studies use time-series analysis (Apel & Hsu, 2017; Dugan & Chenoweth, 2012; S. Johnson & Braithwaite, 2017), cost–benefit analysis (Frey, Luechinger, & Stutzer, 2007, 2009), group-based mixture modeling (LaFree, Dugan, & Miller, 2015; Morris, 2017), hierarchical linear modeling (B. D. Johnson, 2017), and geospatial analysis (Behlendorf, LaFree, & Legault, 2012; LaFree, Dugan, Xie, & Singh, 2012).
Despite these advances in methodological rigor, a more fundamental and necessary consideration underlying the proliferation of increasingly sophisticated analytic techniques is the validity of the data themselves. Problems with data validity manifest in many forms, including ambiguity in open-source data (unclassified information from print and electronic media and other publicly available sources), unreliability of estimates, and measurement error, any of which may affect results in nonrandom ways. Of these concerns, the limitations of open-source data have been the most directly discussed in the existing terrorism literature, with scholars noting especially how biases in media reporting practices might skew data in particular directions (Dugan, 2011; Jongman, 1993; LaFree & Dugan, 2007). This line of research parallels the criminological literature that analyzes differences between self-reported crime data and official arrest statistics, the latter of which may be biased by crimes never known to the police (Hindelang, Hirschi, & Weis, 1979; Menard, 1987; Pollock, Hill, Menard, & Elliott, 2016).
However, as Freilich and LaFree (2016) noted, although scholars studying terrorism have recognized the need for increased methodological rigor, few studies have systematically addressed first-order questions of reliability and validity. In particular, recent quantitative research on terrorism has relied heavily on open-source data, which often include a good deal of missing information. Although researchers are generally forthright about their treatment of missing data, few prior studies have specifically examined the consequences of conducting analyses with large amounts of missing data. In most instances, after indicating the amount of missing data (sometimes only in a footnote), point estimates are offered without further explanation.
However, arriving at point estimates necessarily requires researchers to make assumptions, often very strong ones, about the data being analyzed, including the extent and consequences of missingness. For instance, many analyses use multiple imputation techniques (e.g., Gruenewald & Pridemore, 2012; Jasko, LaFree, & Kruglanski, 2016; Mullins & Young, 2010), which assume that uncaptured values are missing at random (MAR; Rubin, 1976, 1987). Even stronger assumptions are required to drop cases or substitute mean values, both of which presume that data are missing completely at random (MCAR; Rubin, 1976). To justify assumptions like these, researchers should be thoroughly transparent about their credibility in the context of the study; if the assumptions are unfounded, the resulting estimates may be biased. Too often in empirical work, these assumptions are neither explicitly stated nor justified.
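To make the stakes concrete, consider a minimal sketch of this logic (our own illustration with fabricated data, not code or results from any of the studies cited above). For a binary outcome with missing values, Manski-style no-assumption bounds on P(Y = 1) are obtained by treating every missing case first as a 0 and then as a 1, whereas listwise deletion yields a single point estimate that is credible only under MCAR:

    # Hypothetical sketch: Manski no-assumption bounds for a proportion
    # with missing outcome data, versus the listwise-deletion estimate
    # (which implicitly assumes the data are MCAR).

    def manski_bounds(outcomes):
        """Return (listwise_estimate, lower_bound, upper_bound) for P(Y = 1).

        `outcomes` is a list of 1, 0, or None (None marks a missing value).
        The bounds assume nothing about why values are missing: the lower
        bound treats every missing case as 0, the upper bound as 1.
        """
        n = len(outcomes)
        observed = [y for y in outcomes if y is not None]
        ones = sum(observed)
        missing = n - len(observed)

        listwise = ones / len(observed)  # point estimate under MCAR
        lower = ones / n                 # as if all missing values were 0
        upper = (ones + missing) / n     # as if all missing values were 1
        return listwise, lower, upper

    # Illustrative, fabricated data: 100 cases, 30 of them missing.
    data = [1] * 40 + [0] * 30 + [None] * 30
    listwise, lo, hi = manski_bounds(data)
    print(f"Listwise-deletion estimate: {listwise:.2f}")      # 0.57
    print(f"No-assumption bounds:       [{lo:.2f}, {hi:.2f}]")  # [0.40, 0.70]

The width of the resulting interval equals the fraction of missing cases, so with 30% missingness the probability is identified only to within 30 percentage points unless stronger assumptions, such as MAR or MCAR, are imposed and defended.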
