Author Exchange
Is Theory Useful for Conflict Prediction? A Response to Beger, Morgan, and Ward

Robert A. Blair¹ and Nicholas Sambanis²
Abstract
Beger, Morgan, and Ward (BM&W) call into question the results of our article on forecasting civil wars. They claim that our theoretically-informed model of conflict escalation under-performs more mechanical, inductive alternatives. This claim is false. BM&W's critiques are misguided or inconsequential, and their conclusions hinge on a minor technical question regarding receiver operating characteristic (ROC) curves: should the curves be smoothed, or should empirical curves be used? BM&W assert that empirical curves should be used, and all of their conclusions depend on this subjective modeling choice. We extend our original analysis to show that our theoretically-informed model performs as well as or better than more atheoretical alternatives across a range of performance metrics and robustness specifications. As in our original article, we conclude by encouraging conflict forecasters to treat the value added of theory not as an assumption, but rather as a hypothesis to test.
Keywords
civil wars, forecasting, big data, machine learning
¹ Department of Political Science and Watson Institute for International and Public Affairs, Brown University, Providence, RI, USA
² Department of Political Science, University of Pennsylvania, Philadelphia, PA, USA
Corresponding Author:
Robert A. Blair, Department of Political Science and Watson Institute for International and Public Affairs,
Brown University, 111 Thayer St., Providence, RI 02912, USA.
Email: robert_blair@brown.edu
Journal of Conflict Resolution, 2021, Vol. 65(7-8), 1427-1453
© The Author(s) 2021
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/00220027211026748
journals.sagepub.com/home/jcr
In our article “Forecasting Civil Wars” (Blair and Sambanis 2020, hereafter B&S), we sought to make three contributions to the literature on conflict forecasting. First, we explored whether incorporating theoretical insights into predictive models improves forecasts of civil war. We did this by building a model grounded in “procedural” theories of conflict escalation from the social movements literature, then comparing the predictive performance of this escalation model to the performance of more mechanical, inductive alternatives. Second, we considered whether incorporating “structural” characteristics of countries, such as regime type or per capita income, might improve the predictive performance of our escalation model, which was otherwise based on procedural variables alone. Finally, we used the escalation model to generate “true” prospective forecasts for the first half of 2016, which we preregistered with the Evidence in Governance and Politics (EGAP) network.¹ We returned to these forecasts to evaluate their accuracy with the benefit of hindsight. We found that the theoretically-informed escalation model generally outperformed more atheoretical alternatives (though the margins were often small); that adding structural characteristics to the model did not significantly improve predictive performance, especially over narrow forecasting windows; and that our pre-registered predictions were generally fairly accurate.
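To make the comparison concrete: both our analysis and BM&W's score out-of-sample predicted probabilities of civil war onset against observed outcomes, most commonly via the area under the ROC curve (AUC), and the dispute noted in the abstract turns on whether that curve is left as an empirical step function or smoothed before the area is computed. The sketch below is purely illustrative and is not our replication code; the variable names, the placeholder data, and the choice of binormal smoothing are all assumptions made for the example.

import numpy as np
from scipy.stats import norm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Placeholder data standing in for observed onsets and two models' predicted probabilities.
y_true = rng.integers(0, 2, size=500)
p_escalation = np.clip(0.20 * y_true + rng.normal(0.4, 0.2, 500), 0, 1)
p_inductive = np.clip(0.15 * y_true + rng.normal(0.4, 0.2, 500), 0, 1)

def empirical_auc(y, p):
    # Area under the empirical (step-function) ROC curve.
    return roc_auc_score(y, p)

def binormal_auc(y, p):
    # Area under a binormal-smoothed ROC curve: fit normal distributions to the
    # scores of positive and negative cases; AUC = Phi((mu1 - mu0) / sqrt(s0^2 + s1^2)).
    s1, s0 = p[y == 1], p[y == 0]
    return norm.cdf((s1.mean() - s0.mean()) / np.sqrt(s1.var() + s0.var()))

for name, p in [("escalation", p_escalation), ("inductive", p_inductive)]:
    print(f"{name}: empirical AUC = {empirical_auc(y_true, p):.3f}, "
          f"binormal AUC = {binormal_auc(y_true, p):.3f}")

On real, sparse conflict data the two summaries can diverge more than they do on balanced placeholder data like this, which is part of why the modeling choice matters when comparing closely performing models.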
Beger, Morgan, and Ward (2021, hereafter BM&W) replicate and modify our analysis and challenge our conclusions. Their article can be read as a defense of machine learning in conflict forecasting, and the crux of their argument is that the theoretically-grounded escalation model under-performs more mechanical, inductive alternatives. We certainly agree that machine learning can improve predictive performance, which is precisely why we applied machine learning methods in our original article. What is puzzling is that, while BM&W's paper is entirely dedicated to proving the purported superiority of more atheoretical approaches to conflict forecasting, they end up supporting our own conclusion when they “concede that predicting on the basis of a strong theoretical model is preferable to inductive prediction” (p. 19). We are pleased that they agree with our intuition, but their conclusion is not supported by their analysis. It is, however, supported by ours.
BM&W raise six empirical critiques of our study. We consider these in detail and
show that five of them are either misguided or largely inconsequential: even if we
had embraced all of them before publishing our original article, they would not have
changed our substantive conclusions. They are also irrelevant to the question of
whether theory is useful for prediction, as they have no bearing whatsoever on our
comparison of more and less theoretically-informed models. In any event, we do not
accept all of BM&W’s critiques: some are simply false, and others are based on
questionable premises or obvious mischaracterizations of our arguments. We do
admit to making two minor coding errors, which BM&W identify, but correcting
these errors does not change our substantive conclusions.
The only remaining point of difference—and the only one that has any consequences at all for our comparison of the escalation model to more mechanical, inductive alternatives—is also relatively minor: it hinges on whether “empirical”