Psychological Inoculation against Misinformation: Current Evidence and Future Directions

Published: 01 March 2022
ANNALS, AAPSS, 700, March 2022
DOI: 10.1177/00027162221087936
Subject matter: Misinformation
Much like a viral contagion, misinformation can spread rapidly from one individual to another. Inoculation theory offers a logical basis for developing a psychological "vaccine" against misinformation. We discuss the origins of inoculation theory, starting with its roots in the 1960s as a "vaccine for brainwash," and detail the major theoretical and practical innovations that inoculation research has witnessed over the years. Specifically, we review a series of randomized lab and field studies showing that it is possible to preemptively "immunize" people against misinformation by preexposing them to severely weakened doses of the techniques that underlie its production, along with ways to spot and refute those techniques. We review evidence from interventions that we developed with governments and social media companies to help citizens around the world recognize and resist unwanted attempts to influence and mislead. We conclude with a discussion of important open questions about the effectiveness of inoculation interventions.

Keywords: inoculation theory; misinformation; prebunking; persuasion; fake news
Cecilie S. Traberg is a PhD candidate in psychology at the University of Cambridge, where she conducts research on social influence and misinformation susceptibility.

Jon Roozenbeek is a British Academy Postdoctoral Fellow in the Department of Psychology at the University of Cambridge. His research focuses broadly on misinformation, vaccine hesitancy, and extremism. He has a particular interest in inoculation theory and the development and testing of antimisinformation interventions.

Sander van der Linden is a professor of social psychology in society and director of the Cambridge Social Decision-Making Lab in the Department of Psychology at the University of Cambridge. He has published over 100 papers in the areas of judgment, persuasion, and decision-making, especially in relation to (countering) misinformation.

In 2018, the World Economic Forum (WEF) Global Risks Report named online misinformation as one of the top global risks to the environmental, economic, technological, and institutional systems on which our future depends (WEF 2018). The spread of misinformation poses a serious threat
to the public's understanding of science (Lewandowsky, Ecker, and Cook 2017; Lewandowsky and van der Linden 2021), including climate science (Lewandowsky, Oberauer, and Gignac 2013) and the safety of vaccines (Kata 2010). Research has shown that false stories can spread faster, further, and deeper than true fact-checked content (Vosoughi, Roy, and Aral 2018). The COVID-19 pandemic has exacerbated the prevalence of online falsehoods, with widespread inaccurate and misleading information about COVID-19 circulating online. This information includes dangerous health advice suggesting that the ingestion of bleach can "cure" COVID-19 and conspiracy theories that portray Bill Gates as a politically motivated mastermind behind the pandemic. In fact, the proliferation of falsehoods about the virus led the director general of the World Health Organization (WHO) to announce, "We are not just fighting an epidemic, we're fighting an infodemic" (WHO 2020). Critically, belief in misinformation can have downstream effects on attitudes and behavior, such as undermining climate change mitigation (Cook, Lewandowsky, and Ecker 2017; van der Linden 2015), instigating violence (Jolley and Paterson 2020), and lowering vaccination intentions and compliance with public health guidelines (Loomba et al. 2021; Roozenbeek et al. 2020; van der Linden 2022). Accordingly, tackling critical societal issues such as climate change and COVID-19 will require a better understanding of how to effectively counter the spread of misinformation.
Why Fact Checking and Debunking Misinformation Are Not Enough
Across various fields, researchers and policy-makers have sought ways to reduce the spread and influence of misinformation, from legal and policy interventions to post hoc corrections such as debunking and fact checking. Policy interventions may include public authorities directly intervening by regulating the media environment or making social media companies liable for third-party content (Alemanno 2018). Alternatively, post hoc corrections or fact-checking interventions involve exposing news consumers to factual information or, in some cases, a more detailed "debunking" message that puts forth strong arguments for why previously seen information is false (Chan et al. 2017). Evidence on the effectiveness of these interventions is mixed, but they continue to be widely used (Walter and Murphy 2018; Nyhan et al. 2020).

One particular problem with these post hoc measures is that it is harder to eliminate the influence of misinformation after people have been exposed to it. Indeed, misinformation often continues to influence inferential reasoning even after it has been formally retracted or corrected, a phenomenon known as the continued influence effect (Lewandowsky et al. 2012). In addition, the mere presence of misinformation in people's news environment may undermine accurate information, as the persuasive impact of facts can be neutralized by misinformation