Cognitive Biases in Performance Appraisal: Experimental Evidence on Anchoring and Halo Effects With Public Sector Managers and Employees

Review of Public Personnel Administration
2017, Vol. 37(3) 275–294
© The Author(s) 2017
Reprints and permissions: sagepub.com/journalsPermissions.nav
DOI: 10.1177/0734371X17704891
journals.sagepub.com/home/rop
Article
Nicola Belle1, Paola Cantarelli2, and Paolo Belardinelli2
Abstract
A systematic literature review of performance appraisal in a selection of public
administration journals revealed a lack of investigations on the cognitive biases that
affect raters’ evaluation of ratees’ performance. To address this gap, we conducted
two artefactual field experiments on a sample of 600 public sector managers and
employees. Results show that anchoring and halo effects systematically biased
performance ratings. For the former, average scores were higher when subjects
were exposed to a high rather than a low anchor. For the latter, higher ability on
one performance dimension led participants to provide a higher average score on
another performance dimension. The halo effect was moderated by the rater's gender. We
conclude by discussing the study limitations and providing suggestions for future
work in this area.
Keywords
performance appraisal, anchoring effect, halo effect, systematic literature review,
artefactual field experiments
1Scuola Superiore Sant’Anna MHL Laboratory, Pisa, Italy
2Bocconi University, Milan, Italy
Corresponding Author:
Nicola Belle, Scuola Superiore Sant’Anna MHL Laboratory, Piazza Martiri della Libertà 33, Pisa 56127,
Italy.
Email: n.belle@santannapisa.it
Introduction
Performance appraisal of public employees was originally introduced in 1978 in the
United States as a key provision of the Civil Service Reform Act, and it was later
included as a foundational element in the New Public Management reform waves. It
has since been adopted by governments and public sector organizations around the
world (e.g., Christensen, Dong, Painter, & Walker, 2012; Lah & Perry, 2008; Liebert,
2014; Organization for Economic Cooperation and Development [OECD], 2012). As a result, public administration research has devoted considerable attention to understanding the individual performance evaluation of government workers. Scholars' and practitioners' work in this area is unlikely to decline, as "effectively managing performance appraisal in the public sector is increasingly important given the drive toward greater accountability for results" (Battaglio, 2015, p. 207).
The first aim of our work is to contribute to this body of knowledge by identifying
the main research topics on performance appraisal that researchers in our field have
explored so far and the primary research designs that they have employed. We do so
by conducting a systematic literature review of a selection of public administration
journals included in the 2015 Institute for Scientific Information (ISI; 2015) Journal
Citation Reports (©Thomson Reuters). We find that early research (e.g., J. S. Bowman,
1999; Martin & Bartol, 1986) argued that public employees need to be trained if they
are to be competent in evaluating others’ performance, and need to be made aware of
cognitive biases characteristic of human nature if their performance scores are to be
free of systematic errors. Recently, Battaglio (2015) convincingly stated that “one of
the primary concerns of performance appraisal is the error and bias of raters. Given
that all performance appraisals are subject to human involvement, error and bias are
constant threats to effective evaluation" (p. 203). Despite such warnings, however, our systematic literature review did not identify any empirical study of cognitive biases in performance appraisal. Unlike in our field, experimental research in disciplines such as applied psychology (e.g., Thorsteinson, Breier, Atwell, Hamilton, & Privette, 2008) and behavioral economics (e.g., Furnham & Boo, 2011; Kahneman, 2011) has long suggested that cognitive biases may systematically affect raters' evaluation of ratees' performance.
In particular, anchoring and halo effects consistently have been shown to affect the
performance scores that raters assign to ratees. “Anchoring is a pervasive and robust
effect in human decisions regardless of factors such as types of anchors, relevance of
anchor cues, expertise, motivation and cognitive load” (Furnham & Boo, 2011, p. 41).
Similarly, halo errors are often included among the most common rating errors (e.g.,
Balzer & Sulsky, 1992; Battaglio, 2015; Bechger, Maris, & Hsiao, 2010; J. S. Bowman,
1999; Martin & Bartol, 1986). The second contribution of our work lies in providing
much needed novel experimental evidence of the anchoring and halo effects in perfor-
mance appraisal using real public sector managers and employees as raters. In so doing,
we follow up on recent calls in our field to encourage cross-fertilization among disci-
plines (e.g., Grimmelikhuijsen, Jilke, Olsen, & Tummers, 2017). Also, we further
