Restructured frame‐of‐reference training improves rating accuracy

Date: 01 July 2019
Published: 01 July 2019
Authors: Serena Wee, Brandon Koh, Ming-Hong Tsai
DOI: http://doi.org/10.1002/job.2368
RESEARCH ARTICLE
Ming-Hong Tsai¹ | Serena Wee² | Brandon Koh¹
¹School of Social Sciences, Singapore Management University, Singapore
²School of Psychological Science, The University of Western Australia, Perth, Western Australia, Australia
Correspondence: Ming-Hong Tsai, School of Social Sciences, Singapore Management University, 90 Stamford Road, Level 4, Singapore 178903. Email: mhtsai@smu.edu.sg
Summary
The use of heuristic judgments is prevalent in organizations and negatively impacts accurate employee assessments. To minimize the negative impact of heuristic judgments (i.e., anchoring and adjustment), we aim to improve rating accuracy by restructuring frame-of-reference (FOR) training. We conducted five studies (N = 1,143) using different samples (three including participants with hiring experience), training environments (onsite and online), and rating contexts (evaluations of sales representatives, teachers, contract negotiation specialists, and retail store managers). Across the five studies, the average improvement in rating accuracy was at least twice as large for restructured FOR (vs. control) training as it was for typical FOR (vs. control) training; the difference in rating accuracy between restructured and typical FOR training was statistically significant. Furthermore, minimizing the anchoring effect rather than increasing opportunities for rating adjustments improved rating accuracy (Study 4). Finally, restructured FOR training achieved higher criterion validity (i.e., a stronger association between ratings of a target and the target's objective performance) than did typical FOR training (Studies 3 and 5). We discuss implications for improving the effectiveness of diverse training programs and the accuracy of judgments in organizations.
KEYWORDS: anchoring and adjustment heuristic, frame-of-reference, judgment, rating accuracy, subjective evaluation
1 | INTRODUCTION
Assessments are an essential process by which organizations evaluate performance, motivate employees, provide feedback, identify training and growth needs, and distribute rewards fairly. Understanding and refining rater training methods to enhance assessment accuracy is therefore a highly valuable endeavor. Organizations benefit from accurate employee assessments, which are associated with a wide range of positive consequences, such as superior job performance (Abbas, 2014), enhanced perceptions of procedural and informational justice (Roberson & Stewart, 2006), increased appraisal satisfaction, and elevated motivation to improve future job performance (Selvarajan & Cloninger, 2012). Despite these benefits, raters often do not provide accurate ratings because they use heuristic-based judgments during the evaluation process (Reb, Greguras, Luan, & Daniels, 2014). Rater training aims to mitigate the use of these heuristic judgments, thereby improving rating accuracy (Uggerslev & Sulsky, 2008). In addition, rater training programs help participants adopt organizational goals, develop skills related to feedback delivery, and gain confidence in performing assessments (Kumar, 2005; Nesbit & Wood, 2002).
In particular, frame-of-reference (FOR) training (Bernardin & Buckley, 1981) is an effective and frequently used rater training approach. This approach uses a practice-then-feedback procedure to instill established standards for evaluation (Lievens & Sanchez, 2007; Roch, Woehr, Mishra, & Kieszczynska, 2012; Woehr & Huffcutt, 1994).
Received: 3 October 2018 | Revised: 31 March 2019 | Accepted: 5 April 2019
© 2019 John Wiley & Sons, Ltd. J Organ Behav. 2019;40:740–757. wileyonlinelibrary.com/journal/job
Nonetheless, the practice-then-feedback procedure in typical FOR training could, counterintuitively, make it more difficult for a rater to provide accurate ratings. As we argue below, the procedure increases a rater's tendency to rely on the anchoring and adjustment heuristic (Tversky & Kahneman, 1974). That is, practice-then-feedback could produce two crucial shortcomings in the typical FOR training method: an initial anchoring effect and subsequent insufficient adjustments. To address these limitations, we restructure the procedures of FOR training, by presenting evaluation standards before practice rating trials and by offering opportunities for sufficient rating adjustments, and investigate whether this restructured FOR method improves training effectiveness.
Overall, we attempt to make two contributions to the literature. First, we investigate how susceptible rater trainees are to the anchoring and adjustment heuristic and how rater training procedures may affect this heuristic. Examining the impact of the heuristic on rating accuracy is important because research in marketing and consumer behavior has shown that, due to a reliance on the anchoring and adjustment heuristic, people tend to focus on their initial evaluation, even to the point of ignoring other information that could have facilitated more accurate evaluations (Naylor, Lamberton, & Norton, 2011). It therefore seems plausible that a similar situation might occur when raters use their first evaluation in practice trials as an anchor for subsequent assessments. In addition, when people employ the anchoring and adjustment heuristic, they are not motivated to make large adjustments during subsequent evaluations (Epley & Gilovich, 2006). This phenomenon of insufficient adjustment could affect rating accuracy such that raters may not sufficiently adjust their ratings based on feedback from a training advisor. Thus, we examined whether a rater's behavior could be systematically influenced by training procedures designed to mitigate an initial anchoring effect and to resolve the issue of insufficient adjustments. Specifically, we investigated the impacts of restructured information presentation and opportunities for rating adjustment on rating accuracy.
Furthermore, we investigate these impacts both simultaneously and independently. This approach differs from existing research on the anchoring and adjustment heuristic, which examines the accessibility of an anchor (e.g., Naylor et al., 2011) and insufficient adjustment (e.g., Epley, Keysar, Van Boven, & Gilovich, 2004) as separate reasons for why this effect would occur. A simultaneous evaluation of both factors can reveal whether they are equally or differentially associated with rating accuracy. Importantly, this approach allows us to investigate whether these factors amplify or weaken each other's effect on rating accuracy (i.e., the interaction effects of the two factors on rating accuracy). Thus, our approach offers a more integrated examination of the factors underlying the anchoring and adjustment heuristic than does previous research.
By examining the assumptions outlined in our first contribution, our second contribution is an unabashedly practical one: to develop a novel training intervention that yields higher rating accuracy than typical FOR training. Although enhancing training effectiveness on rating accuracy is a major goal of rater training research, research in the most recent five years has focused on applying the FOR training method (e.g., Firth, Hollenbeck, Miles, Ilgen, & Barnes, 2015) rather than on improving training effectiveness. To continue the pursuit of enhanced training effectiveness, we explore an unexamined intervention that could potentially add new training principles and procedures to the field of rater training. Although practice-then-feedback procedures in a typical FOR training process have been considered sufficiently effective in previous research (Roch et al., 2012), we investigate whether restructured FOR training procedures can further improve rating accuracy by minimizing the anchoring and adjustment heuristic. The anchoring and adjustment heuristic has also been demonstrated in a wide range of organizationally relevant contexts, such as negotiation (Gunia, Swaab, Sivanathan, & Galinsky, 2013), selection interviews (Kataoka, Latham, & Whyte, 1997), and team decision-making (Lehner, Seyed-Solorforough, O'Connor, Sak, & Mullin, 1997), where it has been shown to result in suboptimal evaluation outcomes. Despite its relevance and prevalence, relatively little is known about its effects on rater training effectiveness. Given that the restructured FOR training procedures are designed to mitigate an overreliance on the anchoring and adjustment heuristic, a significant improvement in rating accuracy under the restructured procedures would suggest that practice-then-feedback procedures are insufficiently effective because of the anchoring and adjustment heuristic. Therefore, our research not only offers a practical and easily implementable solution to rater training but also clarifies how the anchoring and adjustment heuristic affects rater training effectiveness in practice-then-feedback procedures.
To provide background for our research question, in the following sections we elaborate on existing FOR training and restructured FOR training and examine how different types of rater training influence rating accuracy and criterion validity. Afterward, we explore our research question in five studies with different rating scenarios, performance dimensions, and samples to increase the generalizability of our results. Finally, we discuss the theoretical and practical implications of our findings and how our research can contribute to organizational management.
2 | FOR TRAINING
The overall premise of FOR training is that individuals have idiosyncratic knowledge (i.e., a personal schema or implicit theory), which differs from the more widely held, and often explicitly stated, institutional knowledge (i.e., the referent schema or espoused theory; Bernardin & Buckley, 1981). Furthermore, schema-based theory suggests that FOR trainees can replace their personal schema of job performance with the referent schema provided by the organization and thereby improve their rating accuracy (Cardy & Keefe, 1994; Lievens & Sanchez, 2007; Sulsky & Day, 1994).
A schema comprises closely interrelated sets of knowledge about a concept (Marshall, 1995; Piaget, 1997). For example, a person's schema regarding an effective classroom instructor may include beliefs about being detailed, organized, and enthusiastic about course
