Comparing the validity of net promoter and benchmark scoring to other commonly used employee engagement metrics

QUANTITATIVE STUDY
Author: Matt I. Brown, Geisinger Health System, Lewisburg, Pennsylvania
Published: 01 December 2020
DOI: https://doi.org/10.1002/hrdq.21392

Correspondence
Matt I. Brown, Geisinger Health System, 120 Hamm Drive, Suite 2A, MC 60-36, Lewisburg, PA 17837
Email: mibrown9015@gmail.com
Abstract
Organizational survey data are an important part of workforce analytics for HRD researchers and practitioners. Despite the wealth of research on the business impact of employee attitudes, such as job satisfaction or engagement, there has been relatively little research to help HRD practitioners identify the optimal methods for scoring and reporting survey results. Inspired by customer satisfaction research, this study empirically examines how the use of different metrics for scoring survey responses affects the distributions of work unit engagement scores and relationships with work outcomes. Survey data were gathered from 1,242 work units in a healthcare organization. Results indicate that the choice of scoring method can meaningfully affect the distribution and validity of scores. Metrics based on extremely positive responses (top box and net promoter) were positively skewed and yielded the strongest correlations with performance. Benchmark scores were also positively skewed but did not provide any added criterion-related validity. In contrast, mean and percent favorable scores were negatively skewed and provided less variability in scores between groups, but predicted unit turnover equally well. These results indicate that using top box or net promoter scoring may yield a more normal distribution of group scores
while also providing equal or greater predictive validity for performance. However, these results were only observed within a single organization. Further research is needed to determine whether these results can be replicated within different organizational contexts or when using different criterion measures, including objective performance measures or patient and customer satisfaction.

Human Resource Development Quarterly. 2020;31:355–370. © 2020 Wiley Periodicals LLC.
KEYWORDS
employee engagement, measurement/metrics, motivation, research-practice gap, talent management
1 | INTRODUCTION

Organizational surveys are a longstanding method of measuring employee attitudes and opinions and gaining actionable feedback in human resource development (Jacoby, 1988; Van Rooy, Whitman, Hart, & Caleo, 2011). Despite the instrumental role of surveys in HRD initiatives (Church, 2017) or in workforce analytics more broadly (Kaur & Fink, 2017), there has been little research to help HRD practitioners identify the most useful methods for communicating survey results (Rogelberg, Church, Waclawski, & Stanton, 2002). Without a foundation of empirical research, most HRD practitioner-focused articles or book chapters focus only on perceived ease of interpretation or understanding among managers when communicating survey results (e.g., Johnson, 2006). Although case study accounts are often used to describe the practices of specific organizations or consulting firms (e.g., Davenport, Harris, & Shapiro, 2010), these anecdotes can sometimes be misleading. For example, net promoter scores have become one of the most widely used customer satisfaction metrics due in part to best-selling management guides (e.g., Reichheld & Markey, 2011), despite mixed research evidence in the academic literature for their superiority to other methods (Morgan & Rego, 2006). Moreover, HRD researchers and practitioners may not fully understand how their choice of scoring method may affect the underlying psychometric properties of the survey data or the rank order of work unit or group scores.
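For readers unfamiliar with the metric mentioned above, the standard net promoter calculation (as popularized in the customer satisfaction literature, not defined in this excerpt) is the percentage of promoters minus the percentage of detractors on a 0–10 "would you recommend" scale. A minimal sketch:

```python
def net_promoter_score(ratings):
    """Standard net promoter scoring on a 0-10 recommendation scale:
    percent promoters (ratings of 9-10) minus percent detractors
    (ratings of 0-6). Passives (7-8) count in the denominator only."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / n

# Example: 2 promoters and 2 detractors out of 5 responses -> score of 0.0
net_promoter_score([10, 9, 8, 6, 3])
```

Note how the formula discards the ordering of responses within the promoter and detractor bands, which is one source of the measurement debates cited above.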
Presently, most HRD practitioners report the results of organizational surveys by using mean scores or the proportion of favorable responses (Rogelberg et al., 2002). Mean scores are calculated by assigning numerical values to each response option and then determining the average value across all responses. The mean is simple to compute and is the basis for many inferential statistical tests that assume normally distributed data. For these reasons, mean scoring is often the recommended method for scoring survey responses (Derickson, Yanchus, Bashore, & Osatuke, 2019; Robinson, 2018). Likewise, favorable (or unfavorable) scores are determined by calculating the proportion of all degrees of favorable (or unfavorable) responses relative to the total number of responses. For example, responses of "somewhat agree" and "strongly agree" are both scored as favorable instead of receiving different numerical scores. This approach is considered easier to communicate than mean scores (Jones & Bearley, 1995) and easier to interpret for managers who may lack formal training in statistics (Johnson, 2006). As such, percent favorable scores are frequently used in practitioner reports of organizational survey results (e.g., Gallup, 2017).
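The scoring methods described above can be illustrated with a small sketch (the response data and variable names are hypothetical; a 5-point Likert scale with 4 and 5 treated as favorable is assumed):

```python
from statistics import mean

# Hypothetical 5-point Likert responses for one survey item
# (1 = strongly disagree ... 5 = strongly agree)
responses = [5, 4, 4, 3, 2, 5, 4, 1, 4, 5]

# Mean score: assign numerical values to each option and average them
mean_score = mean(responses)  # 3.7

# Percent favorable: proportion of all degrees of favorable response
# ("somewhat agree" = 4 and "strongly agree" = 5) out of the total
pct_favorable = 100.0 * sum(1 for r in responses if r >= 4) / len(responses)  # 70.0

# Top box (for contrast with the abstract): only the most extreme
# positive response ("strongly agree" = 5) counts as favorable
top_box = 100.0 * sum(1 for r in responses if r == 5) / len(responses)  # 30.0
```

The example makes the trade-off concrete: percent favorable collapses two response options into one category, while top box uses an even stricter cutoff, which is why the resulting group-level distributions can differ in skew.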
Even though advances in quantitative research have informed new ways of interpreting survey results, such as the development of key driver analyses (e.g., Johnson, 2017; Lundby & Johnson, 2006), there has been relatively little research comparing the psychometric properties and criterion-related validity of methods for scoring survey responses. To this end, the present study compares the validity of several commonly used scoring methods. This study is based on work in marketing research which has examined various ways to quantify customer satisfaction in order to identify the optimal predictors of consumer behavior (de Haan, Verhoef, & Wiesel, 2015). In particular, the