Attention by design: Using attention checks to detect inattentive respondents and improve data quality

Authors: James D. Abbey, Margaret G. Meloy
DOI: http://doi.org/10.1016/j.jom.2017.06.001
Published: 01 November 2017
Technical note

James D. Abbey a,*, Margaret G. Meloy b

a Department of Information & Operations Management, Mays Business School, Texas A&M University, 320 Wehner Building, 4217 TAMU, College Station, TX 77843-4217, United States
b Calvin E. and Pamela T. Zimmerman University Endowed Fellow, Department of Marketing, Smeal College of Business, The Pennsylvania State University, 444 Business Building, University Park, PA 16802, United States
Article info
Article history: Received 3 August 2016; Received in revised form 2 November 2016; Accepted 7 June 2017; Available online 5 July 2017
Handling Editor: Mikko Ketokivi
Keywords: Data validation; Attention checks; Manipulation checks; Response validation
Abstract
This paper examines attention checks and manipulation validations to detect inattentive respondents in primary empirical data collection. These prima facie attention checks range from the simple, such as reverse scaling first proposed a century ago, to more recent and involved methods, such as evaluating response patterns and timed responses via online data capture tools. The attention check validations also range from easily implemented mechanisms, such as automatic detection through directed queries, to highly intensive investigation of responses by the researcher. The latter has the potential to introduce inadvertent researcher bias, as the researcher's judgment may impact the interpretation of the data. The empirical findings of the present work reveal that construct and scale validations show consistently significant improvement in the fit statistics, a finding of great use for researchers working predominantly with scales and constructs for their empirical models. However, based on the rudimentary experimental models employed in the analysis, attention checks generally do not show a consistent, systematic improvement in the significance of test statistics for experimental manipulations. This latter result indicates that, by their very nature, attention checks may trigger an inherent trade-off between loss of sample subjects (lowered power and increased Type II error) and the potential of capitalizing on chance alone (the possibility that the previously significant results were in fact the result of Type I error). The analysis also shows that the attrition rates due to attention checks, upwards of 70% in some observed samples, are far larger than typically assumed. Such loss rates raise the specter that studies not validating attention may inadvertently increase their Type I error rate. The manuscript provides general guidelines for various attention checks, discusses the psychological nuances of the methods, and highlights the delicate balance among incentive alignment, monetary compensation, and the subsequently triggered mood of respondents.
© 2017 Elsevier B.V. All rights reserved.
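As a concrete illustration (not drawn from the paper itself), the screens named in the abstract reduce to simple rules over the raw response matrix: a directed query with one acceptable answer, a floor on completion time, and a straight-lining test over the scale items. The sketch below assumes a pandas DataFrame; all column names and cutoff values are illustrative assumptions, not parameters reported in the paper.

```python
import pandas as pd

def flag_inattentive(df: pd.DataFrame,
                     item_cols: list[str],
                     min_seconds: float = 120.0) -> pd.DataFrame:
    """Apply three illustrative attention-check screens."""
    out = df.copy()

    # 1. Directed query: an instructed-response item such as
    #    "Select 'Strongly agree' for this statement." (hypothetical
    #    column 'directed_item'; 7 assumed to be the instructed answer)
    out["fail_directed"] = out["directed_item"] != 7

    # 2. Timed response: completion faster than a plausible reading
    #    floor suggests the respondent did not process the items.
    out["fail_speed"] = out["duration_sec"] < min_seconds

    # 3. Response pattern: straight-lining, i.e., zero variance
    #    across all scale items.
    out["fail_straightline"] = out[item_cols].std(axis=1) == 0

    out["inattentive"] = (
        out["fail_directed"] | out["fail_speed"] | out["fail_straightline"]
    )
    return out
```

Dropping the flagged rows then instantiates the trade-off the abstract describes: a smaller sample (lower power, higher Type II error risk) in exchange for a reduced chance that significance rests on inattentive noise.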
"To avoid any space error or any tendency to a stereotyped response, it seems desirable to have the different statements so worded that about one-half of them have one end of the attitude continuum corresponding to the left or upper part of the reaction alternatives. ... These two kinds of statements ought to be distributed throughout the attitude test in a chance or haphazard manner." – Rensis Likert (1932)
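Likert's balanced wording implies a simple scoring rule: reverse-keyed items are flipped before aggregation, and a large gap between an item and its reversed counterpart marks a stereotyped response. A minimal sketch, assuming a 7-point scale; the item values and the inconsistency cutoff are hypothetical:

```python
# Reverse scaling on a 7-point Likert scale: a reverse-keyed item is
# recoded as (scale_max + 1 - response) before averaging.
SCALE_MAX = 7

def reverse_code(response: int) -> int:
    return SCALE_MAX + 1 - response

def inconsistent(pos_item: int, rev_item: int, cutoff: int = 3) -> bool:
    # A respondent attending to content should give roughly mirrored
    # answers to a statement and its reversed restatement.
    return abs(pos_item - reverse_code(rev_item)) >= cutoff

# Example: answering 7 ("strongly agree") to both "I enjoy my job"
# and "I dislike my job" yields |7 - 1| = 6, flagged as inconsistent.
assert inconsistent(7, 7)
assert not inconsistent(6, 2)
```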
1. Introduction
Generations of researchers have struggled with the stereotyped response: a response that does not accurately represent subjects' attitudes (Likert, 1932). The challenge for researchers is in distinguishing between a true attitude, belief, or behavioral response versus a stereotyped response without introducing bias from the researchers themselves. With the ready availability of online resources that facilitate primary data collection, the issue of response accuracy is particularly relevant. To assess such data quality issues, this paper addresses the following: how far have we come in identifying stereotyped responses, and what methods can be effectively used to address response validity at a fundamental level without introducing bias? The answers are not as simple as they
* Corresponding author.
E-mail addresses: jabbey@mays.tamu.edu (J.D. Abbey), mmeloy@psu.edu (M.G. Meloy).
Journal of Operations Management 53–56 (2017) 63–70
