Editor's Notes

Author: Mark A. Hager
DOI: http://doi.org/10.1002/nml.21380
Published: 01 June 2019
A few months back, a colleague and I submitted a manuscript to a management journal. It employed the same data I had analyzed for a 2014 paper published in Nonprofit and Voluntary Sector Quarterly. However, this new paper did not even get through the front door of the business journal. The editor wrote a thoughtful desk-rejection. "My decision is based on my experience that the reviewers will not respond positively to your manuscript, based on the fact that the data on which you perform your analyses were collected using a single-survey, self-report design, such that all variables of interest (independent, moderators, and dependent) were collected from the same subjects at a single point in time," she wrote. "Unfortunately, reviewers are highly sensitized to the issues associated with cross-sectional designs and common method variance, and routinely reject studies based on this method." This was not the first indication I have had that common method bias is front-of-mind for reviewers and editors in some fields, but it was a stark example of it.
Reviewers for Nonprofit Management & Leadership occasionally raise the issue of common
method bias, but my sense is that our reviewers do not emphasize this issue as much as reviewers do
in some other fields. This is both good and bad. The good is that reviewers and editors at some
journals or in some fields misunderstand or over-simplify the problem, and thereby reject manu-
scripts out of hand when key variables are collected from a common source. The interdisciplinary
field of nonprofit and philanthropic studies seems not to have fallen into this trap. The bad, however,
is that common method bias is a potentially serious issue that does not always get the attention from
authors and reviewers that it should. In this editorial, I want to offer a brief primer on the issue. A real treatment of method bias requires a full article or chapter, so this editorial serves merely as a heads-up for those who might be painting the issue with too broad a brush, or not at all.
COMMON METHOD BIAS
What is it? Method bias describes errors in variable measurement that stem from how we collect that
measurement. Ideally, our measurement perfectly captures the information or construct we identify.
However, environments and the internal states of respondents introduce noise that can threaten the
validity of the measurement. If we ask a respondent if she has eaten solid food today, the response is
likely to be valid so long as we have a common understanding of "solid food," a common interest in truth, and no problem with recall. More interesting questions are more difficult, however. If we ask a respondent if she is happy, the measurement is valid only when the respondent, the researcher, and other respondents share a notion of "happy"; the response options adequately capture the array of happiness (assuming happiness is best captured as an array); and the respondent is not inclined to offer only responses that would please the researcher. This is a tall order. So, our methods are replete with measurement problems. This editorial review of method bias, and especially common method bias, is heavily informed by an excellent article published in 2012 by Podsakoff and colleagues; the full reference is on page 472, below.
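The inflation that a shared method can produce is easy to see in a small simulation. The sketch below is a hypothetical illustration of the general idea, not anything from the Podsakoff article: two constructs that are truly unrelated appear correlated once a single per-respondent "method" factor (mood, acquiescence, social desirability) leaks into both self-report measures.

```python
import random
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
n = 5000

# Two latent constructs that are truly unrelated.
trait_a = [random.gauss(0, 1) for _ in range(n)]
trait_b = [random.gauss(0, 1) for _ in range(n)]

# A per-respondent "method" factor -- say, the respondent's mood or
# acquiescence on survey day -- that colors every self-report item.
method = [random.gauss(0, 1) for _ in range(n)]

# Both measures come from the same respondent at the same time,
# so the method factor leaks into each observed score.
obs_a = [t + m for t, m in zip(trait_a, method)]
obs_b = [t + m for t, m in zip(trait_b, method)]

print(f"true correlation:     {pearson(trait_a, trait_b):+.2f}")
print(f"observed correlation: {pearson(obs_a, obs_b):+.2f}")
```

With these (assumed) equal variances for trait and method, the observed correlation lands near 0.5 even though the true correlation is essentially zero, which is exactly the spurious relationship the desk-rejecting editor worried about.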
Nonprofit Management and Leadership. 2019;29:469–472. wileyonlinelibrary.com/journal/nml © 2019 Wiley Periodicals, Inc.
