We Know the Yin—But Where Is the Yang? Toward a Balanced Approach on Common Source Bias in Public Administration Scholarship

Date: 01 June 2017
Authors: Sanjay K. Pandey, Bert George
DOI: 10.1177/0734371X17698189
Subject Matter: Articles
Article

Review of Public Personnel Administration
2017, Vol. 37(2) 245-270
© The Author(s) 2017
Reprints and permissions: sagepub.com/journalsPermissions.nav
DOI: 10.1177/0734371X17698189
journals.sagepub.com/home/rop
Bert George1 and Sanjay K. Pandey2
Abstract
Surveys have long been a dominant instrument for data collection in public
administration. However, it has become widely accepted in the last decade that
the usage of a self-reported instrument to measure both the independent and
dependent variables results in common source bias (CSB). In turn, CSB is argued
to inflate correlations between variables, resulting in biased findings. Subsequently,
a narrowly blinkered approach to the use of surveys as a single data source has
emerged. In this article, we argue that this approach has resulted in an unbalanced
perspective on CSB. We argue that claims about CSB are exaggerated, draw upon
selective evidence, and project what should be tentative inferences as certainty
over large domains of inquiry. We also discuss the perceptual nature of some
variables and measurement validity concerns in using archival data. In conclusion,
we present a flowchart that public administration scholars can use to analyze CSB
concerns.
Keywords
common source bias, common method bias, common method variance, self-reported
surveys, public administration
1Erasmus University Rotterdam, The Netherlands
2George Washington University, Washington, DC, USA
Corresponding Author:
Bert George, Erasmus University Rotterdam, Burgemeester Oudlaan 50, Mandeville Building T17-35,
3000 DR Rotterdam, The Netherlands.
Email: george@fsw.eur.nl

Introduction
Traditionally, public administration as a research field has used surveys extensively to
measure core concepts (Lee, Benoit-Bryan, & Johnson, 2012; Pandey & Marlowe,
2015). Examples include survey items on public service motivation (PSM; for
example, Bozeman & Su, 2015; Lee & Choi, 2016), bureaucratic red tape (e.g., Feeney &
Bozeman, 2009; Pandey, Pandey, & Van Ryzin, 2016), public sector innovation (e.g.,
Audenaert, Decramer, George, Verschuere, & Van Waeyenberg, 2016; Verschuere,
Beddeleem, & Verlet, 2014), and strategic planning (e.g., George, Desmidt, & De
Moyer, 2016; Poister, Pasha, & Edwards, 2013). In doing so, public administration
scholars do not differ from psychology and management scholars who often draw on
surveys to measure perceptions, attitudes and/or intended behaviors (Podsakoff,
MacKenzie, & Podsakoff, 2012). In public administration, surveys typically consist of
a set of items that measure underlying variables, distributed to key informants such as
public managers, politicians and/or employees. Such items can be targeted at the
individual level, group level, and/or organizational level, where the latter two might
require some form of aggregation of individual responses (Enticott, Boyne, & Walker,
2009). Such surveys offer several benefits, a key one being the efficiency and
effectiveness in gathering data on a variety of variables simultaneously (Lee et al., 2012).
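The aggregation of individual responses to the organizational level mentioned above can be sketched as a simple within-group average. This is a minimal illustration only; the organization names and scores are invented, and real studies would typically first justify aggregation with agreement statistics.

```python
# Hypothetical data: individual survey responses (1-5 scale) nested in
# organizations; names and scores are invented for illustration.
responses = [
    ("org_a", 4), ("org_a", 5), ("org_a", 3),
    ("org_b", 2), ("org_b", 3),
]

# Group the individual responses by organization.
by_org = {}
for org, score in responses:
    by_org.setdefault(org, []).append(score)

# Aggregate to the organizational level by averaging within each group.
org_means = {org: sum(scores) / len(scores) for org, scores in by_org.items()}
print(org_means)  # → {'org_a': 4.0, 'org_b': 2.5}
```

In practice, scholars usually report an agreement index (e.g., rWG or ICC) before treating such a mean as an organization-level score.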
However, despite the ubiquitous nature and benefits of surveys as an instrument for
data collection in public administration, such surveys have not gone without criticism.
In recent years, one specific point of criticism has become a central focus of
public administration journals: common source bias (CSB).
CSB, along with interrelated terms such as common method bias, monomethod bias,
and common method variance (CMV), refers to potential issues that arise when scholars
use the same data source, typically a survey, to measure both independent and dependent
variables simultaneously (Favero & Bullock, 2015; Jakobsen & Jensen, 2015; Podsakoff,
MacKenzie, Lee, & Podsakoff, 2003; Spector, 2006). Specifically, correlations
between such variables are believed to be inflated due to the underlying CMV and the
derived findings are thus strongly scrutinized and, often, criticized by reviewers (Pace,
2010; Spector, 2006). CMV is defined by Richardson, Simmering, and Sturman (2009,
p. 763) as a “systematic error variance shared among variables measured with and
introduced as a function of the same method and/or source.” This variance can be
considered “a confounding (or third) variable that influences both of the substantive
variables in a systematic way,” which might (but will not necessarily) result in inflated
correlations between the variables derived from the same source (Jakobsen & Jensen,
2015, p. 5). When CMV does result in inflated correlations (or false positives), the
correlations are argued to suffer from CSB (Favero & Bullock, 2015).
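The inflation mechanism described above can be illustrated with a small simulation, a sketch under assumed loadings and sample size: all numbers here are invented for illustration and are not taken from the literature. Two substantive variables share a method factor, and the correlation between the same-source measures exceeds the true substantive correlation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# True substantive variables with a modest real correlation.
true_x = rng.normal(size=n)
true_y = 0.3 * true_x + rng.normal(size=n)

# A common method factor (e.g., a single informant's response style)
# that loads on BOTH measured variables.
method = rng.normal(size=n)
obs_x = true_x + 0.8 * method
obs_y = true_y + 0.8 * method

r_true = np.corrcoef(true_x, true_y)[0, 1]  # substantive correlation
r_obs = np.corrcoef(obs_x, obs_y)[0, 1]     # same-source correlation
print(f"true correlation:     {r_true:.2f}")
print(f"observed correlation: {r_obs:.2f}")  # inflated by the shared factor
```

With these assumed loadings the observed correlation roughly doubles the true one, which is the false-positive risk the CSB literature warns about; whether real surveys exhibit method loadings this strong is, of course, exactly what the debate is over.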
In the fields of management and psychology, the debate on CSB includes a variety
of perspectives (Podsakoff et al., 2012). While there are several scholars who argue the
existence of and necessity to address CSB in surveys (e.g., Chang, van Witteloostuijn,
& Eden, 2010; Harrison, McLaughlin, & Coalter, 1996; Kaiser, Schultz, & Scheuthle,
2007), there are others who argue that addressing CSB might require a more nuanced
approach (e.g., Conway & Lance, 2010; Fuller, Simmering, Atinc, Atinc, & Babin,

2016; Kammeyer-Mueller, Steel, & Rubenstein, 2010; Spector, 2006). Similarly,
editorial policies in management and psychology have ranged from, for instance, an
editorial bias at the Journal of Applied Psychology against any paper with potential
CSB issues (Campbell, 1982) to a tolerance of these papers—as long as the necessary
validity checks are conducted—at the Journal of International Business Studies
(Chang et al., 2010) as well as the Academy of Management Journal (Colquitt &
Ireland, 2009). In public administration, however, little has been written on CSB in
general, and studies that have discussed CSB typically center on CSB’s impact on
subjectively measured indicators of performance (e.g., Meier & O’Toole, 2013). Although
these studies offer empirical evidence that self-reports of performance suffer from
CSB and recommend avoiding self-report measures, counterarguments to the CSB
problem for self-reports other than performance measures seem to be completely
absent. As a result, an unbalanced approach to CSB has recently emerged in public
administration, where papers that draw on a survey as a single data source are greeted
with a blinkered concern for potential CSB issues, reminiscent of Abraham Kaplan’s
proverbial hammer (Kaplan, 1964).
Several illustrations indicate the existence of the proverbial CSB hammer. As part
of the editorial policy of the International Public Management Journal (IPMJ),
Kelman (2015) stipulates a “much-stricter policy for consideration of papers where
dependent and independent variables are collected from the same survey, particularly
when these data are subjective self-reports” and even “discourage[s] authors from
submitting papers that may be affected by common-method bias” (pp. 1-2). Other top
public administration journals also seem to be paying far more attention to CSB in
recent years. For instance, when comparing the number of studies that explicitly
mention CSB, common method bias, CMV, or monomethod bias in 2010 and in 2015, we
found that in 2010, the Review of Public Personnel Administration (ROPPA) published
no such studies, Public Administration Review (PAR) published one, and the Journal of
Public Administration Research and Theory (JPART) published six. In 2015, those
numbers had increased to four studies for ROPPA, six studies for PAR, and 10 studies
for JPART. A recent tweet by Don Moynihan—the PMRA president at the time—at the
Public Management Research Conference 2016 in Aarhus nicely summarizes the public
administration zeitgeist on CSB: “Bonfire at #pmrc2016: burning all papers with
common source bias.”
Our goal in this article is to move beyond “axiomatic and knee-jerk” responses and
bring balance to consideration of CSB in public administration literature. We argue
that the rapid rise of CSB in public administration literature represents an extreme
response and thus there is a need to take pause and carefully scrutinize core claims
about CSB and its impact. Our position is supported by four key arguments. First, we
argue that the initial claims about CSB’s influence might be exaggerated. Second, we
argue that claims about CSB in public administration draw upon selective evidence,
making “broad generalizing thrusts” suspect. Third, we argue that some variables are
perceptual and can only be measured through surveys. Finally, we argue that archival
data (collected from administrative sources) can be flawed and are not necessarily a
better alternative to surveys. We conclude our article with a flowchart that public

administration scholars can use as a decision guide to appropriately and reasonably
address CSB. As such, our study contributes to the methods debate within public
administration by illustrating that the issues surrounding CSB need a more nuanced
approach. There is a need to move beyond the reflexive and automatic invocation of CSB
as a scarlet letter and to restore balance by using a more thoughtful and discriminating
approach to papers using...
