Does Academic Research Destroy Stock Return Predictability?

R. DAVID MCLEAN and JEFFREY PONTIFF

THE JOURNAL OF FINANCE, VOL. LXXI, NO. 1, FEBRUARY 2016
DOI: 10.1111/jofi.12365
Published: 1 February 2016
ABSTRACT
We study the out-of-sample and post-publication return predictability of 97 variables shown to predict cross-sectional stock returns. Portfolio returns are 26% lower out-of-sample and 58% lower post-publication. The out-of-sample decline is an upper bound estimate of data mining effects. We estimate a 32% (58% − 26%) lower return from publication-informed trading. Post-publication declines are greater for predictors with higher in-sample returns, and returns are higher for portfolios concentrated in stocks with high idiosyncratic risk and low liquidity. Predictor portfolios exhibit post-publication increases in correlations with other published-predictor portfolios. Our findings suggest that investors learn about mispricing from academic publications.
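A worked reading of these estimates, assuming both declines are measured relative to the in-sample mean portfolio return, is:

\[
\underbrace{58\%}_{\text{post-publication decline}} \;-\; \underbrace{26\%}_{\text{out-of-sample decline (data-mining upper bound)}} \;=\; \underbrace{32\%}_{\text{decline attributed to publication-informed trading}}.
\]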
FINANCE RESEARCH HAS UNCOVERED many cross-sectional relations between predetermined variables and future stock returns. Beyond their historical interest, these relations are relevant to the extent that they provide insight into the future. Whether the typical relation continues outside a study's original sample is an open question, the answer to which can shed light on why cross-sectional return predictability is observed in the first place.¹
R. David McLean is at DePaul University and Jeffrey Pontiff is at Boston College and is an unpaid director at a non-profit called the Financial Research Association. We are grateful to the Q Group, the Dauphine-Amundi Chair in Asset Management, and SSHRC for financial support. We thank participants at the Financial Research Association's 2011 early ideas session, Auburn University, Babson College, Bocconi University, Boston College, Brandeis University, CKGSB, Georgia State University, HBS, HEC Montreal, MIT, Northeastern University, Simon Fraser, University of Georgia, University of Maryland, University of South Carolina, University of Toronto, University of Wisconsin, Asian Bureau of Finance and Economic Research Conference, City University of Hong Kong International Conference, Finance Down Under Conference 2012, Wilfrid Laurier, University of Washington Summer Conference, European Finance Association (Copenhagen), 1st Luxembourg Asset Management Conference, Ivey Business School, and Pontificia Universidad Catolica de Chile, and Pierluigi Balduzzi, Turan Bali, Brad Barber, Mark Bradshaw, David Chapman, Shane Corwin, Alex Edmans, Lian Fen, Wayne Ferson, Francesco Franzoni, Xiaohui Gao, Thomas Gilbert, Robin Greenwood, Bruce Grundy, Cam Harvey, Clifford Holderness, Darren Kisgen, Owen Lamont, Borja Larrain, Juhani Linnainmaa, Jay Ritter, Ronnie Sadka, Paul Schultz, Andrei Shleifer, Ken Singleton, Bruno Skolnik, Jeremy Stein, Noah Stoffman, Matti Suominen, Allan Timmermann, Michela Verado, Artie Woodgate, Jianfeng Yu, William Ziemba, three anonymous referees, and an anonymous Associate Editor for helpful comments.
¹ Similar to Mittoo and Thompson's (1990) study of the size effect, we use a broad set of predictors to focus on out-of-sample cross-sectional predictability. For an analysis of the performance of out-of-sample time-series predictability, see LeBaron (2000) and Goyal and Welch (2008). For an analysis of cross-sectional predictability using international data, see Fama and French (1998), Rouwenhorst (1998), and McLean, Pontiff, and Watanabe (2009). For an analysis of calendar effects, see Sullivan, Timmermann, and White (2001).
Although several papers note whether a specific cross-sectional relation continues out-of-sample, no study compares in-sample returns, post-sample returns, and post-publication returns for a large sample of predictors. Moreover, previous studies produce contradictory messages. As examples, Jegadeesh and Titman (2001) show that the relative returns to high momentum stocks increased after the publication of their 1993 paper, whereas Schwert (2003) argues that, since the publication of the value and size effects, index funds based on these variables fail to generate alpha.²
In this paper, we synthesize information for 97 characteristics shown to predict cross-sectional stock returns in peer-reviewed finance, accounting, and economics journals. Our goal is to better understand what happens to return predictability outside a study's sample period. We compare each predictor's returns over three distinct periods: (i) the original study's sample period, (ii) the period after the original sample but before publication, and (iii) the post-publication period. Previous studies attribute cross-sectional return predictability to statistical biases, rational pricing, and mispricing. By comparing the return predictability of the three periods, we can better differentiate between these explanations.
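To make the three-period comparison concrete, the sketch below shows one way to split a single predictor's long-short portfolio return series at its sample-end and publication dates and compare mean returns across the three windows. It is an illustrative outline only, not the authors' code: the pandas setup, the function name period_means, and the example dates are assumptions.

# Illustrative sketch (not the authors' code): mean long-short portfolio
# returns in-sample, post-sample (pre-publication), and post-publication.
import pandas as pd

def period_means(returns: pd.Series, sample_end: pd.Timestamp, pub_date: pd.Timestamp) -> dict:
    """returns: monthly long-short portfolio returns with a DatetimeIndex."""
    idx = returns.index
    return {
        "in_sample": returns[idx <= sample_end].mean(),
        "post_sample": returns[(idx > sample_end) & (idx <= pub_date)].mean(),
        "post_publication": returns[idx > pub_date].mean(),
    }

# Hypothetical usage for one predictor (dates are placeholders):
# m = period_means(ls_returns, pd.Timestamp("1989-12-31"), pd.Timestamp("1993-03-31"))
# out_of_sample_decline   = 1 - m["post_sample"] / m["in_sample"]        # data-mining upper bound
# post_publication_decline = 1 - m["post_publication"] / m["in_sample"]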
A. Statistical Bias
If return predictability in published studies results solely from statistical
biases, then predictability should disappear out of sample. We use the term
“statistical biases” to describe a broad array of biases inherent to research.
Fama (1991, p. 1585) addresses this issue when he notes that “With many clever
researchers on both sides of the efficiency fence, rummaging for forecasting
variables, we are sure to find instances of ‘reliable’ return predictability that
are in fact spurious.” To the extent that the results of the studies in our sample
are driven by such biases, we should observe a decline in return predictability
out-of-sample.
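As a purely illustrative sketch of this argument (not from the paper), searching many uninformative signals and keeping the best in-sample performer produces an apparent long-short spread that largely disappears in an independent sample; the simulation parameters below are arbitrary assumptions.

# Illustration (assumed setup, not the paper's): a data-mined "predictor"
# chosen for its in-sample spread shows little or no spread out of sample.
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_signals = 2000, 100
rets_in  = rng.normal(0.01, 0.05, n_stocks)         # in-sample stock returns
rets_out = rng.normal(0.01, 0.05, n_stocks)         # independent out-of-sample returns
signals  = rng.normal(size=(n_signals, n_stocks))   # pure-noise characteristics

def long_short_spread(signal, rets, frac=0.1):
    """Mean return of top-decile minus bottom-decile stocks sorted on the signal."""
    k = int(frac * len(rets))
    order = np.argsort(signal)
    return rets[order[-k:]].mean() - rets[order[:k]].mean()

in_sample = np.array([long_short_spread(s, rets_in) for s in signals])
best = in_sample.argmax()                            # the signal a data-miner would report
print("best in-sample spread:    ", round(in_sample[best], 4))
print("same signal out of sample:", round(long_short_spread(signals[best], rets_out), 4))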
B. Rational Expectations Versus Mispricing
Differences between in-sample and post-publication returns can be determined by both statistical biases and the extent to which investors learn from
² Lewellen (2014) uses 15 variables to produce a single rolling cross-sectional return proxy and shows that it predicts, with decay, next period's cross section of returns. Haugen and Baker (1996) and Chordia, Subrahmanyam, and Tong (2013) compare characteristics in two separate subperiods. Haugen and Baker show that each of their characteristics produces statistically significant returns in their second subperiod, whereas Chordia, Subrahmanyam, and Tong show that none of their characteristics are statistically significant in their second subperiod. Green, Hand, and Zhang (2013) identify 300 published and unpublished characteristics, but they do not estimate characteristic decay parameters as a function of publication or sample-end dates.
