Performance responses to competition across skill levels in rank‐order tournaments: field evidence and implications for tournament design

Authors: Kevin J. Boudreau, Karim R. Lakhani, Michael Menietti
DOI: https://doi.org/10.1111/1756-2171.12121
Published: 01 February 2016
RAND Journal of Economics
Vol. 47, No. 1, Spring 2016
pp. 140–165
Kevin J. Boudreau∗
Karim R. Lakhani∗∗
and
Michael Menietti∗∗∗
Tournaments are widely used in the economy to organize production and innovation. We study
individual data on 2775 contestants in 755 software algorithm development contests with random
assignment. The performance response to added contestants varies nonmonotonically across
contestants of different abilities, precisely conforming to theoretical predictions. Most participants
respond negatively, whereas the highest-skilled contestants respond positively. In counterfactual
simulations, we interpret a number of tournament design policies (number of competitors, prize
allocation and structure, number of divisions, open entry) and assess their effectiveness in shaping
optimal tournament outcomes for a designer.
1. Introduction
Contests and tournaments1 have long received wide interest from economists since the seminal
work of Lazear and Rosen (1981). Although explicit tournaments appear rare in the labor market,
the competition for promotion among executives, academics, and others can be modeled as a
tournament. In addition, tournaments have been used to induce technological advances and innovations
∗London Business School and Harvard Business School; kboudreau@london.edu.
∗∗Harvard University and Crowd Innovation Laboratory at Harvard Institute for Quantitative Social Science; k@hbs.edu.
∗∗∗Crowd Innovation Laboratory at Harvard Institute for Quantitative Social Science; mmenietti@fas.harvard.edu.
We are grateful to members of the TopCoder executive team for their considerable attention, support, and resources in
the carrying out of this project, including Jack Hughes, Rob Hughes, Andy LaMora, Mike Lydon, Ira Heffan, Mike
Morris, and Narinder Singh. For helpful comments, we thank seminar participants at Duke University, Georgia Tech
(REER conference), Harvard Business School, and London Business School. Constance Helfat (Dartmouth) provided
significant stimulating input to this article. The authors would also like to acknowledge financial support from the London
Business School Research and Materials Development Grant, the Harvard Business School Division of Research and
Faculty Development, and the NASA Tournament Laboratory. All errors are our own.
1In this article, we use the terms contests and tournaments interchangeably to denote rank-order based, relative
performance evaluation incentive schemes.
© 2016 The Authors. The RAND Journal of Economics published by Wiley Periodicals, Inc. on behalf of The
RAND Corporation. This is an open access article under the terms of the Creative Commons Attribution NonCommercial
License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited
and is not used for commercial purposes.
throughout history. Well-known “grand challenges” (Kay, 2011; Nicholas, 2011; Brunt, Lerner,
and Nicholas, 2012) include the X-prize for private space flight, Defense Advanced Research
Projects Agency (DARPA) challenges to develop autonomous vehicle technologies, and the Net-
flix contest to improve the company’s movie recommendation algorithm (Murray et al., 2012).
Since the turn of the century, online contest platforms (e.g., InnoCentive, Kaggle, and TopCoder)
that continuously host numerous contests have emerged to solve research and development
challenges for commercial companies, nonprofit organizations, and government agencies.2 These
platforms have greatly expanded the use of explicit tournaments for compensation in the labor
market.
In this article, we examine the performance response of competitors to the total number of
competitors in a contest. We build on the theoretical framework of rank-order contests advanced
by Moldovanu and Sela (2001, 2006) to clarify arguments for a heterogeneous response, in terms
of effort, across competitors of different ability levels. The framework predicts that as the number
of competitors increases, competitors with the lowest ability have little response, competitors
with intermediate ability respond negatively, and competitors with the highest ability respond
positively.
To see the intuition behind the heterogeneous response, consider how an existing competitor
responds to added competition. The optimal response of a competitor depends on ability
level. For low ability competitors, the probability of winning is already quite low and adding
competitors might then have little effect on the likelihood of winning and optimal effort level.
By contrast, a competitor of moderate ability will more likely have his probability of winning
and optimal effort level diminished by added competition. However, for a competitor of high
ability who might not have required high effort to win against the bulk of competitors, adding
greater numbers of competitors can increase the likelihood of facing a close competitor, thus
raising the optimal effort level to “stay ahead of the pack.” This is an effort-inducing rivalry or
racing effect (cf. Harris and Vickers, 1987).3
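The two opposing forces in this intuition can be seen in a stylized numerical sketch. This is our own illustration, not the model estimated in the article: rivals' abilities are drawn i.i.d. Uniform(0,1), a moderate player's chance of ranking first on ability alone collapses as rivals are added, while a near-top player's chance of facing at least one close rival keeps rising.

```python
# Stylized sketch (illustration only; not the Moldovanu-Sela model).
# Rivals' abilities are i.i.d. Uniform(0,1); "a" is the focal player's ability.

def top_rank_prob(a, n_rivals):
    """P(all n_rivals rivals have ability below a)."""
    return a ** n_rivals

def close_rival_prob(a, eps, n_rivals):
    """P(at least one rival's ability lands within eps of a)."""
    p_one = min(a + eps, 1.0) - max(a - eps, 0.0)  # per-rival chance
    return 1.0 - (1.0 - p_one) ** n_rivals

# Moderate ability (a = 0.5): top-rank chance falls geometrically in rivals,
# consistent with a weakening incentive to exert effort.
for n in (2, 5, 20):
    print(n, round(top_rank_prob(0.5, n), 4))

# High ability (a = 0.95): the chance of a close rival (eps = 0.05) keeps
# rising with added competitors, consistent with the "racing" pressure.
for n in (2, 5, 20):
    print(n, round(close_rival_prob(0.95, 0.05, n), 4))
```

The sketch only captures win probabilities, not equilibrium effort, but it shows why added entry pushes the two skill groups in opposite directions.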
Our main contribution is to estimate the relationship between performance and competition
across the distribution of ability. We study a field context, algorithm programming contests run by
TopCoder, Inc. We use data on 755 cash-prize contests between 2005 and 2007, in which varying
numbers of randomly assigned individuals competed to solve software algorithm problems.
The response to varying numbers of competitors is estimated using a nonparametric kernel
estimator. We find the specific, heterogeneous relationship predicted by theory. We then estimate
a parameterized version of the Moldovanu and Sela (2001) model, revealing results consistent
with the nonparametric model and affirming the usefulness of this framework. Next, we consider
a series of counterfactual contest design questions based on the structural estimates. We examine
the performance and cost implications of several design dimensions: the number of competitors,
the number of skill divisions, the distribution of prizes, and open entry to tournaments. A
range of contest design policies allows statistically and economically significant manipulation of
tournament outcomes. Given the widespread use of tournaments in the economy and potentially
different objectives of tournament sponsors, these policies provide useful “levers” for tournament
designers. For example, sales managers may run contests with the goal of maximizing total sales
(Casas-Arce and Martínez-Jerez, 2009), whereas those managing a research and development
tournament may only be concerned with attracting the best possible solution (Fullerton and
McAfee, 1999).
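For readers unfamiliar with nonparametric kernel estimation of a response curve, the general idea can be sketched as follows. This is a generic Nadaraya-Watson regression with a Gaussian kernel; the data, variable names, and bandwidth are our own illustration, not the article's specification.

```python
import numpy as np

def kernel_regression(x_train, y_train, x_eval, bandwidth):
    """Nadaraya-Watson estimator: a locally weighted average of y_train,
    with Gaussian weights centered at each evaluation point."""
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    estimates = []
    for x0 in np.atleast_1d(x_eval):
        weights = np.exp(-0.5 * ((x_train - x0) / bandwidth) ** 2)
        estimates.append(np.sum(weights * y_train) / np.sum(weights))
    return np.array(estimates)

# Example: recover a smooth response curve from noiseless data.
x = np.linspace(0, 1, 101)
y = x ** 2
print(kernel_regression(x, y, [0.5], bandwidth=0.05))  # close to 0.25
```

The appeal of such an estimator in this setting is that it imposes no functional form, so a nonmonotonic response to the number of competitors can emerge directly from the data.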
The article proceeds as follows. In Section 2, we discuss the related literature on tournaments
and all-pay auctions. We develop predictions based on the Moldovanu and Sela (2001) model
of tournaments in Section 3. Section 4 describes the empirical context and data set. Section 5
2The US government recently passed legislation giving prize-based procurement authority to all federal agencies
(Bershteyn and Roekel, 2011).
3Analogous arguments regarding countervailing effects of competition on innovation incentives have been made
using different setups and distinct mechanisms in areas such as market competition (Aghion et al., 2005) and patent races
(Schmalensee, Armstrong, and Willig, 1989).