Gaming and strategic opacity in incentive provision
Florian Ederer∗, Richard Holden∗∗, and Margaret Meyer∗∗∗
The RAND Journal of Economics, Vol. 49, No. 4, Winter 2018, pp. 819–854
DOI: http://doi.org/10.1111/1756-2171.12253
We study the benefits and costs of “opacity” (deliberate lack of transparency) of incentive schemes
as a strategy to combat gaming by better informed agents. In a two-task moral hazard model
in which only the agent knows which task is less costly, the agent has an incentive to focus his
effort on the less costly task. Opaque schemes, which make a risk-averse agent uncertain about
which task will be more highly rewarded, mitigate such gaming but impose more risk. We identify
environments in which opaque schemes not only dominate transparent ones, but also eliminate
the costs of the agent’s hidden information.
1. Introduction
A fundamental consideration in designing incentive schemes is the possibility of gaming:
exploitation of an incentive scheme by an agent for his own self-interest to the detriment of
the objectives of the incentive designer. Gaming can take numerous forms, among them (i)
diversion of effort away from activities which are socially valuable but difficult to measure and
reward, toward activities that are easily measured and rewarded; (ii) exploitation of the rules of
classification to improve apparent, though not actual, performance; and (iii) distortion of choices
∗Yale University; florian.ederer@yale.edu.
∗∗University of New South Wales; richard.holden@unsw.edu.au.
∗∗∗Oxford University and CEPR; margaret.meyer@nuffield.ox.ac.uk.
We wish to thank David Martimort (the Editor) and two anonymous referees for comments and suggestions that greatly
improved the article. We are also grateful to Philippe Aghion, Sushil Bikhchandani, Simon Board, Patrick Bolton, Jeremy
Bulow, Vince Crawford, Kate Doornik, Mikhail Drugov, Guido Friebel, Edoardo Gallo, Robert Gibbons, Edward Glaeser,
Oliver Hart, Bengt Holmström, Ian Jewitt, Navin Kartik, Paul Klemperer, Shmuel Leshem, Phillip Leslie, Steven Lippman,
Steven Matthews, John Moore, Andy Newman, In-Uck Park, Charles Roddie, Jesse Shapiro, Ferdinand von Siemens,
Jason Snyder, Alexander Stremitzer, Jeroen Swinkels, and Jean Tirole for helpful discussions and suggestions, as well as to
seminar participants at Bonn, Boston University, Bristol, Chicago, Columbia, Frankfurt, Georgia Tech, Gerzensee, HBS,
LSE, MIT, Michigan, NES Moscow, Oxford, Queen’s, Rotterdam, UCL, UCLA, USC, and Yale. Holden acknowledges
the Australian Research Council Future Fellowship FT130101159 and the Centre for International Financial Regulation
at UNSW for financial support.
about timing to exploit temporarily high monetary rewards even when socially efficient choices
have not changed. Evidence of the first type of gaming is provided by Burgess, Propper, Ratto,
and Tominey (2017) and Carrell and West (2010), of the second type by Gravelle, Sutton, and
Ma (2010), and of the third type by Oyer (1998), Larkin (2014), and Forbes, Lederman, and
Tombe (2015).[1] The costs of gaming are exacerbated when the agent has superior knowledge of
the environment: this makes the form and extent of gaming harder to predict and hence, harder
to deter.
It has been suggested that lack of transparency—deliberate opacity about the criteria upon
which rewards will be based and/or how heavily these criteria will be weighted—can help deter
gaming. This idea has a long intellectual history. It dates back at least to Bentham (1830), who
argued that deliberate opacity about the content of civil service selection tests would lead to the
“maximization of the inducement afforded to exertion on the part of learners, by impossibilizing
the knowledge as to what part of the field of exercise the trial will be applied to, and thence making
aptitude of equal necessity in relation to every part.”[2]
More recently, responding to documented gaming of the highly transparent incentive schemes
which score National Health Service organizations in England according to published lists of
precisely defined performance indicators, Bevan and Hood (2004) argued in the British Medical
Journal, “What is needed are ways of limiting gaming. And one way of doing so is to introduce
more randomness in the assessment of performance, at the expense of transparency.” They invoke
the “analogy [...] with the use of unseen examinations, where the unpredictability of what the
questions will be means that it is safest for students to cover the syllabus.” They reason that
making it harder for hospitals to predict what performance measures will be used and how they
will be weighted, coupled with hospitals’ risk aversion, will reduce the hospitals’ incentives
for gaming. Similarly, Dranove, Kessler, McClellan, and Satterthwaite (2003) document that
in the United States, report cards for hospitals “encourage providers to ‘game’ the system by
avoiding sick patients or seeking healthy patients or both” and they argue that such gaming is
facilitated by “risk-averse providers having better information about patients’ conditions” than do
the analysts who compile the report cards. They present evidence that the increased transparency
of incentive schemes for physicians and hospitals provided by report cards increased gaming and
even decreased patient and social welfare.[3]
The costs of transparency have also been discussed in the context of gaming, by law school
deans, of the performance indicators used by U.S. News to produce its influential law school
rankings. The ranking methodology is transparent and employs a linear scoring rule incorporating
multiple performance indicators.[4] There is significant evidence that law schools deploy a range
of strategies that exploit their informational advantage over U.S. News to increase their measured
performance. Examples include cutting the number of full-time students to boost median LSAT
scores and GPAs, creating make-work jobs for their own graduates to inflate the number in
[1] Burgess et al. (2017) and Gravelle, Sutton, and Ma (2010) study United Kingdom public sector organizations
(an employment agency and the National Health Service, respectively), Carrell and West (2010) use data from post-
secondary education, whereas Oyer (1998), Larkin (2014), and Forbes, Lederman, and Tombe (2015) examine private
sector organizations (salespeople and executives across various industries, enterprise software vendors, and airlines,
respectively).
[2] Bentham, 1830/2005, Ch. IX, §16, Art. 60.1.
[3] Relatedly, Google has experienced manipulation of its search results by some retailers. Although many retailers
have been seeking greater transparency from Google about its search algorithm, Google has responded by moving in the
direction of greater opacity to prevent manipulation (Structural Search Engine Optimization, Google Penalty Solutions,
November 4, 2011, www.re1y.com/blog/occupy-google-blog.html). Motivated in part by this debate, Frankel and Kartik
(2014) develop a signalling model of gaming in which the information conveyed by signals (e.g., prominence in search
results) about agents’ hidden characteristics (e.g., intrinsic relevance to the query) is “muddled” because agents are also
privately informed about their gaming ability. Other theoretical treatments of gaming of incentive schemes include Jehiel
and Newman (2011) and Barron, Georgiadis, and Swinkels (2017).
[4] The weights in the scoring rule are quality perception (40%), selectivity (25%), placement success (20%),
and faculty resources (15%) (U.S. News, March 11, 2013, www.usnews.com/education/best-graduate-schools/top-law-
schools/articles/2013/03/11/methodology-best-law-schools-rankings).
employment, and heavily advertising their faculty’s scholarship to U.S. News.[5] Law scholars (e.g.,
Osler, 2010) have argued that greater opacity in the ranking methodology could mitigate gaming,
and U.S. News has itself signalled its intention to move away from being “totally transparent about
key methodology details.”[6]
Finally, one view as to why courts often prefer standards—which are somewhat vague—to
specific rules is that standards mitigate incentives for gaming. For example, Weisbach (2000)
argues that vagueness can reduce gaming of taxation rules, and Scott and Triantis (2006) argue
that vague standards in contracts can improve parties’ incentives to fulfill the spirit of the contract
rather than focusing on satisfying only the narrowly defined stipulations.
The examples discussed above suggest that “opacity” (i.e., lack of transparency) of incentive
schemes can be beneficial in reducing gaming, especially when agents have superior knowledge
of the environment, when incentive designers care about multiple aspects of performance, and
when gaming takes the form of agents’ focusing efforts on easily manipulable indicators. This line
of argument is, however, incomplete. If agents are risk-averse, then the additional risk imposed
by opaque schemes is per se unattractive to them. Understanding when and why opaque schemes
are used thus requires analyzing the trade-off between their incentive benefits and their risk costs.
The present article provides such an analysis.
Our analysis incorporates three vital ingredients that are featured in all of our motivating
examples: (i) the agent’s superior information about the environment, (ii) the agent’s risk aversion,
and (iii) the incentive designer’s need for the agent to choose a relatively balanced allocation of
efforts across activities. This suite of ingredients (along with a contractual restriction to incentive
schemes that are ex post linear) delivers two main messages. First, transparent incentive schemes,
even when they involve menus, suffer dramatically from the problem of gaming by the agent.
Second, opaque incentive schemes not only mitigate the problem of gaming but can generate a
higher payoff for the principal.[7]
In our model, “opacity” corresponds to a lack of transparency about the weights on per-
formance indicators that are used to determine rewards. Motivated by the examples discussed
above, we build on Holmstrom and Milgrom’s (1991) multitask principal-agent model in which
a risk-averse agent performs two tasks, which are substitutes in his cost-of-effort function, and
receives compensation that is linear in his performance on each of the tasks. These linear contracts
(which have been widely studied) are “transparent” in that the agent faces no uncertainty about
the rate at which performance on each of the tasks is rewarded. The principal’s benefit function
is complementary in the agent’s efforts on the two tasks; other things equal, she prefers to induce
both types of agent to choose balanced efforts.[8] Into this familiar setup, we introduce superior
knowledge of the environment on the part of the agent. There are two types of agent, and only
the agent knows which type he is. One type has a lower cost of effort on task 1, and the other has
a lower cost of effort on task 2.[9]
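The article does not spell out functional forms at this point in the text, but a minimal illustrative
specification in the spirit of Holmstrom and Milgrom (1991) may help fix ideas; the quadratic costs,
normal measurement noise, and CARA utility below are our assumptions for exposition, not necessarily
the paper’s exact formulation. Measured performance on task $k$ is $x_k = e_k + \varepsilon_k$ with
$\varepsilon_k \sim N(0, \sigma^2)$, a transparent linear scheme pays $w = \beta + \alpha_1 x_1 + \alpha_2 x_2$,
and a type-1 agent (lower cost on task 1, $c_L < c_H$) has effort cost
$C_1(e_1, e_2) = \tfrac{1}{2}\left(c_L e_1^2 + c_H e_2^2\right)$, with type 2 the mirror image. With CARA
risk aversion $r$, the agent maximizes the certainty equivalent
$$ CE = \beta + \alpha_1 e_1 + \alpha_2 e_2 - C_i(e_1, e_2) - \frac{r}{2}\left(\alpha_1^2 + \alpha_2^2\right)\sigma^2 , $$
so under equal, publicly known weights $\alpha_1 = \alpha_2 = \alpha$, a type-1 agent chooses
$e_1 = \alpha / c_L > e_2 = \alpha / c_H$: effort tilts toward his privately known low-cost task, even
though the principal’s complementary benefit function values balanced efforts.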
The privately informed agent games transparent incentive schemes by choosing effort allo-
cations that are excessively (from an efficiency perspective) sensitive to his private information.
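Continuing the illustrative specification above (again our notation, not necessarily the paper’s exact
formulation), an opaque scheme can be thought of as a lottery over the weights that is resolved only
after effort is chosen,
$$ (\alpha_1, \alpha_2) \in \{(\alpha_H, \alpha_L), (\alpha_L, \alpha_H)\}, \quad \text{each with probability } \tfrac{1}{2}, \qquad \alpha_H > \alpha_L , $$
with the agent maximizing $\mathbb{E}_{\alpha}\, u\!\left(\beta + \alpha_1 e_1 + \alpha_2 e_2 - C_i(e_1, e_2)\right)$
before learning the realized weights. Tilting effort toward the low-cost task now pays off only in the
state in which that task happens to carry the high weight, so the tilt raises the variance of
compensation; a risk-averse agent therefore moves back toward balanced efforts. This is the incentive
benefit of opacity, purchased at the cost of the extra weight risk the agent must bear.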
[5] Law School Rankings Reviewed to Deter “Gaming,” Wall Street Journal, August 26, 2008.
[6] U.S. News, May 20, 2010, www.usnews.com/education/blogs/college-rankings-blog/2010/05/20/us-news-takes-
steps-to-stop-law-schools-from-manipulating-the-rankings.
[7] The terms “opaque” and “transparent” may have alternative definitions in other contexts, but here, where we
confine attention to compensation schedules that are ex post linear, an “opaque” incentive scheme will always be one that
leaves the agent, when choosing efforts, uncertain about the incentive coefficients he will face, whereas a “transparent”
scheme will be one under which the agent faces no such uncertainty.
[8] Our model, like Holmstrom and Milgrom’s (1991), incorporates shocks to measured performance. These shocks
are not essential for our two main messages, given our focus on contracts that are ex post linear. In fact, as shown in
Section 6, our findings about the benefits of opaque incentive schemes would be even stronger in the absence of such
shocks. Nonetheless, it is natural to include them in the analysis; if the agent’s efforts were directly observable by the
principal, then the problem of moral hazard could be trivially solved by a so-called “forcing contract.”
[9] The analysis would be very similar if the agent types differed with respect to the task on which they were more
productive.