Investment in concealable information by biased experts

Authors: Navin Kartik, Frances Xu Lee, Wing Suen
Published: 01 March 2017
DOI: http://doi.org/10.1111/1756-2171.12166
RAND Journal of Economics, Vol. 48, No. 1, Spring 2017, pp. 24–43
Navin Kartik∗, Frances Xu Lee∗∗, and Wing Suen∗∗∗
We study a persuasion game in which biased—possibly opposed—experts strategically acquire
costly information that they can then conceal or reveal. We show that information acquisition
decisions are strategic substitutes when experts have linear preferences over a decision maker’s
beliefs. The logic turns on how each expert expects the decision maker’s posterior to be affected
by the presence of other experts should he not acquire information that would turn out to be
favorable. The decision maker may prefer to solicit advice from just one biased expert even when
others—including those biased in the opposite direction—are available.
1. Introduction
A prevalent view is that a decision maker (DM) benefits from consulting more experts,
particularly when these experts have opposing interests in the DM’s action. This belief underlies
the design of many decision-making processes: judiciaries listen to both defendants and plaintiffs;
Congress hears from proponents and opponents of a bill; and the US Food and Drug Adminis-
tration (FDA) relies on evidence furnished by companies that seek approval from the FDA and
on independent investigators. These and other institutions strive to improve the accuracy of their
decisions by soliciting information from multiple, often interested, parties.
Starting with Milgrom and Roberts (1986), the literature on persuasion or voluntary-
disclosure games has formally shown that in many—though not all—settings, adversarial proce-
dures do facilitate information revelation from interested agents. The literature’s focus has largely
been on the revelation of exogenously given information.1 In practice, however, information acquisition is endogenous and entails significant costs: prosecutors juggle many cases and optimize how
∗Columbia University; nkartik@gmail.com.
∗∗Loyola University Chicago; francesxu312@gmail.com.
∗∗∗University of Hong Kong; wsuen@econ.hku.hk.
We thank Claude Fluet, Bruno Jullien, Satoru Takahashi, Rakesh Vohra, the Editor (Ben Hermalin), and anonymous
referees for their comments and valuable advice. Teck Yong Tan provided excellent research assistance.
1We discuss the literature in more detail subsequently, but a notable exception is Dewatripont and Tirole (1999).
They allow the DM to commit to outcome-based payments for the agents; we are instead interested in sequentially rational
© 2017, The RAND Corporation.
much time to spend on searching for evidence in each case, lobby groups decide how many and
which consultants to hire, and drug companies face an array of costly clinical trials that they can
choose among. Untrained intuition does not illuminate how an interested agent’s incentives to
acquire information are affected by the presence of an opposed agent. One may reckon that the
incentive to acquire information increases because more favorable evidence is needed to counter
the other agent, or one might conjecture the incentive decreases because the DM becomes less
responsive to any one agent’s information.
This article endogenizes information acquisition in a multiple-expert disclosure game;
in particular, we study the impact of adding experts. In our model, detailed in Section 2,
experts first choose how much information to acquire and then what information to disclose.
Following Grossman (1981) and Milgrom (1981), we view information as hard evidence that
can be concealed but not falsified. We assume that experts simply care about the DM’s belief,
independently of the true “state of the world.” The DM, on the other hand, benefits from
information about the state. We depart from much of the disclosure literature (e.g., Milgrom
and Roberts, 1986; Shin, 1998; Bhattacharya and Mukherjee, 2013) by assuming that informed
experts do not necessarily receive the same information; in our baseline model, they receive
signals that are independent conditional on the state.
Experts’ disclosure behavior in this setting takes the form of “sanitization strategies” (Shin,
1994): each expert simply conceals information that is unfavorable to his own cause while
revealing favorable information.2 Our main result, developed in Section 3, is that adding more
interested experts (either like-minded or opposed) can harm the DM because it reduces each
expert’s incentive to invest in costly information—even if experts’ disclosure behavior remains
unaffected by the number of other experts. In other words, the DM must trade off individual
quality with quantity; fewer but better informed experts can be preferable to a larger number of
less-informed experts. More broadly, we establish that experts’ information acquisition decisions
are strategic substitutes when experts have linear preferences over the DM’s expectation of the
state of the world. This linearity assumption plays a key role in our analysis.
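The sanitization strategy and the DM's skeptical response to nondisclosure can be made concrete with a minimal one-expert calculation. The parameterization below (binary state with prior 1/2, evidence probability Q, signal accuracy P) is our own illustrative assumption, not the paper's model:

```python
# Minimal one-expert sketch of a sanitization strategy. The binary
# state, evidence probability Q, and signal accuracy P are our own
# illustrative assumptions, not the paper's parameterization.
P, Q = 0.8, 0.5

# P(message | theta) under the sanitization strategy: the expert
# discloses a favorable signal ('h') and feigns ignorance ('nd')
# when uninformed or after an unfavorable signal (as in Dye, 1985).
msg = {
    'H': {'h': Q * P, 'nd': 1 - Q * P},
    'L': {'h': Q * (1 - P), 'nd': 1 - Q * (1 - P)},
}

def posterior(m):
    """DM's Bayesian posterior that theta = H after message m (prior 1/2)."""
    return msg['H'][m] / (msg['H'][m] + msg['L'][m])

print(posterior('h'))   # 0.8: disclosed hard evidence is fully credible
print(posterior('nd'))  # 0.4: skeptical belief, below the prior of 0.5
```

Because nondisclosure pools concealed unfavorable evidence with genuine ignorance, the classic unraveling argument fails: the DM can respond only with skepticism (a belief of 0.4 here, not 0), so concealing an unfavorable signal is optimal.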
The logic underlying our findings is as follows. An expert benefits from acquiring information
only when he obtains evidence that he will disclose (i.e., favorable information). In such an
event, having evidence allows him to raise the DM’s belief from the skeptical belief associated
with nondisclosure. When there are multiple experts, the DM’s belief is influenced by all their
messages (either their evidence or claim to ignorance). Crucially, from any one expert’s point of
view, whenever he discloses his information the expected belief of the DM is independent of any
other expert’s equilibrium behavior; this is a consequence of the iterated expectations property of
Bayesian updating. However, an expert’s expectation of the DM’s belief conditional on favorable
information that is not disclosed does depend on other experts’ equilibrium behavior. The reason
is that the DM’s skeptical nondisclosure belief is “wrong” from the point of view of the expert
with favorable information; as established by Kartik, Lee, and Suen (2015), the more informative
other experts are in the sense of Blackwell (1951, 1953), the more their messages will, on average,
correct this belief.3Thus, any expert has less to lose by not acquiring (and disclosing) favorable
information in the presence of other experts, which in turn implies that his incentive to invest in
information is diminished when there are more experts.
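This mechanism can be illustrated with an exact two-expert Bayesian calculation. All numbers below (signal accuracy P, evidence probability Q, conditionally independent binary signals) are our own illustrative assumptions, not values from the paper:

```python
# Exact two-expert illustration of why information acquisition
# incentives fall when an expert is added. State theta in {H, L},
# prior 1/2; all parameter values below are our own assumptions.
P = 0.8  # signal accuracy (assumed)
Q = 0.5  # equilibrium probability an expert holds evidence (assumed)

def message_probs(q):
    """P(message | theta) for an expert holding evidence w.p. q:
    'h' = favorable signal disclosed; 'nd' = nondisclosure
    (uninformed, or concealing an unfavorable signal)."""
    return {'H': {'h': q * P, 'nd': 1 - q * P},
            'L': {'h': q * (1 - P), 'nd': 1 - q * (1 - P)}}

def dm_posterior(messages, qs):
    """DM's Bayesian posterior that theta = H given both messages."""
    like_H = like_L = 0.5  # uniform prior
    for m, q in zip(messages, qs):
        probs = message_probs(q)
        like_H *= probs['H'][m]
        like_L *= probs['L'][m]
    return like_H / (like_H + like_L)

def expected_dm_belief(own_message, q2):
    """Expert 1's expectation of the DM's posterior when he privately
    holds favorable evidence (his own belief is P(H) = P), sends
    own_message, and expert 2 holds evidence with probability q2."""
    probs2 = message_probs(q2)
    total = 0.0
    for m2 in ('h', 'nd'):
        weight = P * probs2['H'][m2] + (1 - P) * probs2['L'][m2]
        if weight > 0:  # skip zero-probability messages
            total += weight * dm_posterior((own_message, m2), (Q, q2))
    return total

# The expected DM belief after disclosure is P for any q2 (iterated
# expectations), while the expected belief after concealment rises
# with q2, so the gain from holding favorable evidence shrinks:
# 0.4000 when alone vs. about 0.3497 with a second expert.
for q2 in (0.0, Q):
    gain = expected_dm_belief('h', q2) - expected_dm_belief('nd', q2)
    print(f"q2 = {q2}: gain from favorable evidence = {gain:.4f}")
```

With these numbers, the gain from favorable evidence falls from 0.40 when expert 1 is alone to roughly 0.35 alongside an equally informative second expert, while the expected belief after disclosure stays at P = 0.8 by iterated expectations: this is the strategic-substitutes force described above.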
The same logic implies that information acquisition efforts are strategic substitutes across
experts. From the perspective of any one expert, another expert can be viewed as an endogenous
experiment, the informativeness of which depends on the information acquisition (and disclosure
decision making. Moreover, the bulk of their analysis concerns incentivizing agents who are not intrinsically interested in the DM’s action.
2The classic unraveling phenomenon does not occur because there is positive probability that an expert does not
have any hard information, as in Dye (1985); upon receiving unfavorable information, an expert can feign ignorance.
3Kartik, Lee, and Suen (2015) do not study endogenous information acquisition. Furthermore, Section 4 of the
current article considers a setting in which Kartik, Lee, and Suen’s (2015) general result cannot be applied because
informed experts’ signals are not conditionally independent.
