Biased recommendations from biased and unbiased experts

Wonsuk Chung¹ | Rick Harbaugh²

¹Korea Insurance Research Institute, Seoul, South Korea
²Department of Business Economics and Public Policy, Kelley School of Business, Indiana University, Bloomington, Indiana

Received: 17 August 2016 | Revised: 30 April 2018 | Accepted: 17 August 2018
Published: 01 June 2019
DOI: 10.1111/jems.12293

ORIGINAL ARTICLE

Correspondence
Rick Harbaugh, Department of Business Economics and Public Policy, Kelley School of Business, Indiana University, Bloomington, IN 47405.
Email: riharbau@indiana.edu
Abstract
When can you trust an expert to provide honest advice? We develop and test a
recommendation game where an expert helps a decision maker choose among
two actions that benefit the expert and an outside option that does not. For
instance, a salesperson recommends one of two products to a customer who
may instead purchase nothing. Subject behavior in a laboratory experiment is
largely consistent with predictions from the cheap talk literature. For sufficient
symmetry in payoffs, recommendations are persuasive in that they raise the
chance that the decision maker takes one of the actions rather than the outside
option. If the expert is known to have a payoff bias toward an action, such as a
salesperson receiving a higher commission on one product, the decision maker
partially discounts a recommendation for it and is more likely to take the
outside option. If the bias is uncertain, then biased experts lie even more,
whereas unbiased experts follow a political correctness strategy of pushing the
opposite action so as to be more persuasive. Even when the expert is known to
be unbiased, if the decision maker already favors an action the expert panders
toward it, and the decision maker partially discounts the recommendation. The
comparative static predictions hold with any degree of lying aversion up to pure
cheap talk, and most subjects exhibit some limited lying aversion. The results
highlight that the transparency of expert incentives can improve communication, but need not ensure unbiased advice.
KEYWORDS
cheap talk, pandering, persuasion, political correctness, transparency
1 | INTRODUCTION
Experts provide information about different choices to decision makers. But experts often benefit more from some
choices than from others, such as a salesperson who earns a higher commission on a more expensive product and
receives nothing if the customer walks away. Can an expert's recommendation still be persuasive with such conflicts of
interest, or will it be completely discounted? What if the decision maker suspects that the expert benefits more from one
choice but is not sure? And how is communication affected if the expert knows that the decision maker is already
leaning toward a particular choice?
These issues are important to the design of incentive and information environments in which experts provide advice.
J Econ Manage Strat. 2019;28:520–540. wileyonlinelibrary.com/journal/jems. © 2018 Wiley Periodicals, Inc.
In recent years, the incentives of mortgage brokers to recommend high-cost loans (Agarwal, Amromin, Ben-David,
Chomsisengphet, & Evanoff, 2014), of financial advisors to provide self-serving advice (Egan, Matvos, & Seru, 2016), of
doctors to prescribe expensive drugs (Iizuka, 2012), and of media firms to push favored agendas (DellaVigna & Kaplan,
2007) have all come under scrutiny. Can such problems be resolved by requiring disclosure of any conflicts of interest,
or is it necessary to eliminate biased incentives?
1
And are unbiased incentives always sufficient to ensure unbiased
advice?
To gain insight into such questions, several papers have applied the cheap talk approach of Crawford and Sobel
(1982) to discrete choice environments where an expert has private information about different actions and the decision
maker has an outside option that is the expert's least favored choice (e.g., Chakraborty & Harbaugh, 2007, 2010; Che,
Dessein, & Kartik, 2013; De Jaegher & Jegers, 2001).² Based on this literature, we develop and test a simplified
recommendation game in which an expert knows which of two actions is better for a decision maker and may have an
incentive to push one of the actions more than the other. The decision maker can take either action or pursue an
outside option. Despite its simplicity, the model captures several key phenomena from the literature.
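The persuasion logic of this game can be sketched in a short simulation. All specifics below are illustrative assumptions, not the paper's experimental parameters: action values are i.i.d. uniform, the outside option is fixed at 0.6, and the expert plays truthfully.

```python
import random

def acceptance_rate(n_rounds=100_000, outside_option=0.6, seed=0):
    """Monte Carlo sketch of the two-action recommendation game.

    Both action values are i.i.d. uniform on [0, 1] and the expert,
    who observes them, recommends the better one. The recommended
    action is then worth E[max(v1, v2)] = 2/3 in expectation, versus
    E[v] = 1/2 without advice, so the recommendation is persuasive:
    it can beat outside options that an unadvised pick would not.
    Here we track how often the recommended action's realized value
    exceeds the outside option.
    """
    rng = random.Random(seed)
    accepted = 0
    for _ in range(n_rounds):
        v1, v2 = rng.random(), rng.random()
        if max(v1, v2) > outside_option:  # expert pushes the better action
            accepted += 1
    return accepted / n_rounds

# P(max(v1, v2) > 0.6) = 1 - 0.6**2 = 0.64, versus P(v > 0.6) = 0.4
# for a single action chosen without a recommendation.
```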
First, for sufficient payoff symmetry, recommendations are "persuasive" in that they benefit the expert by reducing
the chance that the decision maker walks away without taking either action. Even though a recommendation is only
cheap talk, it is still credible since it raises the expected value of one action at the opportunity cost of lowering the
expected value of the other action. Such "comparative cheap talk" is persuasive since the higher expected value of the
recommended action is now more likely to exceed the decision maker's outside option. For instance, a customer is
more likely to make a purchase if a recommendation persuades him that at least one of two comparably priced products
is of high quality. In our experimental results, we find that recommendations are usually accepted and are almost
always accepted when the decision maker's outside option is unattractive.
Second, when the expert is known to be biased in the sense of having a stronger incentive to push one action, a
recommendation for that action is "discounted" in that the decision maker is more likely to ignore the recommendation
and stick with the outside option. Therefore, in equilibrium, the expert faces a tradeoff where one recommendation
generates a higher payoff if it is accepted, while the other recommendation is more likely to be accepted. In our
experiment, we find that experts are significantly more likely to recommend the more incentivized action than the other
action, whereas decision makers are significantly less likely to accept a recommendation for the more incentivized
action than the other action.
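The expert's tradeoff here reduces to a one-line expected-payoff comparison. The commission and acceptance numbers below are hypothetical, chosen only to make the tension concrete:

```python
def expert_expected_payoffs(commissions, acceptance_probs):
    """Expected payoff to the expert from recommending each action:
    the commission is earned only if the decision maker accepts the
    recommendation rather than taking the outside option, which pays
    the expert nothing."""
    return [c * p for c, p in zip(commissions, acceptance_probs)]

# Hypothetical numbers: the higher-commission action is partially
# discounted, so it is accepted less often (0.5 vs 0.75).
payoffs = expert_expected_payoffs(commissions=[10, 6], acceptance_probs=[0.5, 0.75])
# payoffs == [5.0, 4.5]: one recommendation pays more if accepted,
# the other is more likely to be accepted.
```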
Third, when the decision maker is known to already favor one action before listening to the expert, the expert
benefits by "pandering" to the decision maker and recommending that action even when the other action is actually
better. Hence, biased recommendations can arise even when the expert's incentives for each action are the same (Che
et al., 2013). The decision maker anticipates such pandering and, just as in the asymmetric incentives case, discounts a
recommendation for that action so that the incentive to lie is mitigated. In our experiment, we find that experts are
significantly more likely to recommend the favored action than the other action, and decision makers are significantly
more likely to discount such a recommendation than in the symmetric case.
Finally, when the decision maker is unsure of whether the expert is biased toward an action, a recommendation for
that action is suspicious and hence discounted by the decision maker, so an unbiased expert has a "political
correctness" incentive to recommend the opposite action even if it is not the better choice (cf., Morris, 2001). For
instance, if a salesperson is suspected to benefit more from selling one product than another product but in fact has
equal incentives, then pushing the other product is more attractive since it is more likely to generate a sale. In our
experiment, we find that, as predicted, lack of transparency induces unbiased experts to make the opposite
recommendation from that made by biased experts. Decision makers do not appear to sufficiently discount such
recommendations, suggesting that they may not always anticipate how lack of transparency warps the incentives of
even unbiased experts.³
Most of the experimental literature on cheap talk has focused on testing different implications of the original
Crawford and Sobel (1982) model and typically finds that experts are somewhat lying averse (e.g., Dickhaut, McCabe, &
Mukherji, 1995).⁴ Based on this literature, we expect subjects to be reluctant to lie, and in particular we expect the
strength of this aversion to vary across subjects (Gibson, Tanner, & Wagner, 2013).⁵ Therefore, we depart from a pure
cheap talk approach and assume that the expert has a lying cost that is drawn from a continuous distribution with
support that could be arbitrarily concentrated near zero.⁶ Lying aversion is not necessary for communication in our
game but its inclusion makes the model more testable by eliminating extra equilibria that involve babbling/pooling, that
have messages implying the opposite of their literal meanings, and that have strategic mixing between messages.⁷
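One minimal way to operationalize this lying-cost assumption is below. The exponential distribution is an arbitrary illustrative choice; the model itself only requires a continuous distribution whose support can be concentrated near zero.

```python
import random

def lying_rate(gain, n=50_000, seed=0):
    """Fraction of experts who lie when misreporting yields `gain`:
    each expert draws a private lying cost from a continuous
    distribution (an exponential here, purely for illustration) and
    lies iff the gain exceeds the drawn cost. The comparative static
    that lying rises with the gain holds for any such distribution,
    which is why the predictions are testable without knowing the
    subjects' actual cost distribution.
    """
    rng = random.Random(seed)
    return sum(gain > rng.expovariate(1.0) for _ in range(n)) / n

# With Exp(1) costs, the lying rate is 1 - exp(-gain), increasing in gain.
```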
In our experimental tests, we cannot control for varying subject preferences against lying, so the exact lying rates and
acceptance rates cannot be predicted beforehand. However, we find that the comparative static predictions of the model
are the same for any distribution of lying costs, so we can test these predictions even without knowing the exact