The Paternalistic Bias of Expert Advice

Author: John P. Lightle
Published: 1 December 2014
DOI: 10.1111/jems.12070
JOHN P. LIGHTLE
Department of Economics
Florida State University
Tallahassee, FL 32306
jlightle@fsu.edu
Using a theoretical model of noisy expert advice, I show that language inflation can be a rational
response to the vagueness of language. Experts will tend to overstate their positions to a like-
minded decision maker (DM), and this constitutes a Pareto improvement over sending a sincere
message. When the message space is bounded, overstatement may interfere with the DM’s ability
to aggregate the experts’ information, because communication is less precise when the same
message is sent for many states of the world. Despite this, I show that experts are willing to send either the most extreme message to the DM or a partially overstated message, because by doing so the expert can decrease the likelihood that the DM makes a suboptimal decision due to his subjective
interpretation of the advice. Because the expert inflates his message toward the policy he believes
the DM would be better off choosing, rather than sincerely revealing his information, I refer to
this behavior as a paternalistic bias.
1. Introduction
Employing the advice of experts is an essential part of being a successful decision maker
(DM) in the public arena. When confronting a particularly complicated decision, a decision maker often faces constraints that make acquiring and/or processing all the relevant information inefficient or impossible. For example, consider a CEO who must decide
whether or not to buy out a rival company. Analysts will attempt to explain the expected
costs and benefits of acquisition to the CEO, but the ultimate decision is his alone.
Conventional wisdom suggests that, for the good of the company, analysts should be
forthright with their knowledge, giving as accurate a description of the state of the
world as possible. However, this paper challenges that intuition once we include an often dismissed, yet quite realistic, caveat: communication between analyst and CEO is
inherently imprecise. In other words, the analyst’s intended description of the state of
the world is not equal to the CEO’s interpretation of this message. In this case, using
the model presented in this paper, I identify a novel “paternalistic bias” on the part of
an advisor—that is, an endogenous bias that occurs because an advisor believes that by
sending an insincere message about the state of the world, he can help the DM make a
better choice than if the advisor had revealed his information sincerely.
Let us now consider more carefully the decision made by the CEO of Company
X of whether or not to buy out Company Y. Although there may be countless factors
which determine the profitability of such an acquisition, for simplicity we will assume
there are only two relevant pieces of information: the current value of Company Y to
The author thanks seminar participants at the Ohio State University Theory Workshop and the FSU Workshop on Experimental Game Theory, John Kagel, David Cooper, James Peck, Dan Levin, P. J. Healy, Navin Kartik, two anonymous referees, and particularly Massimo Morelli, for their expert advice. All errors are my own.
© 2014 Wiley Periodicals, Inc.
Journal of Economics & Management Strategy, Volume 23, Number 4, Winter 2014, 876–898
Company X, and the net increase in future profitability of Company X after purchasing
Company Y. The CEO has one team of industrial experts who will evaluate the quality
and usefulness of Company Y’s physical assets, and another team of economic forecasters
who will evaluate the expected market conditions after the acquisition of Company Y.
We will say that each team of experts conducts a sufficient amount of research to obtain
the true state of the world with arbitrarily high precision, and makes a report of this
information to the CEO. However, because the information obtained is highly technical,
the experts cannot fully communicate their findings to the CEO. In other words, due to
the CEO’s subjective interpretation of the relevant information, the message which the
teams of experts intended for the CEO to hear is not exactly the same message that the
CEO actually hears.
The profitability of the buyout depends on the information obtained by both teams
of experts, and in this scenario each team consists of laymen with respect to the other field of expertise. Neither group of experts is capable of making an informed final decision on its own, and the CEO becomes a necessary part of the decision process, if only in his role as the final information aggregator. The experts and the CEO have perfectly aligned incentives: both want to proceed with the buyout if and only if it is profitable to do so. Despite this
fact, I show in this paper that the two teams of experts would each rationally choose to
bias their report of the state of the world in favor of whichever outcome they believe is
most likely to be optimal given their information.
To make the example more concrete, let θ1 represent the true state of the world related to Company Y's current assets, and let θ2 be the true state of the world related to the expected net increase in discounted future profits after acquiring Company Y. Without loss of generality, we normalize 0 to be the smallest possible value of θ1 or θ2, and normalize 1 to be the largest possible value. For simplicity, say that θ1 and θ2 are independent and drawn from distributions which are common knowledge. The payoff to Company X of the buyout is an increasing function of θ1 and θ2, say θ1 + θ2, and the buyout is optimal only if θ1 + θ2 > 1. The two teams of experts will then make a report (or a message) to send to the CEO, which will be represented as m1 and m2, with each message constrained to the set [0, 1].1 The CEO will not know the true messages sent by the experts, but will instead receive an unbiased noisy signal of each message sent. Given these two signals, the CEO must decide whether or not to take over Company Y.
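In symbols, and reading "unbiased noisy signal" as additive zero-mean noise (the additive form is only an assumption made here for concreteness; the model section specifies the noise structure precisely), the example so far can be summarized as
\[
\theta_1, \theta_2 \in [0,1] \ \text{independent}, \qquad m_1, m_2 \in [0,1], \qquad s_i = m_i + \varepsilon_i \ \text{with} \ \mathbb{E}[\varepsilon_i] = 0,
\]
\[
\text{where the takeover is profitable exactly when } \theta_1 + \theta_2 > 1 .
\]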
For the purposes of this example, we will say that whether or not the CEO makes the payoff-maximizing decision is an important distinction for the CEO and the two teams of experts. Rather than being a continuous function of θ1 + θ2, we will assume that the utility of both the CEO and the experts depends entirely on whether the right decision was made. One reason for such a utility function could be that the CEO and his teams of experts will be fired if and only if they make the wrong decision with respect to the buyout. We can then assume they receive a utility of 1 if the takeover occurred when θ1 + θ2 > 1 or if the takeover did not occur when θ1 + θ2 < 1, and a utility of 0 otherwise. Therefore, their expected utility is the probability that the right decision was made.
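To make the mechanism concrete, the following is a minimal Monte Carlo sketch of this example. Every distributional choice in it (uniform states, Gaussian interpretation noise, a threshold decision rule for the CEO, and the particular inflation rule) is an illustrative assumption and not the paper's model; the sketch only shows how the probability of a correct decision can be computed when reports are inflated and then noisily interpreted.

import numpy as np

rng = np.random.default_rng(0)

def correct_decision_rate(inflate, n=200_000, noise_sd=0.15):
    # States of the world: assumed i.i.d. Uniform[0, 1] for illustration.
    theta = rng.uniform(0.0, 1.0, size=(n, 2))

    # Illustrative inflation rule: with a uniform prior, expert i believes
    # the takeover is optimal iff theta_i + E[theta_j] > 1, i.e. theta_i > 0.5,
    # so he shifts his report by `inflate` toward that outcome.  The bounded
    # message space forces clipping to [0, 1].
    direction = np.where(theta > 0.5, 1.0, -1.0)
    messages = np.clip(theta + inflate * direction, 0.0, 1.0)

    # The CEO's subjective interpretation: an unbiased noisy signal of each
    # message (additive zero-mean Gaussian noise is an assumption).
    signals = messages + rng.normal(0.0, noise_sd, size=(n, 2))

    # Assumed decision rule: take over iff the perceived sum exceeds 1.
    ceo_takes_over = signals.sum(axis=1) > 1.0
    takeover_optimal = theta.sum(axis=1) > 1.0

    # With 0/1 utility, expected utility equals the probability of a correct decision.
    return np.mean(ceo_takes_over == takeover_optimal)

for inflate in (0.0, 0.1, 0.2):
    print(f"inflation {inflate:.1f}: P(correct decision) ~ {correct_decision_rate(inflate):.3f}")

Varying `inflate` and `noise_sd` in this toy setup is one way to see why, once interpretation is noisy, a sincere report need not maximize the probability that the CEO makes the right call.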
1. This paper focuses on the case where an advisor has a message space as rich as the space from which her information is drawn. There are interesting examples of insincere behavior when messages are restricted, for example, voting. Austen-Smith and Banks (1996) demonstrate that even when all voters have common preferences over outcomes, sincere voting is not rational, due to the inferences made about others' information. (See also Feddersen and Pesendorfer, 1996; Feddersen and Pesendorfer, 1998.) The addition of a round of deliberation before a vote takes place can eliminate nonsincere voting behavior (Coughlan, 2000; Gerardi and Yariv, 2007), which reduces the game to one in which the message space is no longer insufficient.
