Second-best considerations in correcting cognitive biases.

Author: Besharov, Gregory
  1. Introduction

    Suppose you know someone who is overly confident about his prowess in some realm of endeavor. Time after time, your friend has overestimated his ability on exams or his likelihood of success in business ventures. Surely, this kind of error reduces well-being, and you, as a good friend, should try to teach your friend to be less optimistic.

    The theory of second best suggests that such an intervention would not necessarily help if your friend has other cognitive biases. In a system of interacting biases, the correction of any single one has ambiguous welfare implications. In their classic article on the theory of second best, Lipsey and Lancaster (1956, p. 12) write, "it is not true that a situation in which more, but not all, of the optimum conditions are fulfilled is necessarily, or is even likely to be, superior to a situation in which fewer are fulfilled." If your friend's decision-making is biased in other ways, then an intervention to reduce overconfidence may well reduce his welfare. Furthermore, the excessive confidence of your friend does not imply that there are ways to make him better off or even that his decision-making is suboptimal.

    Studies in psychology and behavioral economics have identified several situations in which correction of a bias may lead to worse decisions. Camerer, Loewenstein, and Weber (1989) discuss the "curse of knowledge." In predicting the behavior of others, agents are unable to ignore the information they have. As a result, agents may become better off when they have less information. In a later article, Loewenstein, O'Donoghue, and Rabin (2003) find that projection bias may be counteracted by another bias in making choices about future consumption. In this setting, working to correct projection bias could lead people to make worse choices. Rabin (1999) offers an example in the context of saving, suggesting that overly high projections of consumption needs in retirement offset the undersaving problem. If so, the consumption-saving decision, though affected by cognitive biases, may still be correct. Kahneman and Lovallo (1993) consider biases and their effects within the firm. In the article most closely related to this one, Benabou and Tirole (2002) consider a formal model in which, among other effects, overconfidence ameliorates a lack of will power.

    Though the previous literature has discussed interactions among cognitive biases, it has given little attention to their correction. Absent knowledge of the correct decision, or without consideration of all relevant biases, interventions may reduce welfare. If there is selection on the quality of decisions made, and there are reasons to think that such selection exists, then one might even expect the decisions that emerge from the admittedly biased system to be in some sense "good." While these concerns are standard in second-best situations, their implications in a cognitive bias setting have not been addressed.

    The results are formally demonstrated in the context of an optimization problem with no uncertainty. A person chooses the level of effort to expend in a project today that yields benefits in the near future. In the model presented, the individual differs from the standard neoclassical agent in that he has cognitive biases. In addition to overestimating the effects of his action, the individual is time-inconsistent and directly feels regret (or, equivalently, self-satisfaction) based on the level of effort provided. Changes in the strengths of these effects lead to changes in the agent's effort provision. Issues in the theory of second best arise when attempts are made to improve decision-making by ameliorating some, but not all, of the biases.
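The flavor of the second-best result can be illustrated with a minimal numerical sketch. The functional forms below (linear benefit, quadratic cost) and the parameter names (`b`, `beta`, `kappa`) are illustrative assumptions, not taken from the article: an agent with present bias `beta < 1` and offsetting overconfidence `kappa > 1` chooses the first-best effort, and correcting the overconfidence alone lowers his true welfare.

```python
# Hedged sketch of an effort-choice problem in the spirit of the model
# described above. All functional forms and parameters are assumptions.

def chosen_effort(b, beta, kappa):
    """Effort maximizing perceived utility beta*kappa*b*e - e**2/2."""
    return beta * kappa * b

def true_welfare(e, b):
    """Unbiased welfare b*e - e**2/2, maximized at e = b."""
    return b * e - e ** 2 / 2

b = 1.0           # true marginal benefit of effort
beta = 0.7        # present bias (time-inconsistency)
kappa = 1 / beta  # overconfidence that exactly offsets the present bias

# With both biases, effort is beta*kappa*b = b: the first best is reached.
w_both = true_welfare(chosen_effort(b, beta, kappa), b)

# Correcting only the overconfidence (kappa -> 1) leaves effort at beta*b < b.
w_corrected = true_welfare(chosen_effort(b, beta, 1.0), b)

print(w_both)       # 0.5
print(w_corrected)  # 0.455
```

Under these assumed forms, removing one bias while the other remains moves effort away from the optimum and reduces welfare, exactly the Lipsey-Lancaster point in a cognitive setting.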

    Any discussion of the second best requires a notion of the first best. For reasons given below, the first-best decision is taken to be the effort provided by an agent who is free from cognitive biases. (The parameterization of preferences admits such an agent as a special case.) Among the many biases discussed in the literature, the ones presented here were chosen not just because they are well established but also because of the insights they yield regarding the difficulty of welfare analysis in the presence of cognitive biases. Cognitive biases can range from problems of statistical inference and systematically incorrect expectations to preference-related issues such as reference points, framing, and regret. Thus, the argument that the agent's true preferences should be considered those of the unbiased agent is strongest for the case of overconfidence: The individual is just wrong about the result of a given level of effort on the outcome. For hyperbolic discounting, one can appeal to the fact that an agent who could commit to an action some time before having to bear the costs of it would choose to act in a time-consistent manner, even though it is arbitrary to claim that the individual's preferences at one time better represent his interests than those at another time. Even more questionable is ignoring the psychic cost of regret. Nonetheless, if one follows the standard formulation of utility as a function of consumption, then the full consequences of bad decisions are borne through their effects on consumption. Any other effect on utility, as through regret, is superfluous for purposes of maximizing consumption in the absence of other distortions. Considering consumption the ultimate aim of effort allows disregarding psychic payoffs and adopting the unbiased agent's utility as the true utility. Because this article aims to demonstrate interaction effects among biases, the choice of specific ones modeled is not crucial. 
Further, this article investigates the effects of correcting biases. If the cognitive biases represent the true preferences of the agent, there is no reason to change them.

    Studying the interaction of cognitive biases sheds light on one of the vexing questions in the field of behavioral economics: Why don't individuals correct their biases? This article gives three related but separate answers to that question. First, it may be that individuals have limited knowledge of the system of interacting biases. In such a situation, the welfare consequences of correcting some subset of them are ambiguous. Second, when correction of biases is costless, if the dimensionality of the biases is greater than that of the decision (as in the model presented in this article), there may be a set of biases that result in the efficient level of action. An agent whose true welfare is that of an unbiased agent will be indifferent among any elements of the bias space that maximize utility. To an outside observer who can measure biases but does not know the efficient decision, the optimality of the agent's choice will be unrecognized. Third, even an individual with full information about the nature of the biases may rationally choose to correct them only partially when correction is costly. A researcher could observe an...
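The second answer, that a higher-dimensional bias space maps onto a lower-dimensional decision, can also be sketched numerically. The quadratic model and the multiplicative biases `beta` and `kappa` below are illustrative assumptions rather than the article's specification: whenever the product of the two biases equals one, the one-dimensional effort choice coincides with the unbiased optimum, so the agent is indifferent among an entire curve of bias pairs.

```python
# Illustrative sketch (assumed quadratic-cost model, not from the article):
# a two-dimensional bias vector (beta, kappa) determines a one-dimensional
# effort choice e = beta*kappa*b. Every pair on the curve beta*kappa = 1
# yields the unbiased optimum e = b.

def chosen_effort(b, beta, kappa):
    """Effort maximizing perceived utility beta*kappa*b*e - e**2/2."""
    return beta * kappa * b

b = 1.0
offsetting_pairs = [(0.5, 2.0), (0.8, 1.25), (1.0, 1.0)]
efforts = [chosen_effort(b, beta, kappa) for beta, kappa in offsetting_pairs]

print(efforts)  # each pair produces the first-best effort e = b = 1.0
```

An observer who can measure `beta` and `kappa` separately, but does not know the efficient effort level, would see nonzero biases and miss that the resulting choice is already optimal.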
