Deciding by default.

Author: Sunstein, Cass R.
Coverage: Part VI (Personalized Default Rules) through Conclusion, with footnotes, pp. 48-57

VI. PERSONALIZED DEFAULT RULES

My focus thus far has been on default rules that are impersonal in the sense that they apply to all members of the relevant population, subject to the ability to opt out. But as I have noted, some default rules are highly personalized. Such approaches draw on available information about what best suits different groups of people, and potentially each individual person, in the relevant population.

  1. The Best of Both Worlds?

    We could imagine a continuum of personalized approaches, from the most fine-grained to the relatively crude. In principle, choice architects could design default rules for every one of us. Perhaps this idea seems far-fetched, but in the fullness of time, private and public institutions are likely to use a large number of personalized default rules. In fact, we are already heading in that direction. Smartphone data, for example, can be (and has been) mined to ascertain personality traits, and those traits can in turn be used to personalize services on smartphones. (182) We can easily imagine the use of website browsing data to personalize a range of services and default options; Google already does something quite like this. In many contexts, it will be possible to move from active choosing to personalized default rules, as choice architects build such defaults for individuals on the basis of what they have actively chosen in the past.

    In their ideal form, personalized default rules might be thought to produce the best of both worlds. Like impersonal default rules, they reduce the burdens of decision and simplify life. But like active choosing, they promote individualization and increase accuracy by overcoming the many problems associated with one-size-fits-all approaches.

    Of course the idea of personalized default rules raises serious concerns. Some of these involve the narrowing of one's horizons; others involve personal privacy. I shall turn to those concerns in due course. But at least in some contexts, the design of such personalized rules would be a great boon. The key advantage of such rules is that they are likely to be more fine-grained, and thus more beneficial, than "mass" default rules. (183) As technology evolves and information accumulates, it should become increasingly possible to produce highly personalized defaults, based on people's own choices and situations. For this reason, there will be promising opportunities to use default rules to promote people's welfare.

    Every day, family members and friends use the equivalent of personalized default rules. They tend to know what people like in various domains. With respect to conversation, food, restaurants, vacations, romance, and more, they use those personalized defaults for people's benefit. They do not ask, in every case, for an active choice, which would make life more complicated and potentially even intolerable. Sometimes spouses order for one another at restaurants or select clothing for each other, using the functional equivalent of default rules and pursuant to an implicit delegation. (184) Indeed, a large part of what it means to be a spouse, a partner, or a close friend is to be able to identify personalized defaults. (By contrast, strangers rely on impersonal ones, which may cause big trouble.)

    Especially as technology develops, an important function of marketing and marketing research should be to gather knowledge of this kind (subject to safeguards for privacy, taken up below). Indeed, such marketing and research might well become--and now appear to be becoming--standard fare. (185) Similar efforts are being made in the area of political campaigning as well, with something close to the functional equivalent of personalized defaults. (186)

  2. Not Quite the Best of Both Worlds? (187)

    1. Problems

      Notwithstanding their many virtues, personalized default rules are not without disadvantages, even if we put privacy to one side. Most obviously, they do not promote learning, and hence they do not promote and may impede the development of (increasingly) informed preferences. Consider the context of health insurance. People might be defaulted into a plan that suits their needs, which seems unobjectionable--but if so, they do not have the opportunity to learn, which might prove important in the long term. Perhaps it is best to require active choosing, so that people know more about health insurance and about their health care needs. Learning can be important, and active choosing promotes learning while defaults do not.

      Or consider the analogy of books and music. We have seen that on the basis of one's past choices, it is possible, indeed easy, to offer advice, or even to suggest defaults, that reflect what one likes. If you have liked books by a certain mystery writer or science fiction writer, there is a good chance that you will like books by other identifiable mystery writers or science fiction writers. If you have liked music by certain singer-songwriters, companies can identify other singer-songwriters whom you will enjoy; this is how Pandora works its magic. (188) But people's preferences change over time, especially if they are able to learn, and if people are defaulted into options that simply reflect their current "likes," such learning will not occur. In multiple domains, serendipity has great value, as people encounter, quite unexpectedly, activities and products that do not in any way reflect their past choices. The problem, in short, is that if defaults are based on such choices, the process of development might be stunted. When your experiences are closely tailored to your past choices, your defaults are personalized, but you will also be far less likely to develop new tastes, preferences, and values.

      In the context of communications generally, some people have expressed concern about the risks associated with an "architecture of control," in which people create a kind of "Daily Me"--an informational universe that is entirely self-selected. (189) Imagine, for example, that people are able to use perfect filters, so that they can see and hear what they want and exclude everything else. In some ways, this would be a great boon, and we do appear to be moving in that direction. Now imagine that producers, having learned about your past choices, could, very much in the style of Pandora, provide you with other things that you want to see and hear--so that the method of selection is based on projections from your past choices. This too might be a great boon. But insofar as it ensures a kind of narrowing, and prevents people from expanding their horizons, it is nothing to celebrate. Personalized echo chambers may produce individual and social harm. (190) An "architecture of serendipity" has large advantages.

      A genuinely extreme case would be a political system with personalized voting defaults, so that people are defaulted into voting for the candidate or party suggested by their previous votes, subject to opt-out. In such a system, people would be presumed to vote consistently with their past votes, to such an extent that they need not show up at the voting booth at all, unless they wanted to indicate a surprising or contrary preference. Such a system would not entirely lack logic. It would certainly reduce the burdens and costs of voting, especially for voters themselves, who could avoid a trip to the polls, assured that the system would register their preferences.

      But there is a (devastating) problem with an approach of this kind, which has to do with what might be called the internal morality of voting. The very act of voting is supposed to represent an active choice, in which voters are selecting among particular candidates. In other contexts, there is not an equivalent internal morality, but active choosing is an individual and social good precisely because it promotes learning over time, and thus the development of informed, broadened, and perhaps new preferences.

      Personalized default rules have other disadvantages. We have seen that people tend to stick with the default, and this is true whether it is impersonal or personalized. Sticking with the default can lead to feelings of regret. There is empirical support for this proposition: In the context of retirement plans, those who passively stayed with the default showed more regret than those who engaged in active choosing. (191) It is possible that such regret will amount to a significant welfare loss.

      In addition, passive choice will, almost by definition, decrease choosers' feelings of identification with the outcome. In part for that reason, any kind of default rule, including a highly personalized one, may not create the kinds of motivation that can come from active choosing. Suppose that choice architects seek to promote healthy behavior. They might use something akin to default rules of certain kinds (involving, for example, portion sizes or the easy availability of certain foods). Such an approach may be effective, but it may lack certain benefits associated with active choosing, such as increased self-monitoring and stronger intrinsic motivation.

      A separate objection applies to default rules of all kinds: People may like choosing, and default rules deprive them of something that they like. Consider the experience of ordering from a menu at a restaurant. Some people affirmatively favor a situation in which they receive a number of options and can make their own selections from the list. When people like to choose, there is an argument for active choosing and against any sort of default rule. It is natural, and not false, to answer that if they want to choose, they can do so even in the presence of a default. But for many people in many contexts, it is better to be presented with a menu of options, and to be asked for their preference, than to be provided with a default, and to be asked whether they want to depart from it.

    2. Abstractions and Concrete Cases

      These points--about narrowing, regret, chooser...
