Hardware-Based ID, Rights Management, and Trusted Systems

Author: Weinberg, Jonathan

Networked digital technology enables the easy and inexpensive movement of speech and information among persons who are strangers to one another. People may not always find unfettered movement desirable, though. A content producer may want to disseminate information to some recipients, such as those who have paid, but not others. A parent may wish that the computers in her home not display certain sexually explicit content offered by a willing speaker. A government might wish that certain speech be available to some recipients but not others, for example, that gambling solicitations be inaccessible in specific geographic jurisdictions.

In order to determine whether particular content should be disseminated to a particular recipient, the decisionmaker must have information about both the content and the recipient. To block content requiring payment to persons who have not paid, one needs to know both whether the requested document requires payment and whether the requester has paid; to block sexually explicit content to minors whose parents wish to shield them from such material, one needs to know both whether the particular document contains proscribed content and whether the requester is such a minor; to block gambling solicitations to persons in certain geographic jurisdictions, one needs to know both whether the particular document is a gambling solicitation and the geographic location of the requester. As Larry Lessig and Paul Resnick point out, this can present a difficulty: No one actor, at the outset, may possess all of that information.(1)
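
The structure of that decision can be made concrete with a short sketch. The fragment below (in Python; the attribute names and the jurisdiction codes are hypothetical, chosen only for illustration) shows how each blocking decision combines facts about the document with facts about the requester, in the manner Lessig and Resnick describe; it is not a description of any deployed system.

    def may_deliver(document, recipient):
        # Block pay-walled content when the requester has not paid.
        if document.get("requires_payment") and not recipient.get("has_paid"):
            return False
        # Block sexually explicit content when the requester is a minor
        # whose parent has chosen to shield her from such material.
        if document.get("sexually_explicit") and recipient.get("shielded_minor"):
            return False
        # Block gambling solicitations in jurisdictions that bar them
        # ("XX" and "YY" are placeholder jurisdiction codes).
        if (document.get("gambling_solicitation")
                and recipient.get("jurisdiction") in {"XX", "YY"}):
            return False
        return True

    # A paid subscriber outside the barred jurisdictions may receive pay-walled content.
    print(may_deliver({"requires_payment": True},
                      {"has_paid": True, "jurisdiction": "ZZ"}))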

Various techniques are available to overcome that difficulty. One approach is filtering. A parent concerned about her child's access to sexually explicit speech knows something about her child's characteristics, but little about the contents of each of the Web sites her child might visit. Filtering systems attempt to solve that problem by generating information about those Web sites, so that the parent, armed with both the characteristics of the speech and those of her child, can make blocking decisions based on her own policy preferences. However, the filtering enterprise turns out to be problematic in several respects. One problem lies in the difficulty of collecting accurate and nuanced information about myriad Internet speech resources; another lies in the difficulty of vindicating users' individual policy preferences via off-the-shelf software. Entry barriers and informational costs mean that only a few firms can manage to tag huge numbers of Web resources; consumers, in turn, do not know and have difficulty evaluating the evaluators' substantive criteria.(2) As a result, although filtering in theory diffuses power among hosts of information recipients, to some extent it concentrates power in the third-party ratings providers.(3)

Filtering, though, is not the only possible response to this sort of difficulty. Another approach, as Lessig and Resnick point out, is to set up Internet architecture to require the would-be recipient of speech to transmit information about her characteristics back to the content provider, so that the provider can then make content dissemination choices according to its policy preferences or binding legal rules.(4) Increasingly, firms interested in commerce in information goods are designing structures to enable that process.

This paper examines the implications of different choices for managing the information flow from consumers to content providers. In particular, it focuses on the implications of an Internet architecture that identifies each consumer by a single unique identifier that can be tied to the consumer's real-world identity and that is available to a wide range of applications and content providers. Such a system can be implemented through hardware-based identification like Intel's Processor Serial Number (PSN). It allows the content provider easily to identify the consumer originating any given packet stream and to correlate incoming payment and other information to the outgoing information and entertainment that the content provider releases to that consumer: All of the data is simply filed under the consumer's unique ID. That architecture stands in contrast to one in which content providers use more sophisticated cryptographic techniques to assign consumers identifiers that cannot be linked to their real-world identities or their activities in other contexts. Those techniques would protect consumers' choices from disclosure and would preclude the assembly of dossiers on particular individuals.
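
The difference between the two architectures can be illustrated with a brief sketch. The fragment below (in Python; the names are hypothetical, and the scheme is only one of many possible) derives a separate, stable pseudonym for each content provider from a secret held on the consumer's machine. Each provider sees the same identifier on every visit, so it can keep accounts, but two providers comparing records cannot link them to one another or to the consumer's real-world identity.

    import hmac, hashlib, secrets

    # A secret that never leaves the consumer's machine.
    consumer_secret = secrets.token_bytes(32)

    def pseudonym_for(provider: str) -> str:
        # Stable per-provider identifier; unlinkable across providers
        # without knowledge of the consumer's secret.
        return hmac.new(consumer_secret, provider.encode(), hashlib.sha256).hexdigest()

    print(pseudonym_for("publisher-a.example"))   # same value on every visit to publisher A
    print(pseudonym_for("publisher-b.example"))   # unrelated-looking value for publisher B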

Technologies involving the assignment of user or platform identifiers, enforced through hardware-based user identification such as the PSN, can give providers of information goods extensive new capabilities. Such technologies provide an easy and straightforward way for publishers to verify the authenticity of messages claiming authorization to receive digital works, giving them greater ability to limit availability of their works to folks who meet certain criteria. The technology dovetails with the use of trusted systems, allowing content providers to prevent recipients from passing usable copies of the work to anyone who has not paid the content provider, and giving content providers flexibility in specifying the nature of the event that will trigger a payment obligation.
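
A rough sketch may make the verification step concrete (again in Python; the license format and the key handling are hypothetical simplifications, not a description of any particular product). The publisher issues a license naming a particular hardware identifier and a particular work, and later releases the work only when the requester's identifier and license match; a copy passed along to a machine with a different identifier carries no valid license.

    import hmac, hashlib

    PUBLISHER_KEY = b"publisher-signing-key"   # known only to the publisher

    def issue_license(device_id: str, work_id: str) -> str:
        msg = f"{device_id}:{work_id}".encode()
        return hmac.new(PUBLISHER_KEY, msg, hashlib.sha256).hexdigest()

    def authorized(device_id: str, work_id: str, license_tag: str) -> bool:
        return hmac.compare_digest(issue_license(device_id, work_id), license_tag)

    tag = issue_license("PSN-0123456789", "work-42")
    print(authorized("PSN-0123456789", "work-42", tag))   # True: the licensed machine
    print(authorized("PSN-9999999999", "work-42", tag))   # False: any other machine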

These technologies, though, have other consequences as well. The most obvious relate to privacy: Trusted systems relying on transparent unique identifiers, and in particular systems built around the PSN, threaten to sharply diminish anonymity and informational privacy on the Internet. They raise the prospect that a much larger proportion of ordinary transactions will require consumers to present unique identification numbers digitally linked to a wide range of personally identifiable information. They are well-suited to across-the-board use by a large number of unrelated information collectors, increasing the ease with which a wide range of information about a person can be aggregated into a single overall dossier.

Moreover, the combination of trusted-systems technology that enables publishers to ensure that speech released to one consumer does not make its way via sharing or secondary markets to another, and the privacy effects of allowing publishers to collect extensive individualized information on consumers, will likely affect the economics and politics of speech markets. It will sharply enhance producers' ability to discriminate among individual consumers, on price and other grounds, in connection with the sale and marketing of information goods. Some commentators suggest that this concentration of control is a good thing because the price discrimination it enables will broaden distribution of information goods.(5) Yet the benefits of such a system are clouded; any increase in distribution due to price discrimination comes at the cost of shutting down the distribution that comes, in today's less-controlled system, through sharing or secondary markets. It will likely be accompanied by increased media concentration and a self-reinforcing cycle of commercial pressure on individual privacy.

Publishers can get the benefits of trusted systems without these socially undesirable consequences by relying on identification techniques that assure consumers a greater degree of privacy. Building trusted systems around hardware-based consumer identifiers therefore not only threatens a dystopian future of universal personal monitoring and identification, but is also unnecessary to meet publishers' legitimate needs.

In Part I of this paper, I explore the market incentives for the widespread deployment of systems under which information flows from consumers to content providers. In Part II, I discuss the blend of anonymity and identifiability presented by current Internet architecture, and in Part III, I focus on a particular technology--the Processor Serial Number built into the Intel chips powering most personal computers today. In Part IV, I discuss the implications of such technology for privacy and the economics and politics of communications markets. Unique identifiers, and their associated technology, promise to give content providers vastly expanded powers to discriminate among consumers by setting prices on an individual basis and by picking and choosing who will be allowed to view or read particular works. Although some argue that these would be positive developments, I submit that they are, on balance, unfortunate. In Part V, I note that the negative consequences of this technology are avoidable: Content providers could rely on more sophisticated cryptographic techniques to manage access to their information goods. Such systems would allow content owners to exploit their intellectual property, but would avoid the consequences described in this paper.

I. RIGHTS MANAGEMENT AND TRUSTED SYSTEMS

The most important concern driving the information flow from consumers to content providers relates to rights management. The term "rights management" is commonly associated with the protection of intellectual property rights, but it need not be so limited. One can think of rights management as covering any technological means of controlling public access to, and manipulation of, digital resources. That sort of control is basic to any system of networked computing. At the heart of Unix, for example, is the concept of permissions, which define which users on a network can take what actions (read, write, execute) on which files and directories.(6) Networking would not be very practical without a way of defining and limiting the set of people who can have access to particular documents and other network resources. Rights management techniques, in that sense, are simply a form of network security.
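
A brief illustration of that model (in Python, run on a Unix-like system; the file name is arbitrary): each file carries read, write, and execute bits for its owner, its group, and everyone else, and the operating system consults those bits before permitting an operation.

    import os, stat

    st = os.stat("/etc/hosts")                    # any existing file will do
    mode = st.st_mode
    print(stat.filemode(mode))                    # e.g. "-rw-r--r--"
    print("owner may write: ", bool(mode & stat.S_IWUSR))
    print("others may read: ", bool(mode & stat.S_IROTH))
    print("others may write:", bool(mode & stat.S_IWOTH))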

Those techniques demand a reliable way to match usernames with real-world individuals. After all, it is the individual, not the username, whose access to files is at...
