Protocol layering and Internet policy.

Author: Yoo, Christopher S.

An architectural principle known as protocol layering is widely recognized as one of the foundations of the Internet's success. In addition, some scholars and industry participants have urged using the layers model as a central organizing principle for regulatory policy. Despite its importance as a concept, a comprehensive analysis of protocol layering and its implications for Internet policy has yet to appear in the literature. This Article attempts to correct this omission. It begins with a detailed description of the way the five-layer model developed, introducing protocol layering's central features, such as the division of functions across layers, information hiding, peer communication, and encapsulation. It then discusses the model's implications for whether particular functions are performed at the edge or in the core of the network, contrasts the model with the way that layering has been depicted in the legal commentary, and analyzes attempts to use layering as a basis for competition policy. Next the Article identifies certain emerging features of the Internet that are placing pressure on the layered model, including WiFi routers, network-based security, modern routing protocols, and wireless broadband. These developments illustrate how every architecture inevitably limits functionality, as well as the architecture's ability to evolve over time in response to changes in the technological and economic environment. Together these considerations support adopting a more dynamic perspective on layering. They also caution against using layers as the basis for a regulatory mandate, for fear of cementing the existing technology into place in a way that prevents the network from innovating and evolving in response to shifts in the underlying technology and consumer demand.

INTRODUCTION
I. THE CONCEPTUAL UNDERPINNINGS OF PROTOCOL LAYERING
   A. Modularity Theory
   B. Peer Communication and Encapsulation
   C. The Tradeoffs Inherent in Protocol Layering
II. THE INTERNET AS AN EXAMPLE OF A LAYERED ARCHITECTURE
   A. Connecting Heterogeneous Hosts
   B. Interconnecting Heterogeneous Transmission Technologies
   C. The TCP/IP Reference Model
      1. The Application Layer
      2. The Transport Layer
      3. The Network Layer
      4. The Data-Link Layer
      5. The Physical Layer
   D. Layering's Implications for Where Functions Are Performed
III. CHARACTERIZATIONS OF THE LAYERED MODEL APPEARING IN THE LEGAL LITERATURE
   A. Combining the Transport and Network Layers into a Single Layer
   B. Dumb Pipes vs. the Hourglass Model
   C. Layering and Competition Policy
IV. THE IMPACT OF TECHNOLOGICAL CHANGE ON THE LAYERED MODEL
   A. Reliability
   B. Congestion
   C. Distributed Optimization
      1. Aggressive TCP Implementations
         a. Refusal to Back Off in the Face of Congestion
         b. Multiple TCP Sessions
         c. Autotuning
      2. Simultaneous Optimization
      3. Other Considerations
   D. Security
CONCLUSION

INTRODUCTION

One of the most striking developments of the past two decades is the emergence of the Internet both as the dominant medium of communication and as a dynamic engine of innovation. Policymakers and commentators typically attribute the Internet's success to key architectural principles incorporated into its design. (1) Among the most frequently cited of these principles is a concept known as protocol layering, (2) which was first developed for the International Organization for Standardization's (ISO) Open Systems Interconnection (OSI) Reference Model during the late 1970s. (3) Layering has become so widely accepted that it now represents the central framework around which most textbooks on network engineering are organized. (4) Indeed, belief in the layered model has become so strong that it is widely regarded as the "proper" way to modularize a network. (5)

There is widespread agreement that the incorporation of protocol layering into the Internet's architecture has yielded substantial benefits. Layering allows those working on one layer to ignore most of the inner workings of the other layers, which reduces coordination costs and accelerates product development times by permitting parallel testing and innovation. Layered architectures also provide a stable configuration of network resources and interfaces around which actors can focus their efforts. The current architecture has also proven incredibly resilient. Despite originally being designed for a much smaller scale and a more limited technological and economic context, the Internet now integrates a larger number and greater variety of uses and technologies than its designers ever imagined. (6)
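The division of labor described above can be illustrated with a minimal sketch (the header names and functions here are hypothetical, not drawn from the Article): each layer wraps the data handed down from the layer above in its own header and treats everything inside as opaque payload, which is why work on one layer can proceed while ignoring the inner workings of the others.

```python
# Illustrative sketch of encapsulation across layers.
# Header labels (ETH, IP, TCP) stand in for real protocol headers.

def encapsulate(payload: bytes) -> bytes:
    """Wrap an application payload in nested lower-layer headers."""
    segment = b"TCP|" + payload    # transport layer adds its header
    packet = b"IP|" + segment      # network layer treats the segment as opaque data
    frame = b"ETH|" + packet       # data-link layer does the same with the packet
    return frame

def decapsulate(frame: bytes) -> bytes:
    """Each layer strips only its own header, never inspecting what is inside."""
    for header in (b"ETH|", b"IP|", b"TCP|"):
        assert frame.startswith(header), "unexpected header at this layer"
        frame = frame[len(header):]
    return frame

message = b"GET /index.html"
assert decapsulate(encapsulate(message)) == message
```

Because each layer touches only its own header, the transport layer in this sketch could be replaced without any change to the code for the layers above or below it.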

Protocol layering has also found its way into discussions of Internet policy. (7) Early commentators offered it as a technologically agnostic alternative to the regime established by the Communications Act of 1934, (8) which subjected communications to distinct regulatory regimes based on whether they were transmitted over telephone wires, coaxial cables, or spectrum. (9) Others maintained that the layered model remained properly agnostic about the content of the rules, but insisted that problematic practices arising in one layer be addressed only through regulations directly targeted at that layer--rather than through regulations designed to curb that behavior by targeting another layer or the system as a whole. (10)

Other analyses have drawn stronger policy inferences from the layered model. For example, some commentators argued that the layered model could support competition policy by providing "natural boundaries" for defining markets, (11) noting that each layer is subject to different sources of market power. (12) Others went further, suggesting that the economics of the lower layers made them particularly susceptible to market power, although they acknowledged the possibility of deregulating the lower layers once they became more competitive. (13) Others argued that layering promotes "fair and open competition" among providers offering services at each layer. (14) Still others equated layering with innovation (15) and advocated regulations mandating that the interfaces between layers remain open. (16)

More recent analyses have relied on the existing layered architecture as the foundation for proposals to implement the Open Internet Order. For example, some commentators argue for using consistency with the existing layered architecture as the first screen for determining whether a traffic management practice is reasonable, (17) a position endorsed by certain policy advocates. (18) Others advocate a nondiscrimination rule that maps onto the layered architecture, arguing that lower layers should be forbidden from discriminating on the basis of any information contained in the upper layers. (19)

The Internet's success should not obscure, however, that every architecture necessarily has limitations as well as strengths. David Clark, who was the chief protocol architect for the Internet during the 1980s, offered some observations about the implicit tradeoffs in layering that, despite being made with respect to an earlier architecture and concerns that never fully materialized, still reflect some basic insights. Clark recognized that the centrality of layering in the engineering literature "tends to suggest that layering is a fundamentally wonderful idea which should be a part of every consideration of protocols." (20) Such a perspective overlooks the fact that layering provides "both a benefit and a penalty." (21) While "[a] visible layer boundary, with a well specified interface, provides a form of isolation between two layers" that permits modifications to one layer without interfering with other layers, "a firm layer boundary almost inevitably leads to inefficient operation." (22) Hiding much of the technical complexity behind layer boundaries prevents other layers from taking advantage of the full functionality of the underlying technology, which in turn increases the resources needed to perform the desired task. (23) Thus, the "tempt[ation] to think that a layer boundary ... is in fact the proper boundary to use in modularizing the implementation" is "a potential snare." (24) This tradeoff between generality and efficiency is "rarely acknowledged in the computing literature." (25) Still, a small but important body of work in the engineering literature explores how protocol layering can harm network performance. (26)

In addition to its impact on efficiency, protocol layering can also have an adverse impact on innovation that is often overlooked. Although protocol layering promotes innovations that are consistent with the architecture, at the same time it impedes innovations that are inconsistent with the design hierarchy. (27) Moreover, any changes that require a reconfiguration of the design hierarchy require coordinating with actors operating at the layers both above and below the locus of the innovation, which makes such innovations all the more difficult to implement.

The existing policy debate based on protocol layering largely ignores the extent to which it is something of a mixed blessing from the standpoint of innovation. On the one hand, to yield any benefits, an architecture must be relatively stable and change only rarely. (28) Indeed, the natural temptation for computer scientists to optimize for particular applications (29) or to redesign the entire system from scratch means that any calls for a fundamental redesign of the entire architecture should be greeted with a healthy amount of skepticism. (30)

On the other hand, to say architectural changes should be infrequent is not to say that they should never occur. Even the strongest proponents of the layered model recognize that the architecture can and should evolve over time. (31) Major changes transforming the Internet environment--including the growing heterogeneity of end users, the advent of Internet-based video and cloud computing, and the emergence of wireless broadband and the smartphone operating system as the relevant platforms--raise the possibility that circumstances may have...
