The end-to-end argument and application design: the role of trust.

Author: Clark, David D.
Position: Rough Consensus and Running Code: Integrating Engineering Principles into the Internet Policy Debates
  I. INTRODUCTION
    A. What Is an End Point?
  II. RELIABILITY AND FUNCTION PLACEMENT
    A. Application-Specific Semantics
  III. THE CENTRALITY OF TRUST
    A. Multiple Stakeholders
    B. "Good Guys" and "Bad Guys"
  IV. THE NEW END-TO-END
    A. Trust Options for the Individual End Node
    B. Delegation of Function
    C. Mandatory Delegation
    D. When End Users Do Not Trust Each Other
  V. THE ULTIMATE INSULT
    A. Can We Take Back the End Node?
  VI. DESIGN FOR DELEGATION
  VII. REINTERPRETING THE END-TO-END ARGUMENT
  VIII. CONCLUSIONS

I. INTRODUCTION

    Applications are the raison d'etre of the Internet. Without e-mail, the Web, social media, VoIP, and so on, the Internet would be (literally) useless. This fact suggests that the structure of applications, as well as the structure of the Internet itself, should be a subject of study, both for technologists and for those concerned with the embedding of the Internet in its larger context. However, the Internet, as the platform, may have received more attention and analysis than the applications that run on it.

    The original end-to-end argument (1) was put forward in the early 1980s as a central design principle of the Internet, and it has remained relevant and powerful as a design principle, even as the Internet has evolved. (2) However, as we will argue, it does not directly speak to the design of applications. The original end-to-end paper poses its argument in the context of a system with two parts, the communications subsystem and "the rest." (3) That paper says: "In a system that includes communications, one usually draws a modular boundary around the communication subsystem and defines a firm interface between it and the rest of the system." (4) Speaking generally, what the end-to-end argument asserts is that application-specific functions should be moved up out of the communications subsystem and into "the rest" of the system. But the argument, as stated, does not offer advice about how "the rest" should be structured. That paper equates the "rest of the system" with the application, and the application with the end points. It says: "The function in question can completely and correctly be implemented only with the knowledge and help of the application standing at the end points of the communication system. Therefore, providing that questioned function as a feature of the communication system itself is not possible." (5)

    Applications and services on the Internet today do not just reside at the "end points"; they have become more complex, with intermediate servers and services provided by third parties interposed between the communicating end points. Some applications, such as e-mail, have exploited intermediate servers from their first design. E-mail is not delivered in one transfer from original sender to ultimate receiver. It is sent first to a server associated with the sender, then to a server associated with the receiver, and then finally to the receiver. By one interpretation, all of these intermediate agents seem totally at odds with the idea that function should be moved out of the network and off to the end points. In fact, the end-to-end argument, as described in the original paper, admits of interpretations that are diametrically opposed. When we consider applications that are constructed using intermediate servers, we can view these servers in two ways. An Internet purist might say that the "communications subsystem" of the Internet is the set of connected routers; servers are not routers, but are connected to routers; as such, servers are outside the "communications subsystem." This reasoning is compatible with the end-to-end argument of placing servers anywhere in "the rest" of the system. On the other hand, these servers do not seem like "ends," and thus they seem to violate the idea of moving functions to the ends. These issues are prominent today, thanks to the emergence of cloud computing--which involves specific sorts of servers--and the tendency of some popular discourse to treat "the cloud" as a new incarnation of the Internet itself. (6)

    The original end-to-end paper, because it uses a simple two-part model of the communications subsystem and "the rest," does not directly speak to the situation where "the rest" has structure. The purpose of this Article is to offer an interpretation of the end-to-end argument, drawing on the original motivation and reasoning, that is applicable to today's application design and today's more complex world of services and service providers.

    A. What Is an End Point?

    Part of the definitional problem, of course, is to define the end point. There is an intuitive model that is often adequate: if computer A is sending a file to computer B (to use the example of "careful file transfer" from the original paper (7)), then A and B are end points. However, they are end points in two ways that are subtly different. In the original example, the end points are the literal source and destination of the data being sent across the communications subsystem. They are also the end points in that they are the prime movers in the activity--they are directly associated with the principals that actually wanted to accomplish the action. Intermediate nodes, whether at the packet level or application service level, seem to play a supporting role, but they are not the instigators of the action, or the nodes that wanted to see it accomplished.

    The original paper provides a hint as to the importance of this distinction. Using a telephone call as an example, it points out that the ultimate end points are not the computers, but the humans they serve. (8) As an illustration of human-level end-to-end error recovery, one person might say to another: "[E]xcuse me, someone dropped a glass. Would you please say that again?" (9) The humans are the prime movers in the activity, the ultimate end points. The computers are just their agents in carrying out this objective.

    In the case of a phone call, the humans and the computers are colocated. It makes no sense to talk about making a phone call unless the person is next to the phone. So one can gloss over the question of where the human principal is. But in the case of careful file transfer, the location of the person or persons instigating the action and the location of the computer end points may have nothing to do with each other. As an example, there might be one person, in (say) St. Louis, trying to do a careful file transfer from a computer in San Francisco to a computer in Boston. Now, what and where are the end points?

    The person in St. Louis might undertake a careful file transfer in three stages. First, she might instruct the computer in San Francisco to compute a strong checksum of the file (i.e., a compact digest of the bits it contains) and send it to her in St. Louis. Second, she might instruct the two computers to carry out the transfer. Third, she might instruct the computer in Boston to compute the same strong checksum and send it to St. Louis, where she can compare the two values to confirm that they are the same. In this case, the computers in San Francisco and Boston are the end points of the transfer, but they seem just to be agents (intermediaries) with respect to the person in St. Louis. With respect to the instigation of the transfer, there seems to be one principal (one end point) located in St. Louis.
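    The three stages can be sketched in a few lines of Python. This is a minimal illustration, not anything from the original paper: SHA-256 stands in for the "strong checksum," and an in-memory copy stands in for the network transfer between the two computers.

```python
import hashlib

def strong_checksum(data: bytes) -> str:
    # SHA-256 as the "strong checksum"; the choice of hash is illustrative.
    return hashlib.sha256(data).hexdigest()

# The file as stored on the computer in San Francisco.
original = b"payload stored in San Francisco"

# Stage 1: the source computes its checksum and reports it to St. Louis.
source_digest = strong_checksum(original)

# Stage 2: the transfer itself (a copy standing in for the network).
received = original

# Stage 3: the destination in Boston computes its checksum;
# the instigator in St. Louis compares the two values.
dest_digest = strong_checksum(received)
transfer_ok = (source_digest == dest_digest)
```

    Note that the end-to-end check is performed by the instigator, not by either transfer end point: neither computer needs to be trusted to compare the digests correctly.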

    It might seem that this example serves to further confuse the story, rather than clarify it. But if we explore one step deeper, we can begin to find some clarity. The example above, building on the example in the original paper, referred to the overall activity as "careful file transfer." It is important to ask, why is that sequence of steps being careful? It is careful only in the context of an assumed failure mode--that is, loss or corruption of information during transfer. But why does the end user assume that the computation of the checksum will not fail? Why does the end user assume that the checksum returned by the computer is actually the checksum of the file, as opposed to some other value? Why does the end user assume that the file transferred today is the same as the file stored earlier? Why does the end user assume that the file will still be there at all? A prudent end user would be careful about these concerns as well. Perhaps the file was copied to Boston because the computer in San Francisco is crash prone or vulnerable to malicious attack. Perhaps this move was part of a larger pattern of "being careful." Perhaps, in a different part of the story, the end user in St. Louis has the computer in San Francisco compute the strong checksum on multiple days and compares them to see if they have changed. All of these actions would represent "being careful" in the context of some set of assumed failures.

    But if there is no part of the system that is reliable, being careful is either extremely complex and costly, or essentially impossible. For example, the end user cannot protect against all forms of failure or malice using the comparison of strong checksums, because it may not be possible to detect if one of the computers deliberately corrupts the file but returns the checksum of the correct version. Ultimately, being careful has to involve building up a process out of component actions, some of which have to be trustworthy and trusted.
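    The limitation described above can be made concrete with a small hypothetical sketch (again using SHA-256 as the strong checksum): an honest host that corrupts a file is caught by the comparison, but a host that deliberately corrupts the file while reporting the checksum of the correct version is not.

```python
import hashlib

def strong_checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

correct = b"the file as originally stored"
corrupted = b"the file after tampering"

source_digest = strong_checksum(correct)

# An honest (but faulty) host reports the checksum of what it actually holds.
honest_report = strong_checksum(corrupted)

# A malicious host holds the corrupted file but reports the checksum
# of the correct version it has discarded.
lying_report = strong_checksum(correct)

detects_honest_fault = (source_digest != honest_report)  # mismatch: detected
detects_lying_host = (source_digest != lying_report)     # match: undetected
```

    The comparison of checksums is only as trustworthy as the component that computes them, which is the point of the paragraph above: being careful must ultimately rest on some components that are trustworthy and trusted.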


    The example of careful file transfer in the original paper can help us to explore the relevance of the end-to-end argument to today's world. It points to the need to define what it means to be careful in a more general sense. Being careful implies making a considered and defensible judgment about which parts of the system are reliable and which parts are failure prone or open to malicious attack--being careful today implies a degree of risk management. Careful design implies constructing a set of checks and recovery modes that can compensate for the unreliable parts. The end user in St. Louis, moving a file from...