Rough Consensus and Running Code: Integrating Engineering Principles into Internet Policy Debates

Christopher S. Yoo
I. TUTORIAL
II. THE CONTINUING DEBATE OVER NETWORK MANAGEMENT AND QUALITY OF SERVICE
III. CHANGING TECHNOLOGY AND THE LIMITS OF THE LAYERED AND END-TO-END MODELS
IV. ARCHITECTURE AND NETWORK SECURITY
V. KEYNOTE ADDRESS BY PAUL MOCKAPETRIS
VI. NEW APPLICATIONS, NEW CHALLENGES
VII. THE FUTURE IS WIRELESS

    On May 6-7, 2010, the University of Pennsylvania's Center for Technology, Innovation and Competition hosted the conference, "Rough Consensus and Running Code: Integrating Engineering Principles into the Internet Policy Debates." (1) This conference brought together members of the engineering community, regulators, legal academics, and industry participants in an attempt to provide policymakers with a better understanding of the Internet's technical aspects and how they influence emerging issues of broadband policy.

    At various points during the recent debates over broadband policy, observers both inside and outside the government have acknowledged that the debate has yet to reflect a full appreciation of the engineering principles underlying the Internet and the technological opportunities and challenges posed by the existing architecture. The level of discourse is reminiscent of the days when economic arguments first began to be advanced during regulatory proceedings, when participants in policy debates lacked a sufficient vocabulary and an understanding of the underlying intuitions to engage in a meaningful discourse about the relevant insights.

    The conference's title, "Rough Consensus and Running Code," (2) also emphasizes that network engineering has long been a pragmatic rather than a theoretical discipline that does not lend itself to abstract conclusions. Network engineers recognize that there is no such thing as the perfect protocol. Instead, optimal network design varies with the particular services, technologies, and flows associated with any particular scenario. In other words, network engineering is more about shades of gray than absolutes, with any solution being contingent on the particular circumstances and subject to change over time as the underlying context shifts. Policymaking is better served by an understanding of the relevant tradeoffs than by categorical endorsements of particular architectural structures as being the foundation for the Internet's success.

    Another side effect of the lack of technical sophistication in the current debate is a tendency to defer to opinions advanced by leading members of the engineering community. People without technical backgrounds often treat strongly worded scientific statements as settled conclusions. Yet anyone who reads broadly in the technical literature quickly realizes that members of the engineering community often disagree sharply over the best way to move forward and that many seemingly authoritative declarations are actually positions in technical debates that are hotly contested and still ongoing. Just as in economics and law, where there are often as many different positions as there are people offering opinions, so too in network engineering. At the same time, many areas over which policymakers are now struggling are regarded by the engineering community as completely uncontroversial and long settled.

    Understanding how technical considerations should influence Internet policy thus requires a better understanding of the principles on which the Internet is based and an appreciation of the current areas of agreement and dispute within the engineering community. Toward this end, the conference program brought together engineers representing the full range of views on various issues currently confronting policymakers, as well as industry participants who have actual experience in deploying and running networks.

  I. TUTORIAL

    The conference began with a tutorial designed to provide an introduction to the basic engineering concepts underlying the Internet and to provide a flavor of the tradeoffs underlying the architectural choices. Major topics included the differences between host-to-host protocols, such as the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP); the edge-based approach currently used to manage network congestion, known as Additive Increase Multiplicative Decrease (AIMD); the deployment of active queue management techniques such as Random Early Discard (RED); the role of Classless Inter-Domain Routing (CIDR) in solving emerging routing problems; the challenges posed by network address translators (NATs); the role of the Border Gateway Protocol (BGP) in routing traffic; and the history of scheduling through techniques such as Integrated Services (IntServ), Differentiated Services (DiffServ), MultiProtocol Label Switching (MPLS), Explicit Congestion Notification (ECN), and emerging techniques such as Low Extra Delay Background Transport (LEDBAT). It offered some observations about functions that users currently demand but that the Internet was not designed to perform well, such as cost allocation, efficiency, security, mobility, and multicasting. It also offered some examples of how architectural decisions that are locally rational can create unexpected and potentially problematic interactions as traffic scales.
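    To give a flavor of the edge-based congestion control the tutorial described, the short Python sketch below models Additive Increase Multiplicative Decrease. It is purely illustrative and not drawn from the conference materials; the function name, parameters, and default constants are assumptions chosen for clarity rather than a description of any particular TCP implementation.

    # Illustrative sketch (not from the article): a toy model of AIMD
    # congestion control, in which a sender grows its window linearly
    # while the network is loss-free and backs off multiplicatively
    # when it infers congestion from packet loss.

    def aimd_window(loss_events, cwnd=1.0, increase=1.0, decrease=0.5):
        """Return the congestion window after each round trip.

        loss_events -- iterable of booleans, True where a loss was detected
        cwnd        -- starting congestion window (in segments)
        increase    -- additive step per loss-free round trip
        decrease    -- multiplicative factor applied on loss
        """
        history = []
        for lost in loss_events:
            cwnd = cwnd * decrease if lost else cwnd + increase
            cwnd = max(cwnd, 1.0)          # never shrink below one segment
            history.append(cwnd)
        return history

    # Example: steady growth punctuated by two loss events produces the
    # familiar "sawtooth" pattern.
    print(aimd_window([False] * 5 + [True] + [False] * 5 + [True]))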

  II. THE CONTINUING DEBATE OVER NETWORK MANAGEMENT AND QUALITY OF SERVICE

    Over the past two decades, some engineers have proposed a series of enhancements to the Internet's architecture to provide more reliable quality of service than the current "best efforts" architecture permits. (3) Other engineers believe that instead of deploying new forms of network management, the better solution is simply to add more capacity. (4) This panel reexamined this debate in light of recent changes to the technological and competitive environment.

    David Clark, who served as DARPA's chief protocol architect during the 1980s and currently serves as senior research scientist at the Computer Science and Artificial Intelligence Laboratory at MIT, expressed annoyance that the term "management" had been co-opted in the current debate, given that networks have always been managed. He also criticized the term "network neutrality," given that the Internet is not now and never has been neutral. (5) Instead, the issue is how to manage scarcity, which leads to congestion. Interestingly, the latency that degrades the performance of many time-sensitive applications is often caused by routers deployed by end users in their home networks (a phenomenon called "self congestion") and is alleviated, but not eliminated, by increasing the bandwidth of the access link. It can also arise in other locations on a steady-state or intermittent basis. Clark also indicated that concerns about strategic uses of discrimination to create artificial scarcity are overblown, in part because network providers do not need quality of service (QoS) techniques to create scarcity and in part because providing QoS would help innovation. The QoS techniques designed into the protocols that run the Internet ensure that decisions about prioritization are made by end users rather than network operators.
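    Clark's point that prioritization decisions rest with end users refers to mechanisms such as the Differentiated Services code point, which an end host can set on its own packets. The fragment below is a hedged illustration of that idea rather than anything presented at the conference; it assumes a platform whose socket library exposes IP_TOS, and the address, port, and payload are placeholders.

    # Illustrative sketch (not from the article): an end host marking its
    # own outbound traffic with a DiffServ code point, showing how the
    # protocols let end users, not just network operators, signal priority.
    import socket

    EF_DSCP = 0x2E            # Expedited Forwarding code point
    TOS_VALUE = EF_DSCP << 2  # DSCP occupies the upper six bits of the ToS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
    # Placeholder destination (TEST-NET address) and payload for illustration.
    sock.sendto(b"latency-sensitive payload", ("192.0.2.10", 5004))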

    Deke Kassabian, senior technology director for networking and telecommunications at the University of Pennsylvania, described how network architectures of large research universities are designed. Penn ensures that its user community has flexible and affordable access to network capacity by maintaining a private line connection to the nearest carrier hotel, where it can obtain easy access to a wide variety of service providers. In terms of performance management, Penn's basic approach is to add bandwidth rather than actively manage QoS. Penn does engage in some bandwidth management, however, by limiting students' Internet access on a per-address basis as well as capping the total amount available to students. Penn occasionally protects other users by limiting the bandwidth consumed by major research projects, sometimes diverting network intensive research projects onto Internet2's Interoperable On-demand Network (ION), which can establish dedicated circuits on a temporary basis. (6) In terms of security, rather than relying on a border firewall, Penn minimizes the impact on other users by deploying security as close as possible to the asset being protected through hardened server configurations, dedicated firewalls in front of a...
