Wireless efficiency versus net neutrality.

Author: Jackson, Charles L.
Position: Rough Consensus and Running Code: Integrating Engineering Principles into the Internet Policy Debates
  I. INTRODUCTION
  II. CONGESTION IN THE INTERNET
    A. Controlling Internet Congestion
      1. Internet Congestion Control on the Honor System
      2. More Recent System Collapses
      3. Use of Established Congestion-Avoidance Technologies
      4. Security
    B. Impacts of Eliminating ISPs' Congestion Control Tools
  III. WIRELESS NETWORKS AND NETWORK NEUTRALITY
    A. Priority Routing Expands Capacity
    B. Priority in the Backhaul Network
      1. Separation of Control Signaling and User Information
      2. Converged Networks
      3. Network Neutrality and Backhaul Networks
    C. Cross-Layer Design
    D. Efficiency
    E. Handset Attributes and System Capacity
      1. Receiver Sensitivity
      2. Vocoder Performance
      3. Other Handset Attributes That Affect System Capacity
      4. Handset Attributes and Service Quality
      5. Poor Handsets or Poor Networks?
      6. Network Standards Evolution
  IV. SCHEDULING AND PRIORITY ROUTING IN SATELLITES, ELECTRICITY, AND WIRELESS
  V. CONCLUSION

I. INTRODUCTION

    Almost all systems in the world have limited capacity. Despite the best efforts of designers and operators, nature makes that capacity variable; it is best modeled as a random quantity. Consider the capacity of the airways between Washington, D.C., and New York. Although there is an upper limit set by the capacity of the airports at each end, weather often reduces capacity well below that upper limit. The supply of electricity also fluctuates. Generators and transmission lines fail; river flows and winds vary. The capacity of some geostationary communications satellites comes in physical units called transponders, which can fail unexpectedly. The electrical power industry and the satellite industry have developed a variety of priority mechanisms to deal with such fluctuations.

    Wireless networks and the Internet face similar limits. Equipment failures and fluctuating demand can result in situations in which users try to transmit more traffic than the network can carry. As described, one response to such overload in electricity and satellite communications is to give preferential treatment to one type of use or class of customers in order to match demand with capacity. There are currently a variety of policy proposals for wireless and Internet communications, referred to under the broad term network neutrality, that propose to prohibit or limit such preferential treatment when traffic overloads occur. This Article reviews congestion and interconnection issues in the Internet and wireless networks, and points out a number of ways in which such limits on preferential treatment could harm consumers.

    This Article first reviews congestion and congestion control in the Internet; second, the Article turns to wireless networks and shows that in addition to congestion issues, priority routing in wireless can make available capacity that would otherwise go unused.

    Policies that facilitate the wider availability and adoption of broadband access to the Internet promote a wide variety of public interest objectives, including jobs, safety of life, and quality of life. Conversely, restrictive regulations tie the hands of network engineers and managers, and prevent continued innovation that would make broadband networks more robust, more useful, and more secure. In addition, such regulations deny consumers certain services that may be effectively precluded in the absence of particular forms of network management. The successful operation of a broadband network requires considerable attention by network operators to many significant background details, such as protecting against security threats, controlling congestion, and making sure that delay-sensitive applications like VoIP and interactive games perform well. Allowing providers the flexibility to employ the tools and practices that most effectively address these concerns benefits all broadband consumers.

  II. CONGESTION IN THE INTERNET

    Congestion has long been a real problem for the Internet. Priority routing can, among other things, be an effective tool for controlling and minimizing the harms of congestion. Giving one class of traffic priority over another can substantially reduce the harms from congestion by enabling latency-sensitive applications that would fail in the absence of network management. Moreover, in the wireless world, giving some traffic priority over others permits expanding capacity without imposing significant costs.

    This Article discusses congestion control in the Internet as it has been practiced in the past and as it is practiced today. It also describes recent incidents of system collapse and how blocking low-priority traffic was a key factor in recovering from such collapses. The Article concludes that congestion controls within the network---congestion controls that do not treat each packet equally---offer substantial benefits for consumer welfare and public safety. In this context, the Article describes how certain tools, technologies, and congestion control techniques--including packet inspection technologies--though criticized by some, (1) can provide highly effective defenses against network attacks, in particular against denial-of-service attacks.

    As this discussion will show, imposing any form of a rule that prohibits any differential treatment or handling of different packets would create substantial efficiency losses by prohibiting the use of technologies that expand capacity, protect against congestion, and enable services or applications that would otherwise not function effectively. Such a rule would also make broadband networks less robust and less secure than they would otherwise be.

    A. Controlling Internet Congestion

      Congestion in the Internet is not merely a theoretical concern--it has long presented a real-world challenge for network engineers. A famous paper by Van Jacobson and Michael Karels describes several congestion collapses of the Internet. (2) The development of effective congestion control mechanisms was a key step in developing the modern Internet. Unfortunately, the primary congestion control mechanisms in today's Internet depend on the honor system for their effective operation.

      Incompetent or malicious programmers may subvert the honor system and set the stage for congestion failures. Happenstance, malicious acts, or equipment failure may also lead to congestion failures. Nor is congestion just a problem of the 1980s, as more recent system collapses demonstrate. The early Internet suffered a series of congestion collapses in the mid-1980s. (3) The collapses arose from a simple cause--users were transmitting more data on some paths than the paths could handle. Router queues would fill up, and subsequently arriving packets would be discarded. User machines would retransmit the lost packets, and congestion would continue. The Internet congestion was like the Beltway in Prince George's County after a Washington Redskins home game--except for the retransmissions. (4)
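      The collapse dynamic described above can be sketched as a toy model: dropped packets are re-offered as retransmissions, so once demand exceeds a link's capacity, the offered load grows tick after tick rather than subsiding. The capacities and arrival rates below are illustrative assumptions, not measurements of any real network.

```python
# Toy model of retransmission-driven congestion collapse (illustrative only).

def simulate(capacity, new_packets_per_tick, ticks):
    """Each tick, the link is offered new arrivals plus retransmissions
    of the previous tick's drops; it delivers at most `capacity` packets."""
    backlog = 0  # packets dropped last tick, awaiting retransmission
    offered_history = []
    for _ in range(ticks):
        offered = new_packets_per_tick + backlog
        delivered = min(offered, capacity)
        backlog = offered - delivered  # drops come back as retransmissions
        offered_history.append(offered)
    return offered_history

# Demand of 12 packets/tick on a 10-packet/tick link: the offered load
# climbs every tick because retransmissions pile on top of new traffic.
print(simulate(capacity=10, new_packets_per_tick=12, ticks=5))
# [12, 14, 16, 18, 20]
```

      The model captures the feedback loop in the text: without senders backing off, the queue of retransmissions compounds and the link carries ever more duplicate traffic.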

      1. Internet Congestion Control on the Honor System

        In 1993, researcher Van Jacobson of Lawrence Berkeley Laboratory described the congestion problem and the solution that he and his coworkers developed:

        "If too many people try to communicate at once," explains Jacobson, "the network can't deal with that and rejects the packets, sending them back. When a workstation retransmits immediately, this aggravates the situation. What we did was write polite protocols that require a slight wait before a packet is retransmitted. Everybody has to use these polite protocols or the Internet doesn't work for anybody." (5)

        Substantial thought and research went into developing congestion control mechanisms that have been embedded in TCP implementations. Although these methods are complex and subtle, the basic idea is simple: if a server or user terminal senses that the network seems to be losing packets, the server or user terminal should cut back sharply the rate at which it is transmitting data. Putting congestion control in the user devices at the edge of the network made sense for many reasons, and over the next few years, TCP implementations included congestion control features and such congestion failures became far rarer and more localized. (6)
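        The "cut back sharply" behavior described above can be sketched as an additive-increase/multiplicative-decrease rule of the kind embedded in TCP implementations. This is a hypothetical illustration, not an actual TCP stack; the function name and constants are assumptions chosen for clarity.

```python
# Minimal sketch of additive-increase/multiplicative-decrease (AIMD),
# the basic shape of TCP's loss-based congestion control (illustrative only).

def aimd_step(cwnd, loss_detected, increase=1.0, decrease_factor=0.5, floor=1.0):
    """Return the next congestion window given the current one.

    On a round trip with no sensed loss, grow the window additively to
    probe for spare capacity; on a sensed loss, cut the sending rate
    sharply (multiplicatively), never dropping below `floor`."""
    if loss_detected:
        return max(floor, cwnd * decrease_factor)  # back off sharply
    return cwnd + increase

# Trace a window through a single loss event.
cwnd = 10.0
history = []
for loss in [False, False, True, False, False]:
    cwnd = aimd_step(cwnd, loss)
    history.append(cwnd)
# history == [11.0, 12.0, 6.0, 7.0, 8.0]
```

        The sharp halving on loss is what makes the scheme "polite": every well-behaved sender surrenders capacity when the network shows signs of congestion, which is precisely the honor-system behavior discussed next.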

        It is, however, widely recognized that the fundamental problem still remains. There is finite capacity at every point in a network. Consider automobiles arriving at an intersection of a north-south and an east-west highway. If heavy traffic from the north, east, and west all tries to go south, the southbound road will be unable to carry the traffic and a traffic jam will ensue. Similarly, if the flow of packets arriving at a point in the Internet exceeds the traffic that can flow away from that point, some packets must be discarded. Furthermore, today's Internet congestion control works mostly on the honor system. Windows, Linux, and the Apple operating systems all come with TCP congestion control built in, but users can install software that violates (or at least abuses) the honor system. (7)

        Claiming that congestion control on the Internet works on the honor system is not merely a metaphor--it is a statement of fact. Users' systems must act altruistically, sacrificing their network service for the greater good, in order for these congestion control approaches to be effective. The Internet standards body, the Internet Engineering Task Force (IETF), in its May 2009 publication, made this point:

        In the current Internet architecture, congestion control depends on parties acting against their own interests. It is not in a receiver's interest to honestly return feedback about congestion on the path, effectively requesting a slower transfer. It is not in the sender's interest to reduce its rate in response to congestion if it can rely on others to do so. Additionally, networks may have strategic reasons to make other networks appear congested. (8)

        A recent textbook made much the same point: "it is possible for an ill-behaved source (flow) to capture an arbitrarily large fraction of the network...
