Network neutrality and quality of service: what a nondiscrimination rule should look like.

Author: van Schewick, Barbara
Position: II. Proposals for Nondiscrimination Rules; B. All-or-Nothing Approaches; 2. Ban All Discrimination through C. Case-by-Case Approaches; 4. Problems with Case-by-Case Adjudication; c. Limited Ability to Protect Values and Actors That Network Neutrality Rules Are Designed to Protect, p. 37-81
 
  1. Ban all discrimination

    By contrast, some participants in the debate would ban all discrimination, requiring network providers to treat every packet the same. (106) The FCC's draft nondiscrimination rule in the Open Internet proceeding is an example of this type of approach. (107) A rule that required network providers to treat every packet the same would make it impossible to offer Quality of Service, which, by definition, entails the network treating packets differently. (108)

    Proponents of this option are concerned that network providers may use the provision of Quality of Service as a tool to distort competition among applications or classes of applications. For example, they are concerned that a network provider may offer Quality of Service exclusively to its own applications, but not to other, competing applications, or may sell Quality of Service exclusively to one of several competing applications. (109) They also point out that network providers who offer Quality of Service and are allowed to charge for it have an incentive to reduce the quality of the baseline service below acceptable levels to motivate users to pay for better service. (110) Moreover, selling Quality of Service allows network providers to profit from bandwidth scarcity, which reduces their incentives to increase the capacity of their networks. (111) While these arguments all have merit, these problems can be solved without totally banning Quality of Service. As will be explained below, it is sufficient to constrain how Quality of Service can be offered and charged for. (112)

    Supporters of banning Quality of Service also question whether Quality of Service is needed at all. (113) If there is no need for Quality of Service, then banning it creates limited social costs. (114) So far, proponents of a ban point out, the lack of Quality of Service has not prevented real-time applications from becoming successful on the public Internet. (115) For example, although Internet telephony is sensitive to delay and high variations in delay ("jitter") and may benefit from a network service that provides low delay and low jitter, Internet telephony applications such as Skype or Vonage work in the current Internet. (116) Video telephony applications like Skype or Google Video Chat function over today's broadband connections. (117) The success of real-time applications on today's best-effort Internet has two causes: First, many regions currently seem to have sufficient network capacity to prevent the lack of Quality of Service from becoming a problem. (118) If there is enough capacity so that congestion is generally low, the level of delay will be low enough most of the time to be tolerable for real-time applications. (119) Second, network engineers and application designers have developed end-host-based techniques that allow real-time applications to compensate for the lack of Quality of Service in the network. (120) Pointing to this experience, proponents of a ban argue that capacity increases, combined with end-host-based measures, are sufficient to meet the needs of applications that require low delay or low jitter. (121)
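One common family of end-host-based techniques alluded to above is the playout ("jitter") buffer used by Internet telephony applications: the receiver deliberately delays playback so that packets whose network delay varies can still be played out at a steady rate. The sketch below is illustrative only and is not drawn from the article; the fixed 60 ms playout deadline, the function name, and the sample delay values are all assumptions (real applications such as Skype adapt the deadline dynamically).

```python
import statistics

def apply_playout_buffer(arrival_delays_ms, playout_ms):
    """Play every packet at a fixed playout deadline.

    Packets arriving before the deadline are held and played exactly at
    playout_ms, so their effective delay is identical (zero jitter).
    Packets arriving after the deadline are too late to play ("lost").
    """
    played, late = [], 0
    for delay in arrival_delays_ms:
        if delay <= playout_ms:
            played.append(playout_ms)  # held until the common deadline
        else:
            late += 1
    return played, late

# Illustrative one-way network delays (ms) with high jitter:
# ~40 ms typical, occasional spikes near 90 ms.
delays = [40, 42, 38, 90, 41, 39, 85, 40]
played, late = apply_playout_buffer(delays, playout_ms=60)
print(statistics.pstdev(played), late)  # 0.0 2 -> jitter removed, 2 packets too late
```

The trade-off is visible in the toy model: a larger buffer loses fewer packets but adds delay to every packet, which is why end-host compensation works well only while network delay stays within tolerable bounds.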

    While available capacity affects the benefits of offering Quality of Service, the relationship between the two is more nuanced than is often assumed. Applications that would benefit from Quality of Service ("QoS-sensitive applications") are sensitive to the increases in delay, jitter, or loss, or to the variations in throughput, that arise if queues build up in routers along the application's path, creating congestion. (122) (See Box 6: The Relationship Between Congestion, Delay, Jitter, and Loss below.) A network that offers Quality of Service can "help" these applications by providing classes of service whose throughput, delay, loss, or jitter are better suited to the needs of QoS-sensitive applications than the unpredictable and potentially highly variable throughput, delay, loss, and jitter offered by the best-effort service. (123) Such classes of service may offer throughput, loss, delay, or jitter that is better than what best-effort service provides during times of congestion (124) or performance that is more constant and predictable than that of best-effort service. (125) These services, however, can improve on the performance of best-effort service only if there is congestion. (126) If there is no congestion (i.e., if all queues are empty), congestion-related loss and queuing delay will be zero, jitter will be low for all packets, and data flows will experience the maximum throughput and minimum end-to-end delay possible on their path. (127) No class of service can improve on that. Thus, Quality of Service is only useful if there is at least some congestion.
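The point that a better class of service can only matter once queues form can be made concrete with a toy simulation. The model below (a single outgoing link that transmits one packet per tick, with an optional strict-priority class) is my own illustrative assumption, not anything specified in the article; the class names "prio" and "be" (best effort) are invented for the sketch.

```python
from collections import deque

def queuing_delays(arrivals, use_priority):
    """Simulate one outgoing link that transmits one packet per tick.

    arrivals: list of (arrival_tick, klass) pairs, klass "prio" or "be".
    If use_priority is True, "prio" packets are transmitted before "be"
    packets; otherwise everything shares one first-come, first-served
    queue. Returns the queuing delay (ticks spent waiting) per class.
    """
    prio_q, be_q = deque(), deque()
    delays = {"prio": [], "be": []}
    arrivals = sorted(arrivals)
    i, t = 0, 0
    while i < len(arrivals) or prio_q or be_q:
        # Enqueue everything that has arrived by tick t.
        while i < len(arrivals) and arrivals[i][0] <= t:
            pkt = arrivals[i]
            (prio_q if use_priority and pkt[1] == "prio" else be_q).append(pkt)
            i += 1
        # Transmit one packet, priority class first.
        if prio_q:
            at, k = prio_q.popleft()
            delays[k].append(t - at)
        elif be_q:
            at, k = be_q.popleft()
            delays[k].append(t - at)
        t += 1
    return delays

# No congestion: arrivals are spaced out, so every packet -- whatever its
# class -- sees zero queuing delay, and priority changes nothing.
idle = [(0, "prio"), (2, "be"), (4, "prio")]

# Congestion: four packets arrive at once. Only now does the priority
# class improve on first-come, first-served.
burst = [(0, "be"), (0, "be"), (0, "be"), (0, "prio")]
print(queuing_delays(burst, use_priority=True)["prio"],
      queuing_delays(burst, use_priority=False)["prio"])  # [0] [3]
```

With empty queues ("idle"), the priority packet waits zero ticks either way; under the burst, priority cuts its wait from three ticks to zero, at the cost of the best-effort packets waiting longer.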

    BOX 6: THE RELATIONSHIP BETWEEN CONGESTION, DELAY, JITTER, AND LOSS

    Throughout this Part, "congestion" denotes the building up of a queue for an outgoing link at a router, which may increase delay, jitter, or packet loss. (128) (This definition differs from the definition of congestion that is often used by network providers. See Box 7: Definitions of Congestion and Benefits from Quality of Service below.)

    Data packets travel across the Internet from router to router until they reach their final destination. At each router, packets arrive through incoming links and are transmitted through the appropriate outgoing link that leads to the next stop--which can be a router or the receiving end host--on their path to their ultimate destination. If packets arrive for transmission over an outgoing link while another packet is being transmitted across that link, they are stored in a queue (or "buffer") for that link until it is their turn to be transmitted. (129) If packets destined for a specific outgoing link arrive faster than they can be transmitted over that link, the number of packets in the queue increases. This may happen, for example, at routers that connect faster incoming links with slower outgoing links, or when different data transfers across the same link coincide. (130) As the number of packets in the queue increases, packets arriving for transmission across that link have to wait longer until they are transmitted, which increases the delay they experience. If the queue is full and cannot accommodate additional packets, the router starts dropping arriving packets, creating packet loss.

    The end-to-end delay (or "latency") experienced by a packet indicates how long it takes the packet to travel from its origin to its destination. A packet's end-to-end delay consists of a number of components: how long it takes for the packet to be processed by the various routers along its path, how much time the packet spends in router queues waiting to be transmitted (in other words, how much congestion the packet encounters along its path), how long the various routers need to transmit the packet onto the appropriate outgoing link, and how long the packet needs to travel along the links from one router to the next. (131) The longer a packet has to wait in one or more router queues along its path, the higher its end-to-end delay.

    Now consider an application that sends a number of data packets from one end host to another along the same path (a "data flow"). If the different packets spend varying amounts of time in router queues along the way, their end-to-end delay will vary. This variation in end-to-end delay is called jitter. (132) If all packets in a data flow have a similar end-to-end delay (e.g., because none of them experiences queuing delay, or because all of them experience a similar, higher queuing delay), jitter is low. By contrast, if the end-to-end delay experienced by packets in the flow is highly variable (e.g., because some packets experience a lot of delay while others experience little), jitter is high.

    BOX 7: DEFINITIONS OF CONGESTION AND BENEFITS FROM QUALITY OF SERVICE

    Throughout this Part, "congestion" denotes the building up of a queue for an outgoing link at a router, which may increase delay, jitter, or packet loss. (See Box 6: The Relationship Between Congestion, Delay, Jitter, and Loss.) This definition is derived from the definition of congestion used in queuing theory. (133) As explained in the text, Quality of Service only provides an improvement over best-effort service if this type of congestion exists.

    By contrast, under a definition often used by network providers, congestion occurs if the average utilization of a link over a certain time period exceeds a certain threshold. (134) While Quality of Service is useless in a network that never experiences congestion under the definition used throughout this Part, it may still be useful in a network that is not congested under the definition used by network providers. Even in a network with low average utilization, queues will build up occasionally. (135) Thus, a network that is not congested under the network providers' definition may experience congestion under the definition used throughout this Part and may therefore benefit from Quality of Service. As a result, the statement "Quality of Service is only useful if there is congestion" is correct under this Part's definition of congestion, but false if the term "congestion" is used according to the network providers' definition.

    In a network where average utilization is high, congestion will occur often and for extended periods of time. During periods of extended congestion, QoS-sensitive applications may become effectively unusable with best-effort service and may require a different class of service to function satisfactorily. (136) In such a network, users may find Quality of Service very valuable and may be very willing to pay for it. (137)
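Box 7's distinction between the two definitions of congestion can be illustrated with a toy simulation. The parameters below (random bursts of packets, a link that transmits one packet per tick) are purely illustrative assumptions, chosen to show that a link can be lightly utilized on average--"uncongested" under the providers' definition--while still building up queues under this Part's definition.

```python
import random

def simulate_link(ticks, p_burst, burst_size, seed=1):
    """One outgoing link that transmits one packet per tick.

    In each tick, with probability p_burst a burst of burst_size packets
    arrives; otherwise nothing arrives. Returns the average utilization
    (fraction of ticks the link was busy) and the peak queue length.
    """
    random.seed(seed)
    queue = peak = sent = 0
    for _ in range(ticks):
        if random.random() < p_burst:
            queue += burst_size
        if queue:
            queue -= 1
            sent += 1
        peak = max(peak, queue)
    return sent / ticks, peak

util, peak = simulate_link(ticks=10_000, p_burst=0.02, burst_size=8)
# Average utilization is roughly 16%, yet a queue of 7+ packets (and thus
# 7+ ticks of added delay for the last packet in a burst) builds up
# whenever a burst lands.
print(round(util, 2), peak)
```

Raising the link speed or smoothing the bursts lowers both numbers, which is consistent with the text's observation that adding capacity reduces how often this kind of transient congestion occurs.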

    Adding capacity to reduce average utilization will reduce the amount of congestion. If average utilization is low, congestion will tend to occur less often and may cause...
