IP QoS

QoS is one of the most important issues in networks in general, and particularly so in the Internet and other IP networks. QoS deals with the strict management of traffic such that guarantees can be made and SLAs between customers and service providers can be observed. In the case of packet switching, QoS basically guarantees that a packet will travel successfully between any two points. QoS is also of concern in circuit-switched networks, such as the PSTN, where the demands of real-time voice impose conditions that need to be tightly controlled, including availability, latency, and control of noise. In packet-switched networks, other parameters need to be controlled in order to guarantee QoS, including latency end to end (i.e., from entry to exit point), jitter (i.e., the variation in delay between any two points), loss (i.e., dropped packets), sequencing (i.e., the order of delivery of the packets), and errors (i.e., the result of various impairments that affect transmission).

When the Internet was first conceived of and IP protocols were initially created, QoS was not seen as a critical feature. In fact, the Internet was built on the principle of best effort. When the traffic flows consisted of simple, bursty data, the economies of best effort were an advantage. But today, with the introduction of more and more real-time traffic, such as VoIP or video, and interactive applications, QoS is a requirement that cannot be ignored. In fact, services cannot be offered until QoS can be guaranteed, and best effort is simply not good enough. As a result, much attention is being focused on developing QoS mechanisms and protocols that enable packet-switched networks to properly accommodate the demanding needs of the emerging era of multimedia applications and services.

QoS Mechanisms

There is a growing requirement to meet or exceed the expectations of end users and applications communicating over a packet-switched network. To fulfill this requirement, we can take several approaches: We can overprovision networks so that bandwidth always exceeds demand and there are enough routers to ensure available capacity, we can use traffic engineering to steer traffic away from congestion, and we can use sophisticated queuing to manage contention where demand for bandwidth exceeds supply. This final approach requires QoS mechanisms.

Figure 8.22 shows the four main QoS mechanisms: classification (used for packet identification), conditioning (used for traffic shaping), queue management (used to manage the queue depth), and queue scheduling (used for packet scheduling).

Figure 8.22. IP QoS mechanisms

Classification identifies packets for subsequent forwarding treatment. Classification is performed in routers or hosts, and it is combined with other actions. It is based on one or more fields in the packet header, the payload contents, and the input interface. Classification is generally done in hardware at line rate.
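As a concrete sketch, header-based classification can be modeled as a function over the packet's 5-tuple. The class names and port ranges below are illustrative assumptions, not drawn from any particular router implementation:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    protocol: int   # e.g., 6 = TCP, 17 = UDP
    src_port: int
    dst_port: int

def classify(pkt: Packet) -> str:
    """Map a packet to a forwarding class using header fields.

    The classes and port ranges here are hypothetical examples.
    """
    if pkt.protocol == 17 and pkt.dst_port == 5060:
        return "voice-signaling"      # SIP over UDP
    if pkt.protocol == 17 and 16384 <= pkt.dst_port <= 32767:
        return "voice-bearer"         # a typical RTP port range
    if pkt.protocol == 6 and pkt.dst_port in (80, 443):
        return "web"
    return "best-effort"
```

In real routers this lookup is performed in hardware (e.g., with TCAMs) so that it can run at line rate, but the logical operation is the same table-style match shown here.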

Conditioning involves policing and shaping. Policing checks conformance to a configured or signaled traffic profile. In-profile traffic (i.e., traffic that meets the configured profile) is injected into the network. Out-of-profile traffic may be marked, delayed, or discarded. Shaping removes jitter, but at the expense of some latency. Policing and shaping are performed at the network ingress or at logical policing points.
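The conformance check a policer performs is commonly implemented with a token bucket. The following is a minimal sketch of that idea; the class name, parameters, and byte-based accounting are illustrative assumptions:

```python
class TokenBucketPolicer:
    """Single-rate token-bucket policer (illustrative sketch).

    Tokens accumulate at `rate` bytes per second, up to `burst` bytes.
    A packet is in-profile if enough tokens are available; otherwise
    it is out-of-profile and may be marked, delayed, or dropped.
    """

    def __init__(self, rate: float, burst: int):
        self.rate = rate              # token fill rate, bytes/sec
        self.burst = burst            # bucket depth, bytes
        self.tokens = float(burst)    # bucket starts full
        self.last_time = 0.0

    def conforms(self, packet_len: int, now: float) -> bool:
        # Replenish tokens for the elapsed interval, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last_time) * self.rate)
        self.last_time = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True   # in-profile
        return False      # out-of-profile
```

A shaper uses the same bucket but, instead of returning a verdict, delays out-of-profile packets until enough tokens have accumulated, which is why shaping trades latency for smoothness.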

Traditional first-in, first-out (FIFO) queuing provides no service differentiation and can lead to network performance problems, such as increased delay, jitter, and packet discard. IP QoS requires routers to support some form of queue scheduling and management to prioritize outbound packets and to control queue depth in order to minimize congestion. Several techniques are used for queue management and scheduling. The main approach to queue management is a technique called random early detection (RED). RED monitors a time-averaged queue length and drops arriving packets with increasing probability as the average length increases. No action is taken if the average length is less than the minimum threshold, and all packets are dropped if the average length is greater than the maximum threshold. The queue-scheduling process decides which packet to send out next. It is used to manage the bandwidth resources of the outbound interface. The different solutions involve tradeoffs in function and complexity.
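The RED drop decision described above can be sketched as follows. The threshold and weight values are illustrative defaults chosen for the example, not values from any standard:

```python
import random

class REDQueue:
    """Random Early Detection drop decision (illustrative sketch).

    Tracks an exponentially weighted moving average of the queue
    length and drops arriving packets with probability that rises
    linearly between the minimum and maximum thresholds.
    """

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.min_th = min_th     # below this average, never drop
        self.max_th = max_th     # at or above this average, always drop
        self.max_p = max_p       # drop probability just below max_th
        self.weight = weight     # EWMA weight for the average
        self.avg = 0.0

    def should_drop(self, current_queue_len: int) -> bool:
        # Update the time-averaged queue length.
        self.avg = (1 - self.weight) * self.avg + self.weight * current_queue_len
        if self.avg < self.min_th:
            return False                  # queue is short: accept
        if self.avg >= self.max_th:
            return True                   # queue is long: drop
        # Between the thresholds: drop with linearly increasing probability.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p
```

Averaging over time rather than reacting to the instantaneous queue length is what lets RED absorb short bursts while still signaling persistent congestion early.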

Queuing Mechanisms

Although there are no standards for QoS queuing, most router implementations provide some sort of non-FIFO queuing mechanism that is implementation specific. There are four common mechanisms, shown in Figures 8.23 through 8.26: Fair Queuing (FQ), Weighted Fair Queuing (WFQ), Weighted Round Robin (WRR), and Deficit Round Robin (DRR).

Figure 8.23. IP QoS queue scheduling: Fair Queuing

Figure 8.24. IP QoS queue scheduling: Weighted Fair Queuing

Figure 8.25. IP QoS queue scheduling: Weighted Round Robin

Figure 8.26 illustrates how DRR works. This example uses a quantum value of 1,000 bytes and three queues. Queue 1 holds a packet of 1,500 bytes, queue 2 a packet of 800 bytes, and queue 3 a packet of 1,200 bytes. On the first pass, the packet in queue 1 is not served because it is 1,500 bytes and only the quantum of 1,000 bytes has been credited, so the deficit counter remains at 1,000. The packet in queue 2 is served because it is smaller than the quantum; because the quantum is 1,000 and the packet is 800 bytes, the deficit counter is reset to 200. The packet in queue 3 is not served because, at 1,200 bytes, it is longer than the quantum, so its deficit counter also remains at 1,000.

Figure 8.26. IP QoS queue scheduling: Deficit Round Robin

On the second pass, queue 1's deficit counter of 1,000, carried over from the first pass, is increased by the quantum of 1,000, giving 2,000 bytes. The 1,500-byte packet can now be served because it is less than 2,000 bytes; after it is sent, the deficit counter is reset to the remainder, 500. Queue 2 is now empty because its packet was already served, so its deficit counter is reset to 0. Queue 3's counter of 1,000 from the first pass is likewise increased by the quantum of 1,000, to 2,000 bytes; its 1,200-byte packet is served, and the deficit counter is reset to 800.
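The two passes above can be reproduced with a short DRR sketch. Packets are represented simply by their lengths in bytes; the function name and return shape are illustrative:

```python
from collections import deque

def drr_schedule(queues, quantum, rounds):
    """Run DRR for a fixed number of rounds.

    Returns the list of (round, queue_index, packet_len) served,
    plus the final deficit counters.
    """
    deficits = [0] * len(queues)
    served = []
    for rnd in range(1, rounds + 1):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0          # empty queue carries no deficit
                continue
            deficits[i] += quantum       # credit the quantum on each visit
            # Serve head-of-line packets while they fit the counter.
            while q and q[0] <= deficits[i]:
                pkt = q.popleft()
                deficits[i] -= pkt
                served.append((rnd, i, pkt))
    return served, deficits

# The example from Figure 8.26: quantum 1,000 and packets of
# 1,500, 800, and 1,200 bytes in queues 1 through 3.
queues = [deque([1500]), deque([800]), deque([1200])]
served, deficits = drr_schedule(queues, quantum=1000, rounds=2)
# Pass 1 serves only the 800-byte packet; pass 2 serves the
# 1,500-byte packet (counter 2,000 -> 500) and the 1,200-byte
# packet (counter 2,000 -> 800), matching the walkthrough.
```

Because each queue's unused credit carries over, long packets are never starved, yet over time every queue receives roughly one quantum of bandwidth per round.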

The IP QoS Continuum

IP QoS has a continuum that involves a range of cost and complexity (see Figure 8.27). At the bottom of the range is best effort. Basically, this implies fair access to all, FIFO queuing, and no priority. Next on the scale is Differentiated Services (DiffServ), which is a bit more costly and complex. With DiffServ, packets carry a class or priority ID, and the routers use per-class forwarding. At the top of the heap is Integrated Services (IntServ). This is the most costly and most complex level, but it allows per-flow state maintenance, uses RSVP signaling, and provides for guaranteed service and bounded delay. Next Steps in Signaling (NSIS) addresses the introduction of QoS on an end-to-end basis. The following sections briefly describe the DiffServ and IntServ QoS schemes as well as NSIS; Chapter 10 describes the first two in more detail.

Figure 8.27. The IP QoS continuum


DiffServ

DiffServ is a prioritization model with preferential allocation of resources based on traffic classification. Figure 8.28 shows the DiffServ architecture, in which packets of different sizes and colors suggest different priorities and queuing treatment. DiffServ supports multiple service levels over an IP-based network. It uses a DSCP to select the service (that is, the per-hop behavior) that the packet will receive at each DiffServ-capable node. The DSCP is a field in packets transported over DiffServ networks that classifies the packets according to priority. (The DSCP field occupies what was formerly the Type of Service field in IPv4 and the Traffic Class field in IPv6.) DiffServ classifies traffic by marking the IP header at the ingress to the network with flags corresponding to a small number of per-hop behaviors, and the per-hop behaviors map to the DSCPs. DiffServ then sorts the packets into queues via the DSCP. The various queues get different treatment in terms of priority, share of bandwidth, and probability of discard.

Figure 8.28. The DiffServ architecture
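For illustration, reading and writing a DSCP in the ToS/Traffic Class byte is simple bit manipulation, since the DSCP occupies the upper six bits (the lower two are used for ECN). The two named codepoints shown are the standard values from RFC 2474 and RFC 3246; the function names are illustrative:

```python
def dscp_from_tos(tos_byte: int) -> int:
    """Extract the 6-bit DSCP from the 8-bit ToS/Traffic Class field."""
    return (tos_byte >> 2) & 0x3F

def tos_from_dscp(dscp: int) -> int:
    """Build a ToS byte from a DSCP, leaving the ECN bits at zero."""
    return (dscp & 0x3F) << 2

# Standard codepoints:
EF = 0b101110     # Expedited Forwarding, decimal 46 (RFC 3246)
AF41 = 0b100010   # Assured Forwarding class 4, low drop precedence, 34
```

An ingress router marking voice traffic as EF, for example, would write `tos_from_dscp(EF)` (0xB8) into the header; interior routers then need only this 6-bit value to choose the queue.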

There are several defined DiffServ per-hop behaviors, including the default (best-effort) behavior, the class selectors, Assured Forwarding (AF), and Expedited Forwarding (EF).

IntServ

IntServ, which is specified in RFC 1633, extends the Internet model to support real-time and best-effort services. It provides extensions to the best-effort service model to allow control over end-to-end packet delays, and its key building blocks are resource reservation and admission control. IntServ, a per-flow, resource reservation model, requires Resource Reservation Protocol (RSVP). RSVP allows applications to reserve router bandwidth. Its service provides a bandwidth guarantee and a reliable upper bound to packet delay. RSVP is therefore a resource reservation setup protocol for the Internet. Its major features include the use of soft state in the routers, receiver-controlled reservation requests, flexible control over sharing of reservations and forwarding of subflows, and the use of IP Multicast for data distribution.

Unfortunately, using RSVP on the public Internet is impractical. The resource requirements for running RSVP on a router increase proportionally with the number of separate RSVP reservations, and this results in a big scalability problem. RSVP signaling has evolved into a general-purpose signaling protocol for enterprise-based IP networks, applications, and services. Classic RSVP (described in RFC 2205) is for application-requested edge-to-edge QoS signaling.

A router-based RSVP modification for MPLS traffic engineering, called RSVP Traffic Engineering (RSVP-TE), is specified under RFC 3209. RSVP-TE is in addition to the RSVP protocol for establishing label-switched paths in MPLS networks. (MPLS is covered in Chapter 10.) RSVP-TE supports the instantiation of explicitly routed label-switched paths with or without resource reservations. It also supports smooth rerouting of label-switched paths, preemption, and loop detection. There are RSVP-TE extensions for fast restoration and extensions for Generalized MPLS. (GMPLS is discussed in Chapter 11.)

Another new standard, called Aggregated RSVP, is specified under RFC 3175. Aggregated RSVP messages install an ingress-to-egress fat pipe. Normal RSVP message flow triggers the creation, expansion, or contraction of the ingress-to-egress fat pipe. Normal RSVP messages are forwarded edge to edge and ignored by interior routers inside the aggregation region. Aggregated RSVP retains the edge-to-edge RSVP signaling paradigm but employs DiffServ forwarding and Aggregated RSVP signaling in the core network.

RSVP Proxy is an extension to RSVP message processing currently in draft form at the IETF. In RSVP Proxy, an intermediate router responds, or proxies, a reservation message back to the sender. In this case, RSVP messages do not travel edge to edge as usual. Proxied reservation is generated under policy control. Applications can signal their presence and receive policy-based designation for special network treatment.

NSIS

Current QoS signaling is limited in scope and scale. Classic RSVP operates between hosts and routers but is absent in the greater Internet. RSVP-TE is edge-to-edge for traffic engineering. RSVP does not cross administrative domains, nor does it really traverse different technology regimes such as wireless or support mobility. An IETF working group has therefore been chartered to develop the requirements, architectures, and protocols for expanding and extending QoS signaling across the Internet, and a new architecture called NSIS is emerging.

The design goals of NSIS include applicability across different QoS technologies, such as DiffServ and MPLS, as well as resource availability upon request prior to a reservation request. NSIS is modular in design. It involves the decoupling of the protocol and the information carried within it. It allows reuse of existing QoS provisioning, where appropriate, and it provides independence between signaling and provisioning mechanisms. As shown in Figure 8.29, the NSIS architecture includes a QoS service class, which specifies QoS requirements of a flow or traffic aggregate. It also includes QoS signaling, which conveys the QoS service class into the network. A QoS initiator triggers QoS signaling. QoS control interprets and acts on QoS signaling messages in the network.

Figure 8.29. NSIS

RSVP version 2 (RSVPv2) defines a lighter version of RSVP signaling that addresses the NSIS requirements. It has been designed to coexist with RSVPv1 and have efficient core functionality, with service-specific extensibility. RSVPv2 allows for unicast operation, and it is sender oriented and soft-state based. It has multiple service specifications, and it accommodates raw IP transport.
