8.3 LAN and WAN media QoS features
Before dealing with large-scale architectures for QoS, it is worth investigating what facilities are available at the media level for different local and wide area technologies. These facilities represent the lowest interface available for mapping priority and service-quality requests, and, as we will see, features vary markedly.
8.3.1 LAN QoS features
General IEEE 802 service model
The IEEE 802 service model uses an abstraction called the User Priority Value (UPV) to group traffic according to the Class of Service (CoS) required. The UPV may or may not be physically carried over the network, depending upon the facilities offered by the media type, as follows:
- Token Ring/IEEE 802.5 carries this value encoded in its FC octet.
- Ethernet/IEEE 802.3 does not carry it.
- IEEE 802.12 may or may not carry it, depending on the frame format in use. With the 802.3 frame format, only two levels of priority (high/low) can be recovered, based on the value of priority encoded in the start delimiter (SD) of the 802.12 frame.
The revised IEEE 802.1D standard (incorporating 802.1p and 802.1Q) defines a consistent way to carry the value of the UPV over a bridged network comprising Ethernet, Token Ring, Demand Priority, FDDI, or other MAC layer media, and is described in the following text.
IEEE 802.1p and IEEE 802.1Q
Unlike protocols such as FDDI and Token Ring, Ethernet does not offer any useful priority facilities. To address this problem, IEEE 802.1p (part of the revised 802.1D bridging standard) uses a special 3-bit field called the user priority value. The user priority value is encoded within a tag field defined by another standard, IEEE 802.1Q (also part of 802.1D), which is designed to support VLAN tagging via a 32-bit tag header inserted into the frame's MAC header immediately after the source address (note that support for IEEE 802.1p does not imply that VLANs need to be implemented). The 802.1Q tag header comprises the following fields (a packing sketch follows the list):
- Three priority bits are used for signaling 802.1p switches.
- One bit (the Canonical Format Indicator) identifies optional Token Ring encapsulation.
- Twelve bits carry the virtual LAN ID used for virtual LAN membership.
- Sixteen bits (the Tag Protocol Identifier, a reserved EtherType value) identify the frame as an 802.1Q-tagged frame.
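The field layout just described can be made concrete with a short sketch. This is illustrative only (the helper names are not part of the standard); it assumes the standard TPID value of 0x8100 used to mark tagged frames.

```python
# Illustrative sketch: packing and parsing the 32-bit IEEE 802.1Q tag header.
import struct

TPID = 0x8100  # Tag Protocol Identifier (reserved EtherType marking a tagged frame)

def pack_dot1q_tag(upv: int, cfi: int, vlan_id: int) -> bytes:
    """Build the 4-byte tag: 16-bit TPID, 3-bit priority, 1-bit CFI, 12-bit VLAN ID."""
    assert 0 <= upv <= 7 and cfi in (0, 1) and 0 <= vlan_id <= 0xFFF
    tci = (upv << 13) | (cfi << 12) | vlan_id   # Tag Control Information
    return struct.pack("!HH", TPID, tci)

def unpack_dot1q_tag(tag: bytes):
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == TPID, "not an 802.1Q-tagged frame"
    return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0xFFF  # (UPV, CFI, VLAN ID)

# Example: UPV 5 (interactive video), canonical format, VLAN 100.
tag = pack_dot1q_tag(upv=5, cfi=0, vlan_id=100)
assert unpack_dot1q_tag(tag) == (5, 0, 100)
```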
The IEEE ratified the 802.1p standard in September 1998. 802.1p offers two major functions, as follows:
- Traffic class expediting—The 802.1p header includes a 3-bit field for prioritization called the User Priority Value (UPV), which enables packets to be grouped into one of eight traffic classes. Higher-priority traffic gets through faster; lower-priority traffic may be dropped if resources are oversubscribed.
- GMRP multicast filtering—The 802.1p standard also offers the capability to filter multicast traffic dynamically, in order to stop multicast leaking in switches. This facilitates dynamic registration of multicast groups, ensuring that traffic is delivered only to those who request it.
UPV to traffic type mappings are listed as follows in increasing priority order (for further information, see [24, 25]). Note that the value 0, the default best effort, has a higher priority than value 2, standard.
- 1—binary 001: Background, loss-eligible traffic
- 2—binary 010: Standard (spare)
- 0—binary 000: Best-effort default, invoked automatically when no other value has been set
- 3—binary 011: Excellent effort (for business-critical traffic such as SAP)
- 4—binary 100: Controlled load (for applications such as streaming multimedia)
- 5—binary 101: Video, interactive, and delay-sensitive (less than 100 ms latency and jitter)
- 6—binary 110: Voice, interactive, and delay-sensitive (less than 10 ms latency and jitter)
- 7—binary 111: Network control (reserved for control protocols such as RIP and OSPF)
This mapping enables support for policy-based QoS by specifying a Class of Service (CoS) on a frame basis. Lower-priority packets are deferred if there are packets waiting in higher-priority queues. This enables the differentiation of real-time data, voice, and video traffic from normal (best effort) data traffic such as e-mail. This solution benefits businesses that want to deploy real-time multimedia traffic and providers offering SLAs that extend into the LAN environment.
The IEEE specifications make no assumptions about how the UPV is to be used by end stations or by the network, nor do they provide recommendations about how a sender should select the UPV (the interested reader is referred to [26]). In practice the UPV may be set by workstations, servers, routers, or Layer 3 switches. In host devices, 802.1p-compliant Network Interface Cards (NICs) are able to set or read the UPV bits. Hubs and switches can use this information to prioritize traffic prior to forwarding to other LAN segments. For example, without any form of prioritization a switch would typically delay or drop packets in response to congestion. On an 802.1p-enabled switch, a packet with a higher priority receives preferential treatment and is serviced before a packet with a lower priority; lower-priority traffic is therefore more likely to be dropped in preference. The basic operations are as follows:
- Unless a host computer has properly negotiated QoS with the network, the host should only mark packets transmitted with a best-effort priority value. If the host computer has a packet scheduler installed, the host uses the appropriate QoS signaling components to negotiate with the network for higher 802.1p priority values.
- Routers and Layer 3 switches use packet payload information such as TCP and UDP port numbers to differentiate application flows. This information can be used together with network administrator-defined policies to classify any untagged packets.
- Since Layer 2 switches cannot see data above the MAC layer, the 802.1p specification enables them to understand packet priorities but not perform classification.
- In addition to these functions, switches and routers must employ multiple output queues (so-called multiqueue hardware) for each port to be capable of processing priority requests effectively. The 802.1p specification assumes multiqueue hardware implicitly in that it recommends how the various traffic classes should be assigned in systems with multiple queues per port.
- IEEE 802.1D defines static priority queuing as the default mode of operation of switches that implement multiple queues; the UPV is really a priority only in a loose sense, since it depends on the number of traffic classes actually implemented by a switch.
The general switch algorithm is as follows. Packets are queued within a particular traffic class based on the received UPV, the value of which is either obtained directly from the packet, if an IEEE 802.1Q header or IEEE 802.5 network is used, or is assigned according to some local policy. The queue is selected based on a mapping from UPV (0 through 7) onto the number of available traffic classes. A switch may implement one or more traffic classes. The advertised IntServ parameters and the switch's admission control behavior may be used to determine the mapping from UPV to traffic classes within the switch. A switch is not precluded from implementing other scheduling algorithms, such as weighted fair queuing and round robin.
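As an illustration only, the following sketch shows one way such a mapping and static priority service might look in code. The even split of the ordered UPVs across the available queues is an assumption made for brevity; IEEE 802.1D's annex recommends a specific table for each queue count.

```python
# Hypothetical sketch of 802.1p-style queue selection in a multiqueue switch port.
from collections import deque

# UPVs ordered from lowest to highest priority (1 < 2 < 0 < 3 < 4 < 5 < 6 < 7),
# following the mapping list given earlier in this section.
UPV_ORDER = [1, 2, 0, 3, 4, 5, 6, 7]

class PortScheduler:
    def __init__(self, num_classes: int):
        # One FIFO queue per implemented traffic class.
        self.queues = [deque() for _ in range(num_classes)]
        # Map each UPV onto a class by splitting the ordered UPVs evenly
        # (an assumption; the 802.1D annex gives its own recommended table).
        self.upv_to_class = {
            upv: (i * num_classes) // len(UPV_ORDER)
            for i, upv in enumerate(UPV_ORDER)
        }

    def enqueue(self, frame, upv: int = 0):
        self.queues[self.upv_to_class[upv]].append(frame)

    def dequeue(self):
        # Static (strict) priority: always serve the highest non-empty class.
        for q in reversed(self.queues):
            if q:
                return q.popleft()
        return None

# Example: a two-queue port groups UPVs {1, 2, 0, 3} and {4, 5, 6, 7}.
port = PortScheduler(num_classes=2)
port.enqueue("email", upv=0)
port.enqueue("voice", upv=6)
assert port.dequeue() == "voice"   # higher class served first
```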
Issues with 802.1p
There are several open issues with 802.1p, including the following:
- IEEE 802.1p specifies no admission control protocols. An application could mark all of its packets with network control priority and easily congest the network. Currently this is viewed as the responsibility of NIC drivers (i.e., self-regulating and therefore potentially open to abuse).
- IEEE 802.1p does not limit the amount of resources a particular application consumes. A mechanism to negotiate a guaranteed QoS for each application, end to end, according to the network policy maintained by local network administrators would be a useful improvement.
- Interoperability has been a major problem but is being addressed through initiatives such as the Subnet Bandwidth Manager (SBM) [YAV98]. This specification deals with how to deliver end-to-end QoS between Ethernet and other networks.
- Some older network analyzers may not decode 802.1p protocol information properly.
Several of the leading switches now support the standard. For example, 3Com supports 802.1p in several products via its DynamicAccess software with its industry-leading EtherLink and EtherLink Server NICs, at the core with the CoreBuilder 3500 Layer 3 switch, and now in the wiring closet with the new SuperStack II Switch 1100 and 3300 software. At the desktop, Microsoft supports 802.1p in its Windows 98 and Windows 2000 operating systems, alongside other QoS mechanisms such as differentiated services and RSVP; support for 802.1p priority was included in NDIS 5.0.
Ethernet/IEEE 802.3
There is no explicit traffic class or UPV field carried in Ethernet packets. This means that UPV must be regenerated at a downstream receiver or switch according to some defaults or by parsing further into higher-layer protocol fields in the packet. Alternatively, IEEE 802.1p with 802.1Q encapsulation may be used to provide an explicit UPV field on top of the basic MAC frame format. For the different IP packet encapsulations used over Ethernet/IEEE 802.3, it will be necessary to adjust any admission control calculations according to the framing and padding requirements, as listed in the following chart.
Encapsulation | Frame Overhead | IP MTU
---|---|---
EtherType | 18 bytes | 1,500 bytes
EtherType + IEEE 802.1D/Q | 22 bytes | 1,500 bytes
EtherType + LLC/SNAP | 24 bytes | 1,492 bytes
Note that the packet length of an Ethernet frame using the IEEE 802.1Q specification exceeds the current IEEE 802.3 MTU value (1,518 bytes) by 4 bytes. The change of maximum MTU size for IEEE 802.1Q frames is being accommodated by IEEE 802.3ac [28].
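As a minimal sketch of the admission control adjustment described above, the function below converts an IP-level flow description into the bandwidth actually consumed on the wire, using the per-encapsulation overhead figures from the chart. The 64-byte minimum frame size is a standard Ethernet constraint applied here as a simplification (it strictly applies to untagged frames), and preamble and interframe gap are ignored; the function and parameter names are illustrative.

```python
# Hypothetical sketch: adjusting an admission control calculation for framing overhead.

FRAMING = {                          # encapsulation: (frame overhead, IP MTU) in bytes
    "EtherType":              (18, 1500),
    "EtherType+802.1D/Q":     (22, 1500),
    "EtherType+LLC/SNAP":     (24, 1492),
}
MIN_FRAME = 64                       # minimum Ethernet frame size (simplification)

def wire_rate_bps(ip_packet_size: int, packets_per_sec: float,
                  encapsulation: str = "EtherType") -> float:
    """Bits per second consumed on the wire by an IP flow, including framing."""
    overhead, mtu = FRAMING[encapsulation]
    assert ip_packet_size <= mtu, "packet exceeds IP MTU for this encapsulation"
    frame = max(ip_packet_size + overhead, MIN_FRAME)   # pad short frames
    return frame * 8 * packets_per_sec

# Example: 64-byte voice packets at 50 pps cost (64 + 22) * 8 * 50 = 34,400 bps
# once the 22-byte 802.1D/Q framing overhead is included.
print(wire_rate_bps(64, 50, "EtherType+802.1D/Q"))
```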
Token Ring/IEEE 802.5
The Token Ring standard [29] provides a priority mechanism to control both the queuing of packets for transmission and the access of packets to the shared media. The access priority features are an integral part of the token-passing protocol. The Token Ring priority mechanisms are implemented using bits within the Access Control (AC) and the Frame Control (FC) fields in either a token or a frame. Token Ring has the following characteristics:
- Access priority is indicated by the first three bits (called the token priority bits) of the AC field.
- Token Ring also uses a concept of reserved priority, which relates to the value of priority that a station uses to reserve the token for the next transmission on the ring. Reservation of a priority level is indicated in the last three bits (the reservation bits) of the AC field by a node requiring higher transmission priority. When a free token is circulating, only a station having an access priority greater than or equal to the reserved priority in the token will be allowed to seize the token for transmission. If the passing token or frame already contains a priority reservation higher than the desired one, the ring station must leave the reservation bits unchanged. If, however, the token's reservation bits have not yet been set (i.e., binary 000) or indicate a lower priority than the desired one, the ring station can set the reservation bits to its required priority (see the sketch after this list). A node originating a token of higher priority enters priority-hold state (also called a stacking station in the IEEE 802.5 token-passing ring standards). Readers are referred to [27] for further discussion of this topic.
- The last three bits of the FC field of an LLC frame (the user priority bits) are obtained from the higher layer in the UPV parameter when it requests transmission of a packet. This parameter also establishes the access priority used by the MAC. The UPV is conveyed end-to-end by the user priority bits in the FC field and is typically preserved through Token Ring bridges of all types. In all cases, 0 is the lowest priority.
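The reservation rule described above can be summarized in a short, hypothetical sketch (the function names are illustrative, not taken from the standard): a station may raise, but never lower, the reservation carried in a passing token or frame, and a free token may only be seized at an access priority at least equal to its reserved priority.

```python
# Hypothetical sketch of the Token Ring reservation and token-seizure rules.

def update_reservation_bits(current_reservation: int, desired_priority: int) -> int:
    """Return the reservation value (0-7) the station writes back into the AC field."""
    assert 0 <= current_reservation <= 7 and 0 <= desired_priority <= 7
    if desired_priority > current_reservation:
        return desired_priority      # raise the reservation to our priority
    return current_reservation       # already reserved at an equal or higher priority

def may_seize_token(token_reserved_priority: int, station_access_priority: int) -> bool:
    """A free token may be seized only at access priority >= the reserved priority."""
    return station_access_priority >= token_reserved_priority

# Example: a station queuing priority-6 (voice) traffic raises a reservation of 3.
assert update_reservation_bits(3, 6) == 6
assert may_seize_token(token_reserved_priority=6, station_access_priority=4) is False
```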
A Token Ring station is theoretically capable of separately queuing each of the eight levels of requested UPV and then transmitting frames in order of priority. Table 8.3 lists the recommended use of the priority levels (note that different implementations of Token Ring may deviate from these definitions). A station sets reservation bits according to the UPV of frames that are queued for transmission in the highest-priority queue. This allows the access mechanism to ensure that the frame with the highest priority throughout the entire ring will be transmitted before any lower-priority frame.
Priority Bits (Dec) | Priority Bits (Binary) | Description
---|---|---
0 | 000 | Normal user priority (non-time-critical data)
1 | 001 | Normal user priority (non-time-critical data)
2 | 010 | Normal user priority (non-time-critical data)
3 | 011 | Normal user priority (non-time-critical data)
4 | 100 | LAN management
5 | 101 | Time-sensitive data
6 | 110 | Real-time-critical data
7 | 111 | MAC frames
To prevent a high-priority station from monopolizing the LAN medium, and to ensure that the ring priority can eventually be lowered again, the protocol provides fairness within each priority level. To reduce frame jitter associated with high-priority traffic, Annex I to the IEEE 802.5 Token Ring standard recommends that stations transmit only one frame per token and that the maximum information field size be 4,399 bytes whenever delay-sensitive traffic is traversing the ring. Most existing implementations of Token Ring bridges forward all LLC frames with a default access priority of 4. Annex I recommends that bridges forward LLC frames that have a UPV greater than 4 with a reservation equal to the UPV (although the draft IEEE 802.1D [30] permits network management to override this behavior). The capabilities provided by the Token Ring architecture, such as user priority and reserved priority, can provide effective support for integrated services flows that require QoS guarantees.
For the different IP packet encapsulations used over Token Ring/IEEE 802.5, it will be necessary to adjust any admission control calculations according to the framing requirements listed in the following chart.
Encapsulation | Frame Overhead | IP MTU
---|---|---
EtherType + IEEE 802.1D/Q | 29 bytes | 4,370 bytes
EtherType + LLC/SNAP | 25 bytes | 4,370 bytes
Note that the suggested MTU specified in RFC 1042 [24] is 4,464 bytes, but there are issues related to discovering the maximum supported MTU between any two points within and between Token Ring subnets. The MTU reported here is consistent with the IEEE 802.5 Annex I recommendation.
Fiber Distributed Data Interface (FDDI)
The FDDI standard [29] provides a priority mechanism that can be used to control both packet queuing and access to the shared media. The priority mechanisms are implemented using mechanisms similar to Token Ring; however, access priority is based upon timers rather than token manipulation. For the discussion of QoS mechanisms, FDDI is treated as a 100-Mbps Token Ring technology using a service interface compatible with IEEE 802 networks. FDDI supports real-time allocation of network bandwidth, making it suitable for a variety of different applications, including interactive multimedia. FDDI defines two main traffic classes, as follows:
- Synchronous traffic—Used by nodes requiring continuous transmission capability (say, for voice or videoconferencing applications); it can be configured to use synchronous bandwidth. The FDDI SMT specification defines a distributed bidding scheme to allocate FDDI bandwidth.
- Asynchronous traffic—Bandwidth is allocated using an eight-level priority scheme. Each station is assigned an asynchronous priority level. FDDI also permits extended dialog, where stations may temporarily use all asynchronous bandwidth.
Synchronous data traffic is supported through strict media access and delay guarantees and is allowed to consume a fixed proportion of the network bandwidth, leaving the remainder for asynchronous traffic. Stations that require standard client/server or file transfer applications typically use FDDI's asynchronous bandwidth. The FDDI priority mechanism can lock out stations that cannot use synchronous bandwidth and have too low an asynchronous priority.
100BaseVG/IEEE 802.12
IEEE 802.12 is a standard for a shared 100-Mbps LAN and is specified in [31]. The MAC protocol for 802.12 is called demand priority. It supports two service priority levels: normal priority and high priority (ensuring guaranteed bandwidth). Data packets from all network nodes (hosts, bridges, and switches) are served using a simple round-robin algorithm. Demand priority enables data transfer with very low latency across a LAN hub by using on-the-fly packet transfer. The demand priority protocol supports guaranteed bandwidth through a priority arrangement, as follows:
- When an end system wishes to transmit, it issues a request.
- This request is sent to a switch and is handled immediately if there is no other active request (i.e., on an FCFS basis).
- When a packet is transmitted to the hub, the hub determines the appropriate output port in real time and switches the packet to that port.
Demand priority is deterministic in that it ensures that all high-priority frames have strict priority over frames with normal priority, and even normal-priority packets have a maximum guaranteed access time to the medium (if a normal-priority packet has been waiting at the head of an output queue for longer than the packet promotion time, i.e., 200–300 ms, its priority is automatically promoted to high priority [31]).
Essentially there are three mechanisms for mapping UPV values onto 802.12 frames, as follows:
- With 802.3 frames, the UPV is encoded in the Starting Delimiter (SD) of the 802.12 frame.
- With 802.5 frames, the UPV is encoded in the user priority bits of the FC field in the 802.5 frame header.
- IEEE 802.1Q encapsulation may also be used, encoding the UPV within the 802.1Q tag.
In all cases, switches are able to recover any UPV supplied by a sender. The same rules apply for 802.12 UPV mapping in a bridge. The only additional information is that normal priority is used by default for UPV values 0 through 4, and high priority is used for UPV levels 5 through 7. This ensures that the default Token Ring UPV level of 4 for 802.5 bridges is mapped to normal priority on 802.12 segments.
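The default mapping just described, together with the packet promotion rule mentioned earlier, can be expressed in a brief, hypothetical sketch (names and the 250 ms midpoint are illustrative; the text gives a 200–300 ms promotion time).

```python
# Hypothetical sketch of the default UPV mapping onto 802.12 demand priority levels.

NORMAL, HIGH = "normal", "high"

def dot12_service_priority(upv: int) -> str:
    """UPVs 0 through 4 map to normal priority; UPVs 5 through 7 map to high priority."""
    assert 0 <= upv <= 7
    return HIGH if upv >= 5 else NORMAL

def promote_if_stale(level: str, waited_ms: float, promotion_ms: float = 250.0) -> str:
    """Normal-priority packets waiting longer than the promotion time become high priority."""
    return HIGH if level == NORMAL and waited_ms > promotion_ms else level

# The default Token Ring bridge priority of 4 therefore stays at normal priority.
assert dot12_service_priority(4) == NORMAL
assert dot12_service_priority(6) == HIGH
assert promote_if_stale(NORMAL, waited_ms=400.0) == HIGH
```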
Integrated services can be built on top of the 802.12 medium access mechanisms. When combined with admission control and bandwidth enforcement mechanisms, delay guarantees as required for a guaranteed service can be provided without any changes to the existing 802.12 MAC protocol. Note that since the 802.12 standard supports the 802.3 and 802.5 frame formats, the same framing overhead issues must be considered in the admission control computations for 802.12 links.
8.3.2 WAN QoS features
SMDS
SMDS provides mechanisms to facilitate QoS for data traffic by supporting a number of access classes to accommodate different traffic profiles and equipment capabilities. The provider configures access classes at the time of subscription. Access classes define a maximum sustained information rate, SIR, as well as the maximum burst size, N, allowed. This is implemented as a leaky bucket scheme. Five access classes, corresponding to sustained information rates 4, 10, 16, 25, and 34 Mbps, are supported for the DS-3 access interface, implemented through credit management algorithms. These algorithms track credit balances for each customer interface. Credit is allocated on a periodic basis, up to some maximum, and as packets are sent to the network, the credit balance is decremented. This credit management scheme essentially constrains the customer's equipment to some sustained or average rate of data transfer. This average rate of transfer is less than the full information-carrying bandwidth of the DS-3 access facility. The credit management scheme is not applied to DS-1 access interfaces.
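The credit management behavior described above amounts to a leaky bucket, as sketched below. This is an illustration only, with hypothetical names and parameters: credit accrues toward a ceiling at the sustained information rate and is spent as data is sent, so the interface cannot exceed its access class average even on a faster access facility.

```python
# Hypothetical sketch of an SMDS-style credit (leaky bucket) manager.

class CreditManager:
    def __init__(self, sir_bps: float, max_credit_bits: float):
        self.sir_bps = sir_bps              # access class sustained information rate
        self.max_credit = max_credit_bits   # ceiling that bounds the permitted burst
        self.credit = max_credit_bits

    def replenish(self, elapsed_s: float):
        # Credit is allocated periodically, up to the maximum.
        self.credit = min(self.max_credit, self.credit + self.sir_bps * elapsed_s)

    def try_send(self, packet_bits: int) -> bool:
        # Sending decrements the balance; without sufficient credit, the packet waits.
        if self.credit >= packet_bits:
            self.credit -= packet_bits
            return True
        return False

# Example: a 10-Mbps access class on a DS-3 access interface.
cm = CreditManager(sir_bps=10e6, max_credit_bits=1e6)
cm.replenish(elapsed_s=0.01)
print(cm.try_send(9000 * 8))   # True while the credit balance permits
```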
Frame Relay
Frame Relay uses a simplified protocol at each switching node, omitting flow control. Performance of Frame Relay networks is, therefore, greatly influenced by the offered load. When the offered load is high, some Frame Relay nodes may become overloaded, resulting in a degraded throughput; hence, additional mechanisms are required to control congestion, as follows:
- Admission control—This ensures that a request for resources, once accepted, is guaranteed. A decision is made whether to accept a new connection request based on the requested traffic descriptor and the network's residual capacity. The traffic descriptor comprises a set of parameters sent to switching nodes at call setup time (or service subscription time), which characterizes the connection's statistical properties. The traffic descriptor consists of the following elements (a conformance sketch follows this list):
  - Committed Information Rate (CIR)—The average rate (in bits per second) at which the network guarantees to transfer information units over a measurement interval, T, where T = Bc/CIR.
  - Committed burst size (Bc)—The maximum number of committed information units that can be transmitted during the interval T.
  - Excess burst size (Be)—The maximum number of uncommitted information units (in bits) that the network will attempt to carry during the interval T.
- The public network is obliged to deliver all data submitted to the network that comes within the user's CIR when the network is operating normally. Therefore, public networks should be sized according to the CIRs from all access devices (the available bandwidth must always be greater than the total CIR). Some service providers will offer an absolute guarantee of data delivery within the CIR period, while others will offer only a probability of guaranteed delivery.
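A minimal sketch of how these parameters interact over the measurement interval T = Bc/CIR is shown below. Traffic within Bc is committed, traffic within Bc + Be is carried on a best-effort basis (discard-eligible), and anything beyond that may be dropped; the function and parameter names are illustrative, not taken from a standard.

```python
# Hypothetical sketch of Frame Relay conformance over one measurement interval.

def classify_interval(bits_offered: int, cir_bps: float, bc_bits: int, be_bits: int):
    """Return (T, committed, excess/discard-eligible, overflow) for one interval."""
    T = bc_bits / cir_bps                            # measurement interval, T = Bc/CIR
    committed = min(bits_offered, bc_bits)           # within the committed burst size
    excess = min(max(bits_offered - bc_bits, 0), be_bits)   # carried but discard-eligible
    overflow = max(bits_offered - bc_bits - be_bits, 0)     # may be discarded outright
    return T, committed, excess, overflow

# Example: CIR = 64 kbps, Bc = 64 kbit, Be = 32 kbit gives T = 1 s; offering 120 kbit
# in that interval yields 64 kbit committed, 32 kbit excess, and 24 kbit overflow.
print(classify_interval(120_000, 64_000, 64_000, 32_000))
```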
ATM
The architecture for services provided at the ATM layer consists of the following five service categories [10]:
- CBR—Constant Bit Rate
- rt-VBR—Real-Time Variable Bit Rate
- nrt-VBR—Non-Real-Time Variable Bit Rate
- UBR—Unspecified Bit Rate
- ABR—Available Bit Rate
These service categories relate traffic characteristics and QoS requirements to network behavior. Service categories are differentiated as real-time or non-real-time. There are two real-time categories, CBR and rt-VBR, distinguished by whether the traffic descriptor contains only the Peak Cell Rate (PCR) or both PCR and the Sustainable Cell Rate (SCR) parameters. The three non-real-time categories are nrt-VBR, UBR, and ABR. All service categories apply to both VCCs and VPCs.
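As a small illustration of this categorization (the data structure and field names are hypothetical, not part of the ATM specifications), the two real-time classes differ only in whether the traffic descriptor carries the Sustainable Cell Rate in addition to the Peak Cell Rate.

```python
# Hypothetical sketch of an ATM traffic descriptor keyed by service category.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AtmTrafficDescriptor:
    category: str                              # "CBR", "rt-VBR", "nrt-VBR", "UBR", or "ABR"
    pcr_cells_per_s: float                     # Peak Cell Rate
    scr_cells_per_s: Optional[float] = None    # Sustainable Cell Rate (VBR categories only)

def is_real_time(td: AtmTrafficDescriptor) -> bool:
    return td.category in ("CBR", "rt-VBR")

cbr = AtmTrafficDescriptor("CBR", pcr_cells_per_s=4_000)
rt_vbr = AtmTrafficDescriptor("rt-VBR", pcr_cells_per_s=4_000, scr_cells_per_s=1_500)
assert is_real_time(cbr) and is_real_time(rt_vbr)
```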
There are no mechanisms defined within ATM to bound error or loss rates, since ATM relies on the quality of the underlying physical infrastructure. Traditional ATM links (such as SONET/SDH and DS3) exhibit very low bit-error rates; however, the amount of cell loss depends very much on the architecture of ATM switches and terminal equipment (some switches are known to lose cells, even at fairly low traffic levels, but this should improve as the technology matures). The ATM layer offers error detection and optional single-bit error correction on cell headers but no payload protection. AAL1 detects errors in sequencing information but does not provide payload protection. AAL3/4 and AAL5 both support error detection, AAL3/4 on a cell-by-cell basis and AAL5 on a complete-packet basis. ATM does not offer an assured service; to date, the only truly reliable end-to-end ATM service is that provided via the Service-Specific Connection-Oriented Protocol (SSCOP), a protocol that runs over AAL5. SSCOP is used by the ATM signaling protocol, Q.2931.
There is an implicit contract that includes specific QoS guarantees for bandwidth, delay, delay variation, and so on with every virtual channel. However, there is not yet consensus on the level of commitment implied by these guarantees or the conditions that may void them. Consequently, experienced quality of service may vary widely from vendor to vendor and from one traffic environment to another. There will have to be agreements made on what constitutes acceptable service for the various classes of ATM and how that service will be provided. Developing mechanisms to ensure quality of service in ATM is one of the most important topics that the ATM Forum is currently investigating.