8.3 LAN and WAN media QoS features

Before dealing with large-scale architectures for QoS, it is worth investigating what facilities are available at the media level for different local and wide area technologies. These facilities represent the lowest interface available for mapping priority and service-quality requests, and, as we will see, features vary markedly.

8.3.1 LAN QoS features

General IEEE 802 service model

The IEEE 802 service model uses an abstraction called the User Priority Value (UPV) to group traffic according to the Class of Service (CoS) required. The UPV may or may not be physically carried over the network, depending upon the facilities offered by the media type, as follows:

The revised IEEE 802.1D standard (incorporating 802.1p and 802.1Q) defines a consistent way to carry the value of the UPV over a bridged network comprising Ethernet, Token Ring, Demand Priority, FDDI, or other MAC layer media, and is described in the following text.

IEEE 802.1p and IEEE 802.1Q

Unlike protocols such as FDDI and Token Ring, Ethernet does not offer any useful priority facilities. To address this problem, IEEE 802.1p (part of a revised 802.1D bridging standard) uses a special 3-bit field called the user priority value. The user priority value is encoded within a tag field defined by another standard, IEEE 802.1Q (also part of the revised 802.1D standard), which is designed to support VLAN tagging via a 32-bit tag header inserted after the frame's MAC header (note that support for IEEE 802.1p does not imply that VLANs need to be implemented). The 802.1Q tag header comprises the following:

The IEEE ratified the 802.1p standard in September 1998. 802.1p offers two major functions, as follows:

UPV to traffic type mappings are listed as follows in increasing priority order (for further information, see [24, 25]). Note that the value 0 (the default, best effort) has a higher priority than value 2 (standard).

This mapping enables support for policy-based QoS by specifying a Class of Service (CoS) on a frame basis. Lower-priority packets are deferred if there are packets waiting in higher-priority queues. This enables the differentiation of real-time data, voice, and video traffic from normal (best effort) data traffic such as e-mail. This solution benefits businesses that want to deploy real-time multimedia traffic and providers offering SLAs that extend into the LAN environment.
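As an illustration of the 802.1Q tag layout described above, the following sketch packs and parses the 32-bit tag header: a 16-bit Tag Protocol Identifier (0x8100) followed by the 16-bit Tag Control Information field holding the 3-bit user priority value, a 1-bit CFI, and a 12-bit VLAN ID. The helper names are illustrative, not from any standard API.

```python
import struct

TPID_8021Q = 0x8100  # Tag Protocol Identifier marking an 802.1Q tag

def pack_vlan_tag(pcp: int, cfi: int, vid: int) -> bytes:
    """Build the 4-byte 802.1Q tag: 16-bit TPID, then the 16-bit TCI
    (3-bit user priority, 1-bit CFI, 12-bit VLAN ID)."""
    if not (0 <= pcp <= 7 and cfi in (0, 1) and 0 <= vid <= 0xFFF):
        raise ValueError("field out of range")
    tci = (pcp << 13) | (cfi << 12) | vid
    return struct.pack("!HH", TPID_8021Q, tci)

def unpack_vlan_tag(tag: bytes):
    """Return (pcp, cfi, vid) from a 4-byte 802.1Q tag."""
    tpid, tci = struct.unpack("!HH", tag)
    if tpid != TPID_8021Q:
        raise ValueError("not an 802.1Q tag")
    return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0xFFF
```

A switch or NIC reading only the priority bits can ignore the VLAN ID, which is why 802.1p support does not require VLANs to be deployed.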

The IEEE specifications make no assumptions about how the UPV is to be used by end stations or by the network, nor do they provide recommendations about how a sender should select the UPV (the interested reader is referred to [26]). In practice the UPV may be set by workstations, servers, routers, or Layer 3 switches. In host devices 802.1p-compliant Network Interface Cards (NICs) are able to set or read the UPV bits. Hubs and switches can use this information to prioritize traffic prior to forwarding to other LAN segments. For example, without any form of prioritization a switch would typically delay or drop packets in response to congestion. On an 802.1p-enabled switch, a packet with a higher priority receives preferential treatment and is serviced before a packet with a lower priority; therefore, lower-priority traffic is more likely to be dropped in preference. The basic operations are as follows:

The general switch algorithm is as follows. Packets are queued within a particular traffic class based on the received UPV, the value of which is either obtained directly from the packet, if an IEEE 802.1Q header or IEEE 802.5 network is used, or is assigned according to some local policy. The queue is selected based on a mapping from UPV (0 through 7) onto the number of available traffic classes. A switch may implement one or more traffic classes. The advertised IntServ parameters and the switch's admission control behavior may be used to determine the mapping from UPV to traffic classes within the switch. A switch is not precluded from implementing other scheduling algorithms, such as weighted fair queuing and round robin.
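The queue-selection step of the switch algorithm above can be sketched as follows. This assumes a simple even split of the eight UPV values across the available traffic classes; the priority ordering reflects the earlier note that UPV 0 (best effort) outranks values 1 and 2, but a real switch would follow the recommended 802.1D mapping tables rather than this illustrative split.

```python
# Relative priority order, lowest to highest. UPV 0 (best effort)
# sits above UPV 1 and 2, per the 802.1p traffic type ordering.
UPV_ORDER = [1, 2, 0, 3, 4, 5, 6, 7]

def upv_to_queue(upv: int, num_queues: int) -> int:
    """Map a user priority value (0-7) onto one of num_queues traffic
    classes by slicing the priority order into contiguous bands.
    Queue 0 is the lowest class. The even split is illustrative only;
    802.1D recommends specific tables per queue count."""
    rank = UPV_ORDER.index(upv)      # 0 = lowest relative priority
    return rank * num_queues // 8
```

With two queues this places UPV 0 through 3 in the lower class and 4 through 7 in the higher one, matching the common two-class configuration.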

Issues with 802.1p

There are several open issues with 802.1p, including the following:

Several of the leading switches now support the standard. For example, 3Com supports 802.1p in several products via its DynamicAccess software with its industry-leading EtherLink and EtherLink Server NICs, at the core with the CoreBuilder 3500 Layer 3 switch, and now in the wiring closet with the new SuperStack II Switch 1100 and 3300 software. At the desktop, Microsoft supports 802.1p in its Windows 98 and Windows 2000 operating systems, alongside other QoS mechanisms such as differentiated services and RSVP. Support for 802.1p priority was included in NDIS 5.0.

Ethernet/IEEE 802.3

There is no explicit traffic class or UPV field carried in Ethernet packets. This means that UPV must be regenerated at a downstream receiver or switch according to some defaults or by parsing further into higher-layer protocol fields in the packet. Alternatively, IEEE 802.1p with 802.1Q encapsulation may be used to provide an explicit UPV field on top of the basic MAC frame format. For the different IP packet encapsulations used over Ethernet/IEEE 802.3, it will be necessary to adjust any admission control calculations according to the framing and padding requirements, as listed in the following chart.

Encapsulation                  Frame Overhead    IP MTU
EtherType                      18 bytes          1,500 bytes
EtherType + IEEE 802.1D/Q      22 bytes          1,500 bytes
EtherType + LLC/SNAP           24 bytes          1,492 bytes

Note that the packet length of an Ethernet frame using the IEEE 802.1Q specification exceeds the current IEEE 802.3 MTU value (1,518 bytes) by 4 bytes. The change of maximum MTU size for IEEE 802.1Q frames is being accommodated by IEEE 802.3ac [28].
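As a sketch of the admission control adjustment mentioned above, the function below scales an IP-layer rate to the corresponding Ethernet line rate using a per-frame overhead from the chart. The function name and the minimum-payload padding parameter (46 bytes, Ethernet's minimum payload before padding) are assumptions for illustration.

```python
def line_rate_bps(ip_rate_bps: float, ip_packet_size: int,
                  frame_overhead: int, min_payload: int = 46) -> float:
    """Scale an IP-layer rate (bits/s) up to the MAC-layer line rate,
    accounting for per-frame overhead (header + FCS, per the chart
    above) and Ethernet's minimum-payload padding requirement."""
    payload = max(ip_packet_size, min_payload)   # short packets are padded
    packets_per_sec = ip_rate_bps / (ip_packet_size * 8)
    return packets_per_sec * (payload + frame_overhead) * 8
```

For small packets the relative overhead is substantial: a 1-Mbps stream of 64-byte IP packets with the 18-byte EtherType overhead consumes roughly 1.28 Mbps on the wire.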

Token Ring/IEEE 802.5

The Token Ring standard [29] provides a priority mechanism to control both the queuing of packets for transmission and the access of packets to the shared media. The access priority features are an integral part of the token-passing protocol. The Token Ring priority mechanisms are implemented using bits within the Access Control (AC) and the Frame Control (FC) fields in either a token or a frame. Token Ring has the following characteristics:

A Token Ring station is theoretically capable of separately queuing each of the eight levels of requested UPV and then transmitting frames in order of priority. Table 8.3 lists the recommended use of the priority levels (note that different implementations of Token Ring may deviate from these definitions). A station sets reservation bits according to the UPV of frames that are queued for transmission in the highest-priority queue. This allows the access mechanism to ensure that the frame with the highest priority throughout the entire ring will be transmitted before any lower-priority frame.

Table 8.3: Recommended Use of Token Ring User Priority

Priority Bit Settings
Dec   Binary   Description
0     000      Normal user priority - non-time-critical data
1     001      Normal user priority - non-time-critical data
2     010      Normal user priority - non-time-critical data
3     011      Normal user priority - non-time-critical data
4     100      LAN management
5     101      Time-sensitive data
6     110      Real-time-critical data
7     111      MAC frames

To prevent a high-priority station from monopolizing the LAN medium, and to ensure that the ring priority can eventually be lowered again, the protocol provides fairness within each priority level. To reduce frame jitter associated with high-priority traffic, Annex I to the IEEE 802.5 Token Ring standard recommends that stations transmit only one frame per token and that the maximum information field size be 4,399 bytes whenever delay-sensitive traffic is traversing the ring. Most existing implementations of Token Ring bridges forward all LLC frames with a default access priority of 4. Annex I recommends that bridges forward LLC frames that have a UPV greater than 4 with a reservation equal to the UPV (although the draft IEEE 802.1D [30] permits network management to override this behavior). The capabilities provided by the Token Ring architecture, such as user priority and reserved priority, can provide effective support for integrated services flows that require QoS guarantees.

For the different IP packet encapsulations used over Token Ring/IEEE 802.5, it will be necessary to adjust any admission control calculations according to the framing requirements listed in the following chart.

Encapsulation                  Frame Overhead    IP MTU
EtherType + IEEE 802.1D/Q      29 bytes          4,370 bytes
EtherType + LLC/SNAP           25 bytes          4,370 bytes

Note that the suggested MTU specified in RFC 1042 [24] is 4,464 bytes, but discovering the maximum MTU supported between any two points, both within and between Token Ring subnets, is problematic. The MTU reported here is consistent with the IEEE 802.5 Annex I recommendation.

Fiber Distributed Data Interface (FDDI)

The FDDI standard [29] provides a priority mechanism that can be used to control both packet queuing and access to the shared media. The priority mechanisms are implemented using mechanisms similar to Token Ring; however, access priority is based upon timers rather than token manipulation. For the discussion of QoS mechanisms, FDDI is treated as a 100-Mbps Token Ring technology using a service interface compatible with IEEE 802 networks. FDDI supports real-time allocation of network bandwidth, making it suitable for a variety of different applications, including interactive multimedia. FDDI defines two main traffic classes, as follows:

Synchronous data traffic is supported through strict media access and delay guarantees and is allowed to consume a fixed proportion of the network bandwidth, leaving the remainder for asynchronous traffic. Stations that require standard client/server or file transfer applications typically use FDDI's asynchronous bandwidth. The FDDI priority mechanism can lock out stations that cannot use synchronous bandwidth and have too low an asynchronous priority.
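The fixed synchronous share described above suggests a simple admission check for new synchronous requests. The sketch below is hypothetical: the function name and the sync_fraction policy knob are illustrative, not values from the FDDI standard.

```python
def admit_synchronous(existing_bps, new_request_bps,
                      ring_bps=100_000_000, sync_fraction=0.8):
    """Admit a new synchronous bandwidth request only if the total
    synchronous allocation stays within the fixed share of the
    100-Mbps ring reserved for synchronous traffic; the remainder
    is left for asynchronous traffic."""
    total = sum(existing_bps) + new_request_bps
    return total <= ring_bps * sync_fraction
```

Once admitted, a station's synchronous allocation is protected by the timed-token access mechanism, while asynchronous stations contend for whatever bandwidth remains.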

100BaseVG/IEEE 802.12

IEEE 802.12 is a standard for a shared 100-Mbps LAN and is specified in [31]. The MAC protocol for 802.12 is called demand priority. It supports two service priority levels: normal priority and high priority (ensuring guaranteed bandwidth). Data packets from all network nodes (hosts, bridges, and switches) are served using a simple round-robin algorithm. Demand priority enables data transfer with very low latency across a LAN hub by using on-the-fly packet transfer. The demand priority protocol supports guaranteed bandwidth through a priority arrangement, as follows:

Demand priority is deterministic in that it ensures that all high-priority frames have strict priority over frames with normal priority, and even normal-priority packets have a maximum guaranteed access time to the medium: when a normal-priority packet has been waiting at the head of an output queue for longer than the packet promotion delay (i.e., 200–300 ms), its priority is automatically promoted to high [31].
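The packet promotion rule can be sketched as an aging check on a queued normal-priority packet. The names and the 250-ms constant (chosen from within the 200–300 ms range cited above) are illustrative.

```python
PROMOTION_DELAY = 0.25  # seconds; 802.12 cites roughly 200-300 ms

def effective_priority(priority: str, enqueued_at: float,
                       now: float) -> str:
    """Promote a normal-priority packet that has waited at the head
    of its output queue for longer than the promotion delay, so that
    normal-priority traffic retains a bounded access time even under
    sustained high-priority load."""
    if priority == "normal" and now - enqueued_at > PROMOTION_DELAY:
        return "high"
    return priority
```

This bounded-wait promotion is what makes demand priority deterministic for both service levels rather than only for high-priority traffic.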

Essentially there are three mechanisms for mapping UPV values onto 802.12 frames, as follows:

In all cases, switches are able to recover any UPV supplied by a sender. The same rules apply for 802.12 UPV mapping in a bridge. The only additional information is that normal priority is used by default for UPV values 0 through 4, and high priority is used for UPV levels 5 through 7. This ensures that the default Token Ring UPV level of 4 for 802.5 bridges is mapped to normal priority on 802.12 segments.

Integrated services can be built on top of the 802.12 medium access mechanisms. When combined with admission control and bandwidth enforcement mechanisms, delay guarantees as required for a guaranteed service can be provided without any changes to the existing 802.12 MAC protocol. Note that since the 802.12 standard supports the 802.3 and 802.5 frame formats, the same framing overhead issues must be considered in the admission control computations for 802.12 links.

8.3.2 WAN QoS features

SMDS

SMDS provides mechanisms to facilitate QoS for data traffic by supporting a number of access classes to accommodate different traffic profiles and equipment capabilities. The provider configures access classes at the time of subscription. Access classes define a maximum sustained information rate, SIR, as well as the maximum burst size, N, allowed. This is implemented as a leaky bucket scheme. Five access classes, corresponding to sustained information rates 4, 10, 16, 25, and 34 Mbps, are supported for the DS-3 access interface, implemented through credit management algorithms. These algorithms track credit balances for each customer interface. Credit is allocated on a periodic basis, up to some maximum, and as packets are sent to the network, the credit balance is decremented. This credit management scheme essentially constrains the customer's equipment to some sustained or average rate of data transfer. This average rate of transfer is less than the full information-carrying bandwidth of the DS-3 access facility. The credit management scheme is not applied to DS-1 access interfaces.
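The credit management scheme described above can be sketched as follows; the class and parameter names are illustrative, not taken from the SMDS specification.

```python
class CreditManager:
    """Sketch of an SMDS-style credit scheme: credit accrues
    periodically up to a ceiling and is spent as packets are sent,
    constraining the interface to a sustained average rate below
    the raw access-line bandwidth."""

    def __init__(self, rate_bytes_per_tick: int, max_credit: int):
        self.rate = rate_bytes_per_tick
        self.max_credit = max_credit
        self.credit = max_credit

    def tick(self):
        """Periodic credit allocation, capped at the maximum balance."""
        self.credit = min(self.credit + self.rate, self.max_credit)

    def try_send(self, packet_bytes: int) -> bool:
        """Send only while the balance covers the packet; otherwise
        the packet must wait for further credit."""
        if self.credit >= packet_bytes:
            self.credit -= packet_bytes
            return True
        return False
```

The maximum credit balance plays the role of the burst limit N, while the per-tick allocation sets the sustained information rate.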

Frame Relay

Frame Relay uses a simplified protocol at each switching node, omitting flow control. Performance of Frame Relay networks is, therefore, greatly influenced by the offered load. When the offered load is high, some Frame Relay nodes may become overloaded, resulting in a degraded throughput; hence, additional mechanisms are required to control congestion, as follows:

The public network is obliged to deliver all data submitted to the network that comes within the user's CIR when the network is operating normally. Therefore, public networks should be sized according to the CIRs from all access devices (the available bandwidth must always be greater than the total CIR). Some service providers will offer an absolute guarantee of data delivery within the CIR period, while others will offer only a probability of guaranteed delivery.
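As an illustration of how delivery within the CIR is typically policed, the sketch below classifies a frame against a committed burst (Bc) and excess burst (Be) for a measurement interval, marking excess traffic discard-eligible. Bc, Be, and DE marking are standard Frame Relay practice, though the text above discusses only the CIR guarantee; the function name is illustrative.

```python
def classify_frame(bytes_sent_this_interval: int, frame_bytes: int,
                   bc_bytes: int, be_bytes: int) -> str:
    """Classify a frame for the current measurement interval:
    within the committed burst (Bc) it is delivered as committed;
    within Bc + Be it is forwarded but marked discard-eligible (DE),
    so it is dropped first under congestion; beyond that it is
    discarded at the ingress."""
    total = bytes_sent_this_interval + frame_bytes
    if total <= bc_bytes:
        return "committed"
    if total <= bc_bytes + be_bytes:
        return "discard-eligible"
    return "drop"
```

Since Bc = CIR x Tc for a measurement interval Tc, sizing the trunk for the sum of all CIRs guarantees only the committed portion; discard-eligible traffic rides on spare capacity.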

ATM

The architecture for services provided at the ATM layer consists of the following five service categories [10]:

These service categories relate traffic characteristics and QoS requirements to network behavior. Service categories are differentiated as real-time or non-real-time. There are two real-time categories, CBR and rt-VBR, distinguished by whether the traffic descriptor contains only the Peak Cell Rate (PCR) or both PCR and the Sustainable Cell Rate (SCR) parameters. The three non-real-time categories are nrt-VBR, UBR, and ABR. All service categories apply to both VCCs and VPCs.
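The distinctions drawn above can be captured in a toy classifier: real-time traffic with only a PCR is CBR, real-time traffic with both PCR and SCR is rt-VBR, and the non-real-time split among nrt-VBR, ABR, and UBR is simplified here to whether an SCR is present or the ABR feedback mechanism is in use.

```python
def service_category(realtime: bool, has_scr: bool,
                     is_abr: bool = False) -> str:
    """Map a simplified traffic descriptor onto one of the five ATM
    service categories. The real-time branch follows the PCR/SCR
    distinction above; the non-real-time branch is a simplification
    for illustration."""
    if realtime:
        return "rt-VBR" if has_scr else "CBR"
    if is_abr:
        return "ABR"
    return "nrt-VBR" if has_scr else "UBR"
```

In practice the category is chosen at connection setup and signaled with the full traffic descriptor, for either a VCC or a VPC.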

There are no mechanisms defined within ATM to bound error or loss rates, since ATM relies on the quality of the underlying physical infrastructure. Traditional ATM links (such as SONET/SDH and DS3) exhibit very low bit-error rates; however, the amount of cell loss depends very much on the architecture of ATM switches and terminal equipment (some switches are known to lose cells, even at fairly low traffic levels, but this should improve as the technology matures). The ATM layer offers error detection and single-bit error correction (optional) on cell headers but no payload protection. AAL1 detects errors in sequencing information but does not provide payload protection. AAL3/4 and AAL5 both support error detection, 3/4 on a cell-by-cell basis and 5 on a complete-packet basis. ATM does not offer an assured service; to date, the only truly reliable end-to-end ATM service is that provided via the Service-Specific Connection-Oriented Protocol (SSCOP), a protocol that runs over AAL5. SSCOP is used by the ATM signaling protocol, Q.2931.

There is an implicit contract that includes specific QoS guarantees for bandwidth, delay, delay variation, and so on with every virtual channel. However, there is not yet consensus on the level of commitment implied by these guarantees or the conditions that may void them. Consequently, experienced quality of service may vary widely from vendor to vendor and from one traffic environment to another. There will have to be agreements made on what constitutes acceptable service for the various classes of ATM and how that service will be provided. Developing mechanisms to ensure quality of service in ATM is one of the most important topics that the ATM Forum is currently investigating.