Wide Area Network (WAN) Technologies

Overview

To successfully troubleshoot TCP/IP problems on a wide area network (WAN), it is important to understand how IP datagrams and Address Resolution Protocol (ARP) messages are encapsulated by a computer running a member of the Microsoft Windows Server 2003 family or Windows XP that uses a WAN technology such as T-carrier, Public Switched Telephone Network (PSTN), Integrated Services Digital Network (ISDN), X.25, frame relay, or Asynchronous Transfer Mode (ATM). It is also important to understand WAN technology encapsulations to interpret the WAN encapsulation portions of a frame when using Microsoft Network Monitor or other types of WAN frame capture programs or facilities.

WAN Encapsulations

As discussed in Chapter 1, "Local Area Network (LAN) Technologies," IP datagrams are OSI (Open Systems Interconnection) Network Layer entities that require a Data Link Layer encapsulation before being sent on a physical medium. For WAN technologies, the Data Link Layer encapsulation provides the following services:

- Delimitation: Marking where each frame begins and ends
- Protocol identification: Identifying the Network Layer protocol of the frame's payload
- Addressing: Identifying the node or virtual circuit to which the frame is being sent
- Bit-level integrity: Verifying through a checksum that the frame was not corrupted in transit

This chapter discusses WAN technologies and their encapsulations for IP datagrams and ARP messages. WAN encapsulations are divided into two categories based on the types of IP networks of the WAN link:

- Point-to-point links, such as analog phone line and ISDN connections, which use SLIP or PPP encapsulation
- Nonbroadcast multiple access (NBMA) links, such as X.25, frame relay, and ATM, which use an encapsulation specific to the WAN technology

Point-to-Point Encapsulation

The two most prominent industry standard encapsulations for sending IP datagrams over a point-to-point link are Serial Line Internet Protocol (SLIP) and Point-to-Point Protocol (PPP).

SLIP

As RFC 1055 describes, SLIP is a very simple packet-framing protocol that offers only frame delimitation services. SLIP does not provide protocol identification or bit-level integrity verification services. SLIP was designed to be easy to implement for links that did not require these types of services.

  More Info

SLIP is described in RFC 1055, which can be found in the Rfc folder on the companion CD-ROM.

To delimit IP datagrams, SLIP uses a special character called the END character (0xC0), which is placed at the beginning and end of each IP datagram. Successive IP datagrams have two END characters between them: one to mark the end of one datagram and one to mark the beginning of another.

The END character presents a problem because if it occurs within the IP datagram and is sent unmodified, the receiving node interprets the END character as the marker for the end of the IP datagram. If this happens, the originally sent IP datagram is truncated and is eventually discarded because of failed checksums in the IP header and upper layer protocol headers. Figure 2-1 shows a SLIP-encapsulated IP datagram.

Figure 2-1: SLIP encapsulation, showing the simple frame delimitation services for an IP datagram.

To prevent the occurrence of the END character within the IP datagram, SLIP uses a technique called character stuffing. The END character is escaped, or replaced, with a sequence beginning with another special character called the ESC (0xDB) character. The SLIP ESC character has no relation to the American Standard Code for Information Interchange (ASCII) ESC character.

If the END character occurs within the original IP datagram, it is replaced with the sequence 0xDB-DC. To prevent the misinterpretation of the ESC character by the receiving node, if the ESC (0xDB) character occurs within the original IP datagram, it is replaced with the sequence 0xDB-DD. Therefore:

- The END character (0xC0) is sent as the sequence 0xDB-DC.
- The ESC character (0xDB) is sent as the sequence 0xDB-DD.

Figure 2-2 shows SLIP character stuffing.

Figure 2-2: SLIP character stuffing, showing the escaping of the END and ESC characters within an IP datagram.
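The substitution rules above are simple enough to express in a few lines of code. The following Python sketch is an illustration only (not Windows source code); it applies SLIP character stuffing to an outgoing IP datagram and frames it with END characters:

```python
# Minimal sketch of SLIP framing and character stuffing (RFC 1055).
SLIP_END = 0xC0      # marks the beginning and end of a datagram
SLIP_ESC = 0xDB      # introduces an escape sequence
SLIP_ESC_END = 0xDC  # 0xDB-DC replaces an END byte in the payload
SLIP_ESC_ESC = 0xDD  # 0xDB-DD replaces an ESC byte in the payload

def slip_frame(datagram: bytes) -> bytes:
    """Escape END and ESC bytes, then delimit the datagram with END."""
    out = bytearray([SLIP_END])
    for byte in datagram:
        if byte == SLIP_END:
            out += bytes([SLIP_ESC, SLIP_ESC_END])
        elif byte == SLIP_ESC:
            out += bytes([SLIP_ESC, SLIP_ESC_ESC])
        else:
            out.append(byte)
    out.append(SLIP_END)
    return bytes(out)

# Example: a payload containing both special characters.
print(slip_frame(bytes([0x45, 0xC0, 0xDB])).hex('-'))
# -> c0-45-db-dc-db-dd-c0
```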

As RFC 1055 describes, the maximum size of an IP datagram over a SLIP connection is 1006 bytes, the size imposed by the Berkeley UNIX drivers that existed when the RFC was written. Most systems adhere to the industry standard maximum size of 1006 bytes. However, some systems, such as those running the Windows Server 2003 family or Windows XP, allow a maximum packet size of 1500 bytes over a SLIP connection to prevent fragmentation of IP datagrams when SLIP links are used in conjunction with Ethernet network segments.

Although SLIP does not provide for the negotiation of compression methods during the connection setup, SLIP does support a compression scheme known as Compressed SLIP (C-SLIP).

  More Info

RFC 1144 describes C-SLIP and how it is used to compress IP and TCP headers to a 3- to 5-byte header on the SLIP link. This RFC can be found in the Rfc folder on the companion CD-ROM.

Dial-up networking connections for the Windows Server 2003 family and Windows XP can use SLIP and C-SLIP to create SLIP-based remote access connections to a network access server. The incoming connections feature of the Network Connections folder and the Windows Server 2003 family Routing and Remote Access service does not support SLIP or C-SLIP.

PPP

PPP is a standardized point-to-point network encapsulation method that addresses the shortcomings of SLIP and provides Data Link Layer functionality comparable to LAN encapsulations. PPP provides frame delimitation, protocol identification, and bit-level integrity services.

  More Info

PPP is described in RFC 1661, which can be found in the Rfc folder on the companion CD-ROM.

RFC 1661 describes PPP as a suite of protocols that provide the following:

- A Data Link Layer encapsulation method for sending datagrams over point-to-point links
- A Link Control Protocol (LCP) for establishing, configuring, and testing the Data Link connection
- A family of Network Control Protocols (NCPs) for configuring Network Layer protocols such as IP

This chapter discusses only the Data Link Layer encapsulation. Chapter 4, "Point-to-Point Protocol (PPP)," describes LCP and the NCPs needed for IP connectivity.

PPP encapsulation and framing is based on the International Organization for Standardization (ISO) High-Level Data Link Control (HDLC) protocol. HDLC was derived from the Synchronous Data Link Control (SDLC) protocol developed by IBM for the Systems Network Architecture (SNA) protocol suite. HDLC encapsulation for PPP frames is shown in Figure 2-3.

Figure 2-3: PPP encapsulation using HDLC framing for an IP datagram.

  More Info

HDLC encapsulation for PPP frames is described in RFC 1662, which can be found in the Rfc folder on the companion CD-ROM.

The fields in the PPP header and trailer are defined as follows:

- Flag: Set to 0x7E, marking the beginning and end of the PPP frame.
- Address: Set to 0xFF, the HDLC broadcast address. Individual station addresses are not needed on a point-to-point link.
- Control: Set to 0x03, indicating an HDLC Unnumbered Information (UI) frame.
- Protocol: A 2-byte field identifying the protocol of the PPP payload, such as 0x00-21 for an IP datagram.
- Frame Check Sequence (FCS): A 16-bit checksum used to verify the bit-level integrity of the PPP frame.

The HDLC encapsulation for PPP frames is also used for Asymmetric Digital Subscriber Line (ADSL) broadband Internet connections.
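To make the byte layout concrete, here is a minimal Python sketch that builds an uncompressed PPP-in-HDLC frame around an IP datagram and computes the 16-bit FCS using the shift-and-XOR algorithm from RFC 1662. The constants (0x7E, 0xFF, 0x03, and the Protocol value 0x00-21 for IP) are the standard PPP values; the function names are hypothetical, and the sketch omits the character or bit stuffing that would be applied before transmission:

```python
import struct

def ppp_fcs16(data: bytes) -> int:
    """16-bit FCS as specified in RFC 1662 (reflected CRC-CCITT)."""
    fcs = 0xFFFF
    for byte in data:
        fcs ^= byte
        for _ in range(8):
            fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
    return fcs ^ 0xFFFF

def ppp_hdlc_frame(ip_datagram: bytes) -> bytes:
    """Build an uncompressed PPP frame: Flag, Address, Control, Protocol,
    payload, FCS, Flag. Stuffing would be applied to the result afterward."""
    body = bytes([0xFF, 0x03]) + struct.pack('!H', 0x0021) + ip_datagram
    fcs = ppp_fcs16(body)              # FCS covers Address through payload
    # The FCS is transmitted least significant byte first.
    return bytes([0x7E]) + body + struct.pack('<H', fcs) + bytes([0x7E])
```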

Figure 2-4 shows a typical PPP framing for an IP datagram when using Address and Control field suppression and Protocol field compression.

Figure 2-4: Typical PPP encapsulation for an IP datagram.

This abbreviated form of PPP framing is a result of the following:

- Address and Control field suppression, an LCP option through which the peers agree not to send the Address (0xFF) and Control (0x03) fields, because their values never vary.
- Protocol field compression, an LCP option through which the 2-byte Protocol field is shortened to 1 byte for protocol values less than 0x01-00, such as 0x21 for IP.

PPP on Asynchronous Links

As in SLIP, PPP on asynchronous links such as analog phone lines uses character stuffing to prevent the occurrence of the FLAG character within the PPP payload. The FLAG character is escaped, or replaced, with a sequence beginning with another special character called the ESC (0x7D) character. The PPP ESC character has no relation to the ASCII or SLIP ESC character.

If the FLAG character occurs within the original IP datagram, it is replaced with the sequence 0x7D-5E. To prevent the misinterpretation of the ESC character by the receiving node, if the ESC (0x7D) character occurs within the original IP datagram, it is replaced with the sequence 0x7D-5D. Therefore:

- The FLAG character (0x7E) is sent as the sequence 0x7D-5E.
- The ESC character (0x7D) is sent as the sequence 0x7D-5D.

Additionally, character stuffing is used to stuff characters with values less than 0x20 (32 in decimal notation) to prevent these characters from being misinterpreted as control characters when software flow control is used over asynchronous links. The escape sequence for these characters is 0x7D-x, where x is the original character with the fifth bit set to 1. The fifth bit is defined as the third bit from the high-order bit using the bit position designation of 7-6-5-4-3-2-1-0. Therefore, the character 0x11 (bit sequence 0-0-0-1-0-0-0-1) would be escaped to the sequence 0x7D-31 (bit sequence 0-0-1-1-0-0-0-1).

The use of character stuffing for characters less than 0x20 is negotiated using the Asynchronous Control Character Map (ACCM) LCP option. This LCP option uses a 32-bit bitmap to indicate exactly which character values need to be escaped.
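The transmit-side escaping can be sketched as follows. This is an illustration only; it assumes the default ACCM of 0xFFFFFFFF (escape every control character below 0x20), and the function name is hypothetical:

```python
PPP_FLAG = 0x7E
PPP_ESC = 0x7D

def ppp_async_escape(frame_body: bytes, accm: int = 0xFFFFFFFF) -> bytes:
    """Escape FLAG, ESC, and any control character selected by the 32-bit ACCM.
    Escaping XORs the original character with 0x20 (sets the fifth bit)."""
    out = bytearray()
    for byte in frame_body:
        if byte in (PPP_FLAG, PPP_ESC) or (byte < 0x20 and accm & (1 << byte)):
            out += bytes([PPP_ESC, byte ^ 0x20])
        else:
            out.append(byte)
    return bytes(out)

print(ppp_async_escape(bytes([0x7E, 0x7D, 0x11])).hex('-'))
# -> 7d-5e-7d-5d-7d-31 (matches the 0x11 example in the text)
```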

  More Info

For more information on the ACCM LCP option, see RFCs 1661 and 1662. These can be found in the Rfc folder on the companion CD-ROM.

PPP on Synchronous Links

Character stuffing is an inefficient method of escaping the FLAG character. If the PPP payload consists of a stream of 0x7E characters, character stuffing roughly doubles the size of the PPP frame as it is sent on the medium. For asynchronous, byte-boundary media such as analog phone lines, character stuffing is the only alternative.

On synchronous links such as T-carrier, ISDN, and Synchronous Optical Network (SONET), a technique called bit stuffing is used to mark the location of the FLAG character. Recall that the FLAG character is 0x7E, or the bit sequence 01111110. With bit stuffing, the only time six 1 bits in a row are allowed is for the FLAG character as it marks the start and end of a PPP frame. Throughout the rest of the PPP frame, whenever five 1 bits in a row occur, a 0 bit is inserted into the bit stream by the synchronous link hardware. The bit sequence 111110 is therefore stuffed to produce 1111100, and the bit sequence 111111 is stuffed to become 1111101. As a result, six 1 bits in a row cannot occur except in the FLAG character when it marks the start and end of a PPP frame. If the FLAG character does occur within the PPP frame, it is bit stuffed to produce the bit sequence 011111010.

Bit stuffing is much more efficient than character stuffing. If stuffed, a single byte becomes 9 bits, not 16 bits, as is the case with character stuffing. With synchronous links and bit stuffing, data sent no longer falls along byte boundaries. A single byte sent can be encoded as either 8 or 9 bits, depending on the presence of a run of five consecutive 1 bits within the byte.
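Although bit stuffing is performed by the synchronous link hardware, the rule itself can be illustrated with a short Python sketch that operates on the frame as a string of bits:

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 bit after every run of five consecutive 1 bits."""
    out = []
    ones = 0
    for bit in bits:
        out.append(bit)
        if bit == '1':
            ones += 1
            if ones == 5:
                out.append('0')  # stuffed bit
                ones = 0
        else:
            ones = 0
    return ''.join(out)

print(bit_stuff('01111110'))  # a FLAG byte inside a frame -> 011111010
print(bit_stuff('111110'))    # -> 1111100
print(bit_stuff('111111'))    # -> 1111101
```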

PPP Maximum Receive Unit

The maximum-sized PPP frame, the maximum transmission unit (MTU) for a PPP link, is known as the Maximum Receive Unit (MRU). The default value for the PPP MRU is 1500 bytes. The MRU for a PPP connection can be negotiated to a lower or higher value using the Maximum Receive Unit LCP option. If an MRU is negotiated to a value lower than 1500 bytes, a 1500-byte MRU must still be supported in case the link has to be resynchronized.

PPP Multilink Protocol

The PPP Multilink Protocol (MP) is an extension to PPP that allows you to bundle or aggregate the bandwidth of multiple physical connections. It is supported by the Windows Server 2003 family and Windows XP Network Connections and the Windows Server 2003 family Routing and Remote Access service. MP takes multiple physical connections and makes them appear as a single logical link. For example, with MP, two analog phone lines operating at 28.8 Kbps appear as a single connection operating at 57.6 Kbps. Another example is the aggregation of multiple channels of an ISDN Basic Rate Interface (BRI) or Primary Rate Interface (PRI) line. In the case of a BRI line, MP makes the two 64-Kbps BRI B-channels appear as a single connection operating at 128 Kbps.

  More Info

MP is described in RFC 1990, which can be found in the Rfc folder on the companion CD-ROM.

MP is an extra layer of encapsulation that operates within a PPP payload. To identify an MP packet, the PPP Protocol field is set to 0x00-3D. The payload of an MP packet is a PPP frame or the fragment of a PPP frame. If the size of the PPP payload that would be sent on a single-link PPP connection, plus the additional MP header, is greater than the MRU for the specific physical link over which the MP packet is sent, MP fragments the PPP payload.

MP fragmentation divides the PPP payload along boundaries that will fit within the link's MRU. The fragments are sent in sequence using an incrementing sequence number, and flags are used to indicate the first and last fragments of an original PPP payload. A lost MP fragment causes the entire original PPP payload to be silently discarded.

MP encapsulation has two different forms: the long sequence number format (shown in Figure 2-5) and the short sequence number format. The long sequence number format adds 4 bytes of overhead to the PPP payload.

Figure 2-5: The Multilink Protocol header, using the long sequence number format.

The fields in the MP long sequence number format header are defined as follows:

- Beginning Fragment (B): A 1-bit flag that is set to 1 on the first fragment of the original PPP payload and set to 0 otherwise.
- Ending Fragment (E): A 1-bit flag that is set to 1 on the last fragment of the original PPP payload and set to 0 otherwise.
- Reserved: 6 bits that are reserved and set to 0.
- Sequence Number: A 24-bit number that is incremented for each fragment sent.

Figure 2-6 shows the short sequence number format, which adds 2 bytes of overhead to the PPP payload.

Figure 2-6: The Multilink Protocol header, using the short sequence number format.

The short sequence format has only 2 reserved bits, and its Sequence Number field is only 12 bits long. The long sequence number format is used by default unless the Short Sequence Number Header Format LCP option is used during the LCP negotiation.
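As an illustration of the fragmentation and header layout described above, the following Python sketch splits a PPP payload into MP packets using the long sequence number format. The function name and MRU handling are assumptions made for the example; a real implementation also distributes the fragments across the physical links of the bundle:

```python
import struct

MP_PROTOCOL = 0x003D  # PPP Protocol value for Multilink

def mp_fragment(ppp_payload: bytes, link_mru: int, first_seq: int) -> list[bytes]:
    """Split a PPP payload into MP packets using the long sequence number
    format (1 byte of flags plus a 3-byte sequence number)."""
    max_data = link_mru - 4   # room left for the 4-byte MP header on each link
    chunks = [ppp_payload[i:i + max_data]
              for i in range(0, len(ppp_payload), max_data)]
    packets = []
    for i, chunk in enumerate(chunks):
        flags = (0x80 if i == 0 else 0) | (0x40 if i == len(chunks) - 1 else 0)
        seq = (first_seq + i) & 0xFFFFFF
        # 2-byte Protocol field (0x00-3D), flags byte, 24-bit sequence number.
        header = struct.pack('!HB', MP_PROTOCOL, flags) + seq.to_bytes(3, 'big')
        packets.append(header + chunk)
    return packets
```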

X.25

In the 1970s, a standard set of protocols known as X.25 was created to provide users with a standard way to send packetized data across a packet-switched public data network (PSPDN). Until X.25, PSPDNs and their interfaces were proprietary and completely incompatible. Changing PSPDN vendors meant purchasing new Public Data Network (PDN) interfacing equipment. X.25 is an international standard, as specified by the International Telecommunication Union-Telecommunication Standardization Sector (ITU-T).

X.25 was developed during a time when the telecommunication infrastructure was largely based on noisy copper cabling. A typical use for PSPDNs at that time was the communication of a dumb terminal with a mainframe computer. Errors in transmission because of noisy cabling could not be recovered by dumb terminal equipment. Therefore, X.25 was designed to provide a reliable data transfer service—an unusual feature for a Data Link Layer protocol. All data sent to the PSPDN using X.25 was reliably received and forwarded to the desired endpoint. The reliable service of X.25 typically is not needed for the communication of more intelligent endpoints using protocol suites such as TCP/IP. However, X.25 is still used as a WAN technology over which to send TCP/IP data because of its international availability.

As Figure 2-7 shows, X.25 defines the interface between data terminal equipment (DTE) and data circuit-terminating equipment (DCE). A DTE can be a terminal that does not implement the complete X.25 functionality; as such, it is known as a nonpacket mode DTE. A nonpacket mode DTE is connected to a DCE through a translation device called a packet assembler/disassembler (PAD). X.25 does not attempt to define the nature of the DCE-to-DCE communication within the PSPDN. These details are left to the X.25 vendor.

Figure 2-7: The X.25 WAN service, showing DTE, DCE, PAD, and the X.25 interface to the PSPDN.

End-to-end communication between DTEs is accomplished through a bidirectional and full-duplex logical connection called a virtual circuit. Virtual circuits permit communication between DTEs without the use of dedicated circuits. Data is sent as it is produced, using the bandwidth of the PDN infrastructure more efficiently. X.25 can support permanent virtual circuits (PVCs) or switched virtual circuits (SVCs). A PVC is a path through a packet-switching network that is statically programmed into the switches. An SVC is a path through a packet-switching network that is negotiated using a signaling protocol each time a connection is initiated.

Once a virtual circuit is established, a DTE sends a packet to the other end of a virtual circuit using an X.25 virtual-circuit identifier called the Logical Channel Number (LCN). The DCE uses the LCN to forward the packet within the PDN to the appropriate destination DCE.

X.25 encompasses the Physical, Data Link, and Network Layers of the OSI model: a physical interface specification at the Physical Layer, the Link Access Procedure-Balanced (LAPB) protocol at the Data Link Layer, and the Packet Layer Protocol (PLP) at the Network Layer.

Although X.25 is defined at the Physical, Data Link, and Network Layers of the OSI model, relative to sending IP datagrams, X.25 is a Data Link and Physical Layer technology.

Typical packet sizes for X.25 PSPDNs are 128, 256, or 512 bytes. User information, such as an IP datagram, that is larger than the packet size of the X.25 PSPDN is segmented by X.25 and reliably reassembled.

X.25 Encapsulation

X.25 encapsulation can take two different forms:

- An encapsulation for virtual circuits that carry only IP traffic, in which no protocol identifier is sent with each packet
- A multiprotocol encapsulation, described in RFC 1356, in which a Network Layer Protocol Identifier (NLPID) identifies the protocol of each packet

Figure 2-8 shows the X.25 encapsulation for IP datagrams on a multiprotocol link.

Figure 2-8: X.25 encapsulation of IP datagrams for a multiprotocol link.

NLPID

For multiprotocol virtual circuits, the 1-byte NLPID field is present and set to 0xCC to indicate an IP datagram. For a single protocol virtual circuit, the NLPID field is not present. If the IP datagram is fragmented across multiple X.25 packets, the NLPID is treated as part of the data and is therefore carried only in the first packet.

PLP Header

The fields in the X.25 PLP header are defined as follows:

- General Format Identifier (GFI): A 4-bit field that indicates the general format of the rest of the header.
- Logical Channel Number (LCN): A 12-bit field that identifies the virtual circuit on the DTE-to-DCE interface.
- Packet Type Identifier (PTI): A field that identifies the type of PLP packet and, for data packets, carries the send and receive sequence numbers and the M-bit.

RFC 1356 sets the IP MTU for X.25 networks at 1600 bytes. However, most X.25 networks support only X.25 packet sizes of 128, 256, or 512 bytes. To accommodate the sending of a 1600-byte IP datagram over an X.25 network, X.25 fragments the IP datagram along boundaries that will fit on the X.25 network. A bit within the PTI field called the M-bit is used for fragment delimitation. Similar to the More Fragments flag in the IP header, the M-bit in the X.25 PLP header is set to 1 if more fragments follow, and set to 0 for the last fragment. Unlike IP fragmentation, X.25 fragmentation recovers from lost fragments.
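The following Python sketch illustrates this segmentation for a multiprotocol virtual circuit. The PLP and LAPB headers are omitted; only the NLPID, the negotiated packet size, and the M-bit handling are shown, and the function name is hypothetical:

```python
X25_NLPID_IP = 0xCC  # NLPID value indicating an IP datagram

def x25_segment(ip_datagram: bytes, packet_size: int = 128) -> list[tuple[bool, bytes]]:
    """Return (more_follows, data) pairs for a multiprotocol X.25 virtual
    circuit. The NLPID is part of the data, so it appears only in the
    first packet; more_follows corresponds to the M-bit."""
    data = bytes([X25_NLPID_IP]) + ip_datagram
    segments = [data[i:i + packet_size] for i in range(0, len(data), packet_size)]
    return [(i < len(segments) - 1, seg) for i, seg in enumerate(segments)]

# A 300-byte IP datagram on a 128-byte circuit becomes three packets:
# M=1 (128 bytes), M=1 (128 bytes), M=0 (45 bytes).
```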

LAPB Header and Trailer

The following fields are present in the LAPB header and trailer:

- Flag: Set to 0x7E, marking the beginning and end of the LAPB frame.
- Address: A 1-byte field that indicates whether the frame is a command or a response.
- Control: A 1-byte field that indicates the LAPB frame type and carries send and receive sequence numbers.
- Frame Check Sequence (FCS): A 16-bit checksum that verifies the bit-level integrity of the LAPB frame.

Frame Relay

When packet-switching networks were first introduced, they were based on existing analog copper lines that experienced a high number of errors. X.25 was designed to compensate for these errors and provide connection-oriented reliable data transfer. In these days of high-grade digital fiber-optic lines, there is no need for the overhead associated with X.25. Frame relay is a packet-switched technology similar to X.25, but without the added framing and processing overhead needed to provide guaranteed data transfer. Unlike X.25, frame relay does not provide link-to-link reliability. If a frame in the frame relay network is corrupted in any way, it is silently discarded. Upper layer protocols such as TCP must detect the loss and recover by retransmitting the data.

A key advantage frame relay has over private-line facilities, such as T-carrier, is that frame relay customers can be charged based on the amount of data transferred rather than the distance between the endpoints. It is common, however, for the frame relay vendor to charge a fixed monthly cost. In either case, frame relay is distance-insensitive. A local connection, such as a T-1 line, to the frame relay vendor's network is required. Frame relay allows widely separated sites to exchange data without incurring long-haul telecommunications costs.

Frame relay is a packet-switching technology defined in terms of a standardized interface between user devices (typically routers) and the switching equipment in the vendor's network (frame relay switches).

Frame relay is similar to X.25 in the following ways:

- Both are packet-switching technologies in which data is carried across a provider's network over virtual circuits (PVCs or SVCs).
- Both allow the traffic of multiple virtual circuits to share a single physical connection to the provider's network.

However, frame relay differs from X.25 in the following ways:

- Frame relay does not provide link-to-link error correction or reliable data transfer. Corrupted frames are silently discarded, and recovery is left to upper layer protocols such as TCP.
- Frame relay operates only at the Physical and Data Link Layers, eliminating the Network Layer processing overhead of X.25.

Typical frame relay service providers currently only offer PVCs. The frame relay service provider establishes the PVC when the service is ordered. A new standard for an SVC version of frame relay uses the ISDN signaling protocol as the mechanism for establishing the virtual circuit. This new standard is not widely used in production networks.

Frame relay speeds range from 56 Kbps to 1.544 Mbps. The required throughput for a given link determines the committed information rate (CIR). The CIR is the throughput guaranteed by the frame relay service provider. Most frame relay service providers allow a customer to transmit bursts above the CIR for short periods of time. Depending on congestion, the bursted traffic can be delivered by the frame relay network; traffic that exceeds the CIR is delivered on a best-effort basis only. This flexibility allows the network to absorb short traffic spikes, although frames sent above the CIR might be discarded when the network is congested.

Frame Relay Encapsulation

Frame relay encapsulation of IP datagrams is based on HDLC, as RFC 2427 describes. Unlike X.25, frame relay encapsulation assumes that multiple protocols are sent over each Frame Relay virtual circuit. IP datagrams are encapsulated with the NLPID header set to 0xCC and a Frame Relay header and trailer. Figure 2-9 shows the frame relay encapsulation for IP datagrams.

Figure 2-9: Frame relay encapsulation for IP datagrams, showing the Frame Relay header and trailer.

  More Info

HDLC, as the basis for frame relay encapsulation of IP datagrams, is described in RFC 2427, which can be found in the Rfc folder on the companion CD-ROM.

The fields in the Frame Relay header and trailer are defined as follows:

- Flag: Set to 0x7E, marking the beginning and end of the Frame Relay frame.
- Address: A field of 1 to 4 bytes (typically 2 bytes) that contains the Data Link Connection Identifier (DLCI) and congestion indication bits, as described in the next section.
- Frame Check Sequence (FCS): A 16-bit checksum that verifies the bit-level integrity of the Frame Relay frame.

Frame Relay Address Field

The Frame Relay Address field can be 1, 2, 3, or 4 bytes long. Typical frame relay implementations use a 2-byte Address field, as shown in Figure 2-10.

Figure 2-10: A 2-byte Frame Relay Address field.

The fields within the 2-byte Address field are defined as follows:

- Data Link Connection Identifier (DLCI): A 10-bit field, split between the two address bytes, that identifies the virtual circuit.
- Command/Response (C/R): A 1-bit field that is not used by the frame relay protocol.
- Forward Explicit Congestion Notification (FECN): A 1-bit field set by the frame relay network to indicate congestion in the direction in which the frame is traveling.
- Backward Explicit Congestion Notification (BECN): A 1-bit field set by the frame relay network to indicate congestion in the direction opposite to the one in which the frame is traveling.
- Discard Eligibility (DE): A 1-bit field indicating that the frame can be discarded first if the network becomes congested.
- Extended Address (EA): A 1-bit field in each address byte that is set to 1 only in the last byte of the Address field.

The maximum-sized frame that can be sent across a frame relay network varies according to the frame relay provider. RFC 2427 requires all frame relay networks to support a minimum frame size of 262 bytes, and a maximum frame size of 1600 bytes, although maximum frame sizes of up to 4500 bytes are common. Using a maximum frame size of 1600 bytes and a 2-byte address field, the IP MTU for frame relay is 1592.
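A minimal Python sketch of packing the 2-byte Address field follows. The bit positions match the layout in Figure 2-10; the function name and the DLCI value in the example are hypothetical:

```python
def fr_address(dlci: int, cr: int = 0, fecn: int = 0, becn: int = 0, de: int = 0) -> bytes:
    """Pack a 10-bit DLCI and the congestion bits into a 2-byte Address field."""
    # Byte 1: upper 6 bits of the DLCI, C/R bit, EA = 0 (another byte follows).
    byte1 = ((dlci >> 4) & 0x3F) << 2 | (cr & 1) << 1
    # Byte 2: lower 4 bits of the DLCI, FECN, BECN, DE, EA = 1 (last byte).
    byte2 = (dlci & 0x0F) << 4 | (fecn & 1) << 3 | (becn & 1) << 2 | (de & 1) << 1 | 1
    return bytes([byte1, byte2])

print(fr_address(100).hex('-'))  # hypothetical DLCI 100 -> 18-41
```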

ATM

ATM, or cell relay, is the latest innovation in broadband networking and is destined to eventually replace most existing WAN technologies. As with frame relay, ATM provides a connection-oriented, unreliable delivery service. ATM allows for the establishment of a connection between sites, but reliable communication is the responsibility of an upper layer protocol such as TCP.

ATM improves on the performance of frame relay. Instead of using variable-length frames, ATM takes a LAN protocol data unit (PDU) such as an IP datagram and segments it into 48-byte units. A 5-byte ATM header is added to each unit. The resulting 53-byte ATM frames, each carrying a segment of the IP datagram, are sent over the ATM network, and the destination reassembles the original PDU. The fixed-length 53-byte ATM frame, known as an ATM cell, allows the performance of the ATM-switching network to be optimized.

ATM is available today as a PVC or an SVC through an ATM-switched network. ATM has been demonstrated at data rates up to 9.6 gigabits per second (Gbps) using SONET, an international specification for fiber-optic communication. ATM is a scalable solution for data, voice, audio, fax, and video, and can accommodate all of these information types simultaneously. ATM combines the benefits of circuit switching (fixed-transit delay and guaranteed bandwidth) with the benefits of packet switching (efficiency for bursty traffic).

The ATM Cell

The ATM cell consists of a 5-byte ATM header and a 48-byte payload. The following are two types of ATM headers:

- The User-Network Interface (UNI) header, used on links between an ATM endpoint and an ATM switch
- The Network-Network Interface (NNI) header, used on links between ATM switches

Figure 2-11 shows the ATM cell header format at either a public or private UNI.

Figure 2-11: The ATM header format that exists at the ATM UNI.

The fields in the ATM header are defined as follows:

- Generic Flow Control (GFC): A 4-bit field, present only at the UNI, intended for local flow control and typically set to 0.
- Virtual Path Identifier (VPI): An 8-bit field at the UNI that identifies the virtual path of the cell.
- Virtual Channel Identifier (VCI): A 16-bit field that identifies the virtual channel of the cell within the virtual path.
- Payload Type Indicator (PTI): A 3-bit field that indicates whether the cell carries user data or management data and, for AAL5, whether the cell is the last cell of a CPCS PDU.
- Cell Loss Priority (CLP): A 1-bit field indicating that the cell can be discarded first if the ATM network becomes congested.
- Header Error Control (HEC): An 8-bit checksum, calculated by the TC sublayer over the first 4 bytes of the header.

Figure 2-12 shows the ATM cell header format at the public NNI.

Figure 2-12: The ATM header format that exists at the ATM NNI.

The only differences between the UNI and NNI headers are as follows:

- The NNI header has no GFC field.
- In the NNI header, the 4 bits used for the GFC at the UNI extend the VPI to 12 bits.
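The two header layouts can be illustrated with the following Python sketch, which packs the first 4 bytes for either the UNI or the NNI format and computes the HEC over them. The function names are hypothetical; the HEC is shown as the CRC-8 with the 0x55 coset commonly specified for ATM:

```python
def hec(first4: bytes) -> int:
    """Header Error Control: CRC-8 (x^8 + x^2 + x + 1) over the first 4 header
    bytes, XORed with the coset value 0x55."""
    crc = 0
    for byte in first4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

def atm_header(vpi: int, vci: int, pti: int = 0, clp: int = 0,
               gfc: int = 0, nni: bool = False) -> bytes:
    """Pack a 5-byte ATM cell header. At the UNI the first 4 bits are the GFC
    and the VPI is 8 bits; at the NNI there is no GFC and the VPI is 12 bits."""
    if nni:
        word = (vpi & 0xFFF) << 20
    else:
        word = (gfc & 0xF) << 28 | (vpi & 0xFF) << 20
    word |= (vci & 0xFFFF) << 4 | (pti & 0x7) << 1 | (clp & 1)
    first4 = word.to_bytes(4, 'big')
    return first4 + bytes([hec(first4)])
```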

ATM Architecture

The ATM architectural model (known as the B-ISDN/ATM Model) has three main layers, as shown in Figure 2-13.

Figure 2-13: The ATM architectural model, showing the three main layers and their sublayers.

Physical Layer

The Physical Layer provides for the transmission and reception of ATM cells across a physical medium between two ATM devices. The Physical Layer is subdivided into a Physical Medium Dependent (PMD) sublayer and Transmission Convergence (TC) sublayer.

The PMD sublayer is responsible for the transmission and reception of individual bits on a physical medium. These responsibilities encompass bit-timing, signal-encoding, interfacing with the physical medium, and the physical medium itself. ATM does not rely on any specific bit rate, encoding scheme, or medium. Various specifications for ATM exist for coaxial cable, shielded and unshielded twisted-pair wire, and optical fiber at speeds ranging from 64 Kbps through 9.6 Gbps.

The TC sublayer acts as a converter between the bit stream at the PMD sublayer and ATM cells. When transmitting, the TC sublayer maps ATM cells onto the format of the PMD sublayer (such as DS-3 or SONET frames). Because a continuous stream of bytes is required, idle cells occupy portions in the ATM cell stream that are not used. The receiver silently discards idle cells so they are never passed to the ATM layer for processing. The TC sublayer also is responsible for generation and verification of the HEC field for each cell, and for determining ATM cell delineation (where the ATM cells begin and end).

ATM Layer

The ATM Layer provides cell multiplexing, demultiplexing, and VPI/VCI routing functions. In addition, the ATM Layer is responsible for supervising the cell flow to ensure that all connections remain within their negotiated cell throughput limits. The ATM Layer can take corrective action so that those connections operating outside their negotiated parameters do not affect those connections that are obeying their negotiated connection parameters. Additionally, the ATM Layer ensures that the cell sequence from any source is maintained.

The ATM Layer multiplexes and demultiplexes, routes ATM cells, and ensures their sequence from end to end. However, if a switch drops a cell because of congestion or corruption, it is not the ATM Layer's responsibility to correct the dropped cell through retransmission or to notify other layers of the dropped cell. Layers above the ATM Layer must sense the lost cell and decide whether to correct for its loss.

ATM Adaptation Layer

The ATM Adaptation Layer (AAL) is responsible for the creation and reception of 48-byte payloads using the ATM Layer on behalf of different types of applications. The AAL is subdivided into the Convergence sublayer (CS) and the Segmentation and Reassembly (SAR) sublayer. ATM adaptation is necessary to interface the cell-based technology of the ATM Layer with the bit-stream technology of digital devices (such as telephones and video cameras) and the packet-stream technology of modern data networks (such as frame relay or LAN protocols, including TCP/IP).

Convergence Sublayer

The CS is the last place that an application block of data (also known as a PDU) has its original form before being handed to the SAR sublayer for division into 48-byte ATM payloads. The CS is responsible for an encapsulation that allows the application data block to be distinguished and handed to the destination application. The CS is further subdivided into two sublayers: the Common Part CS (CPCS), which must be implemented, and the Service Specific CS (SSCS), which might be implemented depending on the actual service. If the SSCS is not implemented, it does not add headers to the data being sent.

SAR Sublayer

On the sending side, the SAR sublayer takes the block of data from the CS (hereafter known as the CPCS PDU) and divides it into 48-byte segments. Each segment is then handed to the ATM Layer for final ATM encapsulation. On the receiving side, the SAR sublayer receives each ATM cell and reassembles the CPCS PDU. The completed CPCS PDU is then handed up to the CS for processing.

To provide a standard mechanism for the CPCS and SAR sublayers, the ITU-T has created a series of ATM Adaptation Layers as follows:

- AAL1: Supports constant bit rate, connection-oriented, isochronous applications such as uncompressed voice and video.
- AAL2: Supports variable bit rate, connection-oriented, isochronous applications such as compressed voice and video.
- AAL3/4: Supports variable bit rate data traffic, both connection-oriented and connectionless, such as SMDS traffic.
- AAL5: Supports non-isochronous, variable bit rate data traffic such as LAN protocol traffic, described in more detail next.

AAL5

AAL5 provides a way for non-isochronous, variable bit rate, connectionless applications to send and receive data. The data communications industry developed AAL5 as a straightforward framing at the CPCS that tends to behave like existing LAN technologies such as Ethernet. AAL5 is the AAL of choice when sending connection-oriented (frame relay) or connectionless (IP or IPX) LAN protocol traffic over an ATM network.

AAL5 Framing

Figure 2-14 shows the framing that occurs at AAL5.

Figure 2-14: AAL5 framing, showing the payload and the AAL5 trailer.

The fields in the AAL5 frame are defined as follows:

- Pad: From 0 to 47 bytes added so that the entire CPCS PDU is an integral multiple of 48 bytes.
- CPCS User-to-User indication (UU): A 1-byte field used to transfer information between AAL users; it is not used for IP datagrams.
- Common Part Indicator (CPI): A 1-byte field that aligns the trailer to 8 bytes; it is set to 0.
- Length: A 2-byte field indicating the size, in bytes, of the CPCS PDU payload, not including the Pad field or the trailer.
- Cyclic Redundancy Check (CRC): A 4-byte checksum covering the entire CPCS PDU, used to verify bit-level integrity.

The SAR sublayer for AAL5 segments the CPCS PDU along 48-byte boundaries and passes the segments to the ATM Layer for encapsulation with an ATM header. On the receiving side, the SAR sublayer reassembles the incoming 48-byte ATM payloads and passes the result to the CPCS. The SAR uses the AAL5 Segmentation Flag field, the third bit in the PTI field, to indicate when the last 48-byte unit in a CPCS PDU is sent. On the receiving side, when the ATM cell is received with the AAL5 Segmentation Flag field set, the ATM Layer indicates this to AAL5 so that analysis of the full CPCS PDU can begin.

Sending an IP Datagram Over an ATM Network

The method of sending IP datagrams over an ATM network using AAL5 is known as classical IP over ATM, and is described in RFCs 1577 and 1626. To ensure compatibility with IP datagrams sent over a Switched Multimegabit Data Service (SMDS) network, another cell-based WAN technology, IP datagrams have a maximum size of 9180 bytes. Figure 2-15 shows IP datagram encapsulation using AAL5.

Figure 2-15: IP datagram encapsulation using AAL5.

  More Info

RFCs 1577 and 1626 describe classical IP over ATM. These can be found in the Rfc folder on the companion CD-ROM.

At the SAR sublayer, the CPCS PDU is segmented into 48-byte units that become the ATM payloads for a stream of ATM cells. When the last cell in the CPCS PDU is sent, the AAL5 Segmentation Flag field is set to 1. When the last cell is received, the receiver uses the CRC to check the validity of the bits in the CPCS PDU. If the CRC is valid, the Length field is used to discard the Pad field. The AAL trailer is stripped, and the end result is the originally transmitted IP datagram that is then passed to the IP layer for processing.

For a given ATM virtual circuit, IP datagrams must be sent one at a time. The cells of multiple IP datagrams cannot be mixed on the same virtual circuit. The ATM header contains no information to signify which cells belong to which CPCS PDU. ATM segmentation differs from IP fragmentation in this regard. With IP fragmentation, the Identification field serves to group all the fragments of the original IP datagram together. An IP router can send the fragments of different IP packets alternately without a reconstruction issue on the receiving side. With ATM segmentation, there is no fragment ID field or equivalent that can be used to differentiate CPCS PDUs.

Example of Sending an IP Datagram

Figure 2-16 shows an example of sending a 128-byte IP datagram across an ATM network using AAL5.

Figure 2-16: Example of sending an IP datagram over ATM using AAL5 encapsulation.

The AAL5 trailer with an 8-byte Pad field is added to the IP datagram. The 8 bytes of the Pad field make the entire AAL5 CPCS PDU 144 bytes, an integral multiple of 48. The resulting AAL5 CPCS PDU is then segmented into three 48-byte segments. Each 48-byte segment becomes the payload of an ATM cell sent in sequence to the destination ATM endpoint on the virtual circuit. When the last segment is sent, the AAL5 Segmentation Flag field is set to 1.
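The arithmetic in this example can be reproduced with a short Python sketch. The trailer layout (UU, CPI, Length, CRC) follows the AAL5 field descriptions above, but the CRC-32 value is left as a placeholder because the actual AAL5 CRC computation is beyond the scope of this illustration; the function name is hypothetical:

```python
def aal5_cells(ip_datagram: bytes) -> list[tuple[bytes, bool]]:
    """Pad the CPCS PDU to a multiple of 48 bytes, append the 8-byte AAL5
    trailer, and return the 48-byte cell payloads with an end-of-PDU flag."""
    pad_len = (-(len(ip_datagram) + 8)) % 48
    trailer = bytes([0x00,                 # CPCS User-to-User indication (UU)
                     0x00])                # Common Part Indicator (CPI)
    trailer += len(ip_datagram).to_bytes(2, 'big')   # Length of the payload
    trailer += b'\x00\x00\x00\x00'         # CRC-32 placeholder (omitted here)
    pdu = ip_datagram + bytes(pad_len) + trailer
    payloads = [pdu[i:i + 48] for i in range(0, len(pdu), 48)]
    # The last cell is marked by setting the AAL5 bit in the PTI of its header.
    return [(p, i == len(payloads) - 1) for i, p in enumerate(payloads)]

cells = aal5_cells(bytes(128))   # the 128-byte datagram from Figure 2-16
print(len(cells))                # -> 3 cells (144-byte CPCS PDU / 48)
```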

  Note

For the Windows Server 2003 family, ATM traffic captured with Network Monitor does not display the individual ATM cells or the ATM header. The ATM header displayed with Network Monitor contains a simulated source and destination MAC address and the VPI and VCI fields for the virtual circuit.

Multiprotocol Encapsulation with AAL5

When multiple protocols are sent over the same ATM virtual circuit, a protocol identifier is needed to differentiate the various Network Layer protocols.

  More Info

Multiprotocol encapsulation over ATM is described in RFC 1483, which can be found in the Rfc folder on the companion CD-ROM.

To add a protocol identifier to the CPCS PDU, the Sub-Network Access Protocol (SNAP) method used by IEEE 802.x networks is used. Figure 2-17 shows multiprotocol encapsulation over AAL5.

Figure 2-17: Multiprotocol encapsulation for AAL5, using the LLC and SNAP headers.

As described in Chapter 1, "Local Area Network (LAN) Technologies," the SNAP encapsulation consists of a Logical Link Control (LLC) header and a SNAP header. Within the LLC header, the Destination Service Access Point is set to 0xAA, the Source Service Access Point is set to 0xAA, and the Control field is set to 0x03. Within the SNAP header, the Organization Unique Identifier is set to 00-00-00 and the EtherType field is set to 0x08-00 for IP. When the ATM virtual circuit is created, both ATM endpoints negotiate the use of either single-protocol or multiprotocol AAL5 encapsulation.
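A minimal Python sketch of the 8-byte LLC/SNAP prefix that precedes each IP datagram on a multiprotocol AAL5 virtual circuit (the function name is hypothetical):

```python
def llc_snap_ip(ip_datagram: bytes) -> bytes:
    """Prepend the LLC header (AA-AA-03) and SNAP header (OUI 00-00-00,
    EtherType 0x08-00 for IP) to an IP datagram before AAL5 framing."""
    llc = bytes([0xAA, 0xAA, 0x03])               # DSAP, SSAP, Control
    snap = bytes([0x00, 0x00, 0x00, 0x08, 0x00])  # OUI + EtherType
    return llc + snap + ip_datagram
```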

Summary

Typical WAN technology encapsulations used by the Windows Server 2003 family and Windows XP provide delimitation, addressing, protocol identification, and bit-level integrity services. IP datagrams and ARP messages sent over point-to-point WAN links can be encapsulated using SLIP, PPP, or MP. IP datagrams and ARP messages sent over NBMA links such as X.25, frame relay, or ATM use the appropriate single or multiprotocol encapsulation.
