11.5 Switching
Switching is forwarding information (e.g., cells, frames, packets) between segments of a network or subnetwork. This is usually done at the link or network layers but may occur at any layer in the network. For example, wavelengths of light can be switched at the physical layer in dense or wide wavelength division multiplexing (d/wWDM) networks. Bridging can be considered a subset of switching, typically applied at the link layer. There is an old statement "bridge when you can, route when you must" that can be modified to "switch when you can, route when you must." However, as we will see, routing is a vital interconnection mechanism that should be considered along with switching.
Switching can be viewed as a relatively simple way to achieve scalability and to optimize some flow models. Switches are a good choice when the network design is simple and the speed of switching is required. As the design becomes complicated (e.g., large amounts of growth expected, crossing administrative boundaries, connecting to other networks, a requirement to accommodate diversity in media or technologies), routers offer network-layer interconnectivity to help with this complexity. From this perspective, switches can be thought of as a first or intermediate step in interconnecting networks, leading to routing as the complexity increases.
When switching is required to support scalability in a design, you can use the capacity planning guidelines to determine at what point it will be needed. Switching can also be used to optimize flow models, particularly client-server, hierarchical client-server, and distributed-computing models. Switching helps optimize these models by supporting multiple concurrent traffic flows at media speeds through the switch. When a model indicates that there will be multiple concurrent flows, switching should be considered at the points where these flows converge. Consider, for example, a network segment to which the distributed-computing model applies, as in Figure 11.6.
The flows in this network segment are between neighbor computing devices and have strict performance requirements. Instead of having these flows share the same physical media, they can be interconnected with one or more switches (Figure 11.7), isolating the flows and reducing the required capacity across any particular media segment.
For example, in Figure 11.7, flows marked "a" are localized to a switch interface, and flows marked "b" are forwarded between interfaces. Whereas all flows in the shared-medium network are forwarded over the same physical network, the switch allows multiple concurrent flows to be forwarded over different physical interfaces, improving performance. When the network segment is local, such as in computer room, building, or campus environments, a switch provides a simple, fast, and cost-effective solution to handle multiple concurrent flows. Although a router could also be used in these situations, it can be a more complex solution.
If the aforementioned distributed-computing example was extended across a WAN, the end-to-end delay and capacity characteristics across the WAN, as well as possible requirements for external connectivity (e.g., connecting to a service provider), security, and network management, may be better satisfied with routers, as shown in Figure 11.8.
For this network segment, switches optimize the performance of flows within each switch-local computing group, while routers provide performance, as well as security and management support, across the WAN. If there were any differences in the delay characteristics of the switches and routers used in this network, they would likely be masked by the end-to-end delay characteristics of the WAN.
In this chapter, we will discuss some of the important characteristics of switching at the physical, link, and network layers.
11.5.1 ATM Switching
There has been a great deal of debate about where ATM is most useful: in the backbone or the core network, in the WAN, or at the access network, directly to the desktop. When ATM first hit the general networking world (for many people, at the 1992 Interop conference), it appeared to be the probable successor to existing technologies. Some even predicted that ATM would replace IP. It did have a serious impact on switched multimegabit data service (SMDS) and appeared to threaten frame relay, Ethernet, Token Ring, and fiber distributed data interface (FDDI). There has been some backtracking since then, reducing the scope of ATM to network backbones and WANs. This is partly due to the inherent complexity and overhead of ATM, the lack of large operational ATM networks (relative to the number of new networks based on existing technologies), and the slow growth in services-based networking. What, then, are the roles of ATM today and in future networks?
It is important to note that a reduction in the scope of ATM networks should have been expected. Proponents of ATM tried to accomplish quite a bit in a short period, for a variety of reasons. But this does not mean that they were heading in the wrong direction. What many network architects, designers, and developers have known for a long time is that each technology, service, and interconnection mechanism has its purpose and no single one is the "right" answer. Like the home builder who has a variety of specialized and general tools or the software developer who relies on many different algorithms, techniques, or specialized code fragments, network architects and designers use various network technologies and interconnection mechanisms and it is good practice to be able to use all available tools and to know where and when to apply them. ATM is no exception.
ATM can be used in backbone networks, in WANs, and in the access network to the desktop, depending on the requirements of the network. This is where the requirements and flow analyses, as well as the component and reference architectures, can give some insight into the design, specifically in identifying where each tool (such as ATM) may be applied. In addition to requirements, flows, and architectures, we need to understand the trade-offs of using ATM and its implications to desktop, backbone, and WAN networks, particularly if it is applied end-to-end between applications.
There are some general trade-offs with ATM. One is the ability to support end-to-end services versus added complexity. Another is perceived high performance versus overhead.
ATM can provide support for end-to-end services, and requirements for this support are growing. As we saw at the beginning of this book, services are an integral part of the system and are evolving as a requirement in some networks. However, services can add complexity to the network design. For example, to support service end-to-end between users, applications, or devices, state information has to be kept about the traffic flows using that service, characteristics of the services themselves (including performance), and network resources requested and provided. Services to different flows are likely to be merged within the network (this should be evident in the flow analysis), and this merging has to be managed by the network. The dynamic nature of data networks implies that flows will be interrupted, halted, reestablished, and rerouted along different paths, and any state information associated with these flows will need to be learned by network devices along any new paths for the flows. The process of learning state information for flows as they move through the network is resource intensive.
Although the cost of complexity can be high for ATM, the potential benefits can be great. Supporting service and performance end-to-end through the network may be the only (or best) way to support emerging applications. As this generation of services-based networking matures, end-to-end service and performance will be a distinguishing feature of successful networks.
ATM can also provide high levels of capacity, and the combination of ATM and synchronous optical network (SONET) gives a known path for performance upgrades. ATM is supported at various SONET optical carrier (OC) levels (including concatenated service): OC-3, OC-12, OC-48, and OC-192. There is capacity overhead associated with ATM, as with all technologies and protocols, but it is relatively high with ATM. This is due to its small, fixed cell sizes and segmentation and reassembly (SAR) functions. At 53 bytes, with 5 bytes of header and 48 bytes of data, the initial overhead is 5/53, or 9.4%. When SAR and ATM adaptation layers (AALs) are included, this overhead can grow to 5 bytes of header and 8 bytes of AAL, or (5 + 8)/53 = 24.5%. And this does not include the overhead from SONET or other protocols.
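As a quick check of this arithmetic, here is a minimal Python sketch (not from the text) that reproduces the two overhead figures; the 8-byte value is the AAL5 trailer, counted per cell here purely for illustration:

```python
# Sketch of the ATM cell overhead arithmetic: a 5-byte header in each 53-byte
# cell, plus 8 bytes of AAL trailer when adaptation-layer overhead is included.
CELL = 53    # bytes per ATM cell
HEADER = 5   # bytes of cell header
AAL = 8      # bytes of AAL5 trailer (amortized per cell for illustration only)

print(f"header only:  {HEADER / CELL:.1%}")          # 9.4%
print(f"header + AAL: {(HEADER + AAL) / CELL:.1%}")  # 24.5%
```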
Example 11.1: Protocol Overhead Analysis
This example presents a protocol overhead analysis for SONET, ATM, AAL5, and Ethernet bridging using RFC 1483 (also known as 1483 or LLC/SNAP). When all of the protocol overheads are added up, they consume a significant portion of the overall capacity of an OC-3c link. Protocol overhead is important to understand when choosing interconnection mechanisms because it affects the overall performance and efficiency of the network.
Each protocol layer adds overhead, by encapsulating higher-layer protocols with header and trailer information. This section estimates protocol overhead at various places in the network. Based on studies of Internet traffic flows (see vBNS reports at www.caida.org or ee.lbl.gov for more information), most traffic flows are less than 1000 bytes in size, with local maximums at around 64 and 560 bytes. For this protocol overhead analysis, we will use 556 bytes for the traffic (data) flow size. This corresponds to a commonly used maximum transmission unit (MTU) size of 576 bytes (with TCP header added). This data size also includes any protocol overhead from the application.
At the users' access network, various protocols are added to the traffic flow. The following encapsulations are applied, starting with the transport layer. At the transport layer, TCP adds 20 bytes of overhead. This overhead consists of port information, sequence numbers, TCP window information, and a checksum. The protocol overhead of TCP is 20 bytes/556 bytes, or 3.60%.
At the network layer, an IP header is added. This header adds another 20 bytes of overhead, consisting of Version, Length, Type of Service, Fragmentation, Flags, and IP address information. The IP header brings the overhead to 40 bytes/556 bytes, or 7.19%.
Since the user is sending and receiving traffic, the PPP and PPPoE sessions are expected to be established and at steady state. Steady-state PPP adds 2 bytes of protocol type information when used with PPPoE. This brings the total overhead to 42 bytes/556 bytes, or 7.55%.
The PPPoE header provides Version, Type, Code, Session ID, and Length information, adding a minimum of 6 bytes as a header. More information, in the form of PPPoE TAGs, may be added; however, it is assumed that the PPP/PPPoE session is steady state and not using TAGs to modify the session. With an additional 6 bytes of overhead, the total overhead is now 48 bytes/556 bytes, or 8.63%.
At this point, the Ethernet encapsulation is added, with its 12 bytes of source and destination addresses, 2 bytes of protocol type, and 4 bytes of checksum. This increases the overhead by 18 bytes, so the overhead is now 66 bytes/556 bytes, or 11.87%.
This is the amount of encapsulation that user traffic will encounter across the network. For a flow of 556 bytes, all of the flow can be encapsulated in a single Ethernet frame (which can accommodate up to 1500 bytes of subscriber data), with 66 bytes of TCP, IP, PPP, PPPoE, and Ethernet overheads. This analysis does not consider any padding that may occur at some of the protocol layers, such as for PPP.
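These access-network encapsulations can be tallied with a short script. The following is a minimal Python sketch of the cumulative calculation, using the header sizes from the walkthrough above:

```python
# Cumulative access-network overhead for the 556-byte example flow,
# layer by layer, using the header sizes from the text.
DATA = 556  # bytes in the example traffic (data) flow

layers = [              # (protocol, header bytes added)
    ("TCP", 20),
    ("IP", 20),
    ("PPP", 2),
    ("PPPoE", 6),
    ("Ethernet", 18),   # 12 bytes addresses + 2 bytes type + 4 bytes checksum
]

total = 0
for name, hdr in layers:
    total += hdr
    print(f"{name:9s} +{hdr:2d} bytes -> {total}/{DATA} = {total / DATA:.2%}")
# Prints 3.60%, 7.19%, 7.55%, 8.63%, and finally 11.87%, matching the text.
```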
More protocol encapsulations occur as data enter the SP Backhaul network. With 1483 encapsulation, used to bridge Ethernet frames across an ATM network, 12 bytes of overhead is added, making the total overhead for a 556-byte flow 78 bytes/556 bytes, or 14.03%.
The 1483 frames are passed to AAL5, where 8 bytes of AAL5 information is added. The resulting frame must be a multiple of 48 bytes (the size of an ATM cell payload), so padding is added to bring the frame to the appropriate size. For the example flow, the 1483 frame size is 634 bytes + 8 bytes of AAL5 information = 642 bytes. This is not an integer multiple of 48 bytes, so 30 bytes of padding is added to bring the frame to 672 bytes, which equates to fourteen 48-byte ATM payloads. The resulting overhead for this flow is now 78 bytes + 8 bytes (AAL5 information) + 30 bytes (padding) = 116 bytes, which is 116 bytes/556 bytes, or 20.86% total overhead.
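This padding step generalizes to any frame size. Here is a hypothetical helper (ours, not from the text) that computes the cell count and padding, assuming the 8-byte AAL5 trailer and 48-byte cell payload given above:

```python
# Round a frame up to a whole number of 48-byte ATM cell payloads after
# appending the 8-byte AAL5 trailer; return (cell count, padding bytes).
import math

def aal5_cells(frame_bytes, aal5_trailer=8, payload=48):
    padded = math.ceil((frame_bytes + aal5_trailer) / payload) * payload
    padding = padded - frame_bytes - aal5_trailer
    return padded // payload, padding

cells, padding = aal5_cells(634)  # 556-byte flow + 66 + 12 bytes of 1483 framing
print(cells, padding)             # 14 cells, 30 bytes of padding
```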
ATM header information, consisting of 5 bytes of ATM addressing, payload type, priority, and checksum, is added to each of the ATM payloads. For the 14 ATM cells, the ATM overhead is 5 × 14 = 70 bytes, and total overhead is now 186 bytes/556 bytes, or 33.45%.
ATM cells are transported as either DS3 or SONET OC-3c payloads. For DS3 with PLCP framing, 12 ATM cells are inserted into a PLCP frame, with 48 bytes of PLCP information and 12 to 13 nibbles of trailer. For the 14 ATM cells in the flow, two PLCP frames will be needed. Assuming that no other traffic flows will be inserting ATM cells into the second PLCP frame and that a 12-nibble (6-byte) trailer is used, the PLCP overhead will be 2 × (48 + 6) = 108 bytes. The total overhead for all protocol encapsulations—TCP, IP, PPP, PPPoE, 1483, AAL5, ATM, and DS3—is then 294 bytes/556 bytes, or 52.88% for a 556-byte flow.
ATM cells can also be transported via SONET, using an STS-3c SONET payload envelope (SPE) and OC-3c frame. SONET has three types of overhead: section overhead (SOH), line overhead (LOH), and path overhead (POH). SOH, LOH, and POH use 81 + 3 × 9 = 108 bytes of the 2430-byte OC-3c frame. If there are no other traffic flows that contribute to the SPE, then the SONET overhead is 108 bytes. The total overhead for all protocol encapsulations—TCP, IP, PPP, PPPoE, 1483, AAL5, ATM, and SONET—is then 294 bytes/556 bytes, or 52.88% for a 556-byte flow.
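The two transport options can be compared side by side. The following sketch totals both paths, under the text's assumption that no other flows share the PLCP frames or the SONET SPE:

```python
# Final overhead totals for the DS3/PLCP and SONET OC-3c transport options.
DATA, PRIOR, CELLS = 556, 116, 14   # overhead bytes through AAL5; 14 ATM cells

atm_hdrs = 5 * CELLS                # 70 bytes of ATM cell headers
plcp = 2 * (48 + 6)                 # two PLCP frames, each 48 bytes + 6-byte trailer
sonet = 81 + 3 * 9                  # SOH + LOH + POH per the text's OC-3c accounting

for name, link in (("DS3/PLCP", plcp), ("SONET OC-3c", sonet)):
    total = PRIOR + atm_hdrs + link
    print(f"{name}: {total}/{DATA} = {total / DATA:.2%}")
# Both print 294/556 = 52.88%, matching the totals above.
```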
Other interesting traffic flows that should be considered here include small (16 to 64 kb/s) voice over IP (VoIP) flows, large (4 KB or greater) TCP flows, and User Datagram Protocol (UDP) flows of various sizes.
ATM options focus on interconnecting ATM with traditional technologies and providing (what are traditionally) network-layer functions with ATM. LAN emulation (LANE) can connect other LAN technologies together, such as Ethernet and Token Ring, as well as connecting them to ATM networks. LANE emulates the broadcast environments of Ethernet and Token Ring across the ATM network. To accomplish this, it provides support for the characteristics of these technologies, including address resolution, broadcasting, and their connectionless nature. To the network layer, emulated LANs appear as a bridged network.
There are three mechanisms to introduce LANE into a network design. First, it can be used to introduce ATM into a backbone or a core network while minimizing the impact to users by not making changes to their device configurations or network interface cards (NICs). This can be quite important in a network that has a large existing base of devices with Ethernet and/or Token Ring NICs. Second, it can be used as an intermediate step in migrating to other ATM options, such as RFC 2225 networking (formerly RFC 1577), native ATM, the Next Hop Resolution Protocol (NHRP), or multiprotocol over ATM (MPOA). By introducing ATM into the backbone via the first mechanism, we can migrate devices from LANE to another ATM mechanism. Third, we can use LANE to tie together groups of Ethernet or Token Ring devices, as a permanent solution when the devices do not need more than Ethernet or Token Ring connectivity or performance.
A major reason for using LANE is to keep existing Ethernet and Token Ring networks, as well as their associated NICs, in place. This saves on the investment in these NICs, which can be a substantial portion of network expense. ATM can be brought into a network, for example, as a backbone technology, and LANE used to connect devices as Ethernet- or Token Ring-attached devices. In this way, LANE can help minimize the impact of network modifications to users.
However, these benefits come at a cost. The trade-off in capitalizing on an existing investment in traditional technologies by using LANE as an interconnection mechanism is the increased complexity of the network. LANE emulates Ethernet and Token Ring networks, and the mechanisms used to provide this emulation can result in events on the network that are often difficult to understand or troubleshoot. This is primarily due to added components, such as LAN emulation clients (LECs), LAN emulation servers (LESs), LAN emulation configuration servers (LECSs), broadcast and unknown servers (BUSs), and special multicast servers (SMSs). Other complexities include the virtual-circuit connectivity between components and the resulting ambiguity in the underlying infrastructure. Thus, more knowledge about the structure, state, and behavior of the network is required for proper operation. Such knowledge can be provided through the use of LANE-specific network management software and hardware or through skilled, specialized network staff.
On the other hand, being able to continue to use existing networks while integrating ATM into the backbone or WAN can save substantial amounts of time and money while allowing those who will operate and manage the network to use their existing knowledge of traditional technologies. Therefore, if a design has a significant number of existing networks based on Ethernet and Token Ring, LANE should be one of the considerations as an interconnection mechanism, with the caveat that there will be complexity and cost in terms of required LANE and network-knowledgeable staff.
These trade-offs can be applied to the design goals discussed earlier. If a primary design goal, or evaluation criterion, is minimizing costs for the network, then LANE will be an option if you have a large base of existing Ethernet- and Token Ring-attached devices. If the primary goal is maximizing ease of use and manageability of the network, LANE is less clear as an option; although it will provide ease of use to users, since their interfaces to the network will look the same, manageability of the network will likely be more difficult with the added complexity of LANE. When network performance is a design goal, LANE can be seen as an intermediate step in improving performance for users and their applications and devices. If users were previously on a shared Ethernet, LANE can improve their performance but will not improve performance beyond switched Ethernet or Token Ring performance. Where LANE can make a difference is in the backbone network, as we can bring in a high-performance backbone (e.g., at OC-3c or OC-48c) and attach devices with LANE and then migrate them as needed from LANE to either a direct ATM connection or another high-performance technology (e.g., Gigabit Ethernet, high-performance parallel interface [HiPPI], or fiber channel) connected to the ATM backbone.
Classical IP over ATM networks, based on the specifications of RFC 2225, interconnect networks via ATM and IP in a more straightforward, albeit IP-centric, fashion. RFC 2225 networks treat ATM as another underlying technology for IP, and interconnections are done with routing. This is changing, as with LANE, with the option of a cut-through path at the ATM layer. The trade-off in RFC 2225 networking is not taking full advantage of ATM, along with potentially strange interconnectivity patterns, versus the simplicity of interconnecting at IP.
RFC 2225 does not take full advantage of the switching and service capabilities of ATM but instead forces ATM to mold itself to an IP structure. If your existing network or your network design is based on a large IP network, then changing IP addressing, routing, and structure may be difficult. Starting with RFC 2225 can be an intermediate step toward integrating ATM into such networks and can be used in conjunction with a plan to migrate from RFC 2225 to another mechanism that takes greater advantage of ATM. RFC 2225 can be useful in areas in which multiple technologies meet, such as at backbones or enterprise-level NAPs.
In Figure 11.9, three logical IP subnets (LISs) are configured over a common ATM network. Recall the behavior of IP networks from our discussion of the basics of IP (at the beginning of this chapter). For each LIS, all hosts and routers must be able to connect to each other, and traffic between LISs must be sent to a router between them. The scalability of ATM is reduced in this environment, unless broadcast and address resolution mechanisms are added to the network. Basically, ATM provides the link-layer infrastructure for an IP subnet, and traffic flows between subnets are routed. This can result in strange traffic flows when multiple LISs are built on a large-scale ATM infrastructure. For example, if LISs cross a WAN, flows that would normally stay within a LAN may cross the WAN to be routed (Figure 11.10). In a situation like this, it is clear that a direct-switched path is preferable to the long-distance routed path.
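The LIS forwarding decision can be illustrated in a few lines. The following is a minimal Python sketch with hypothetical addresses and prefixes (these are ours for illustration, not from the text):

```python
# Hosts in the same logical IP subnet (LIS) connect directly over ATM;
# traffic between LISs must pass through a router, even if both hosts
# sit on the same underlying ATM infrastructure.
from ipaddress import ip_address, ip_network

LIS_A = ip_network("10.1.0.0/16")   # hypothetical LIS prefixes
LIS_B = ip_network("10.2.0.0/16")

def next_hop(src, dst):
    for lis in (LIS_A, LIS_B):
        if ip_address(src) in lis and ip_address(dst) in lis:
            return "direct ATM VC within the LIS"
    return "via a router between LISs (possibly across the WAN)"

print(next_hop("10.1.0.5", "10.1.0.9"))   # same LIS: switched directly
print(next_hop("10.1.0.5", "10.2.0.9"))   # different LISs: routed
```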
Although there are times in this example when switching at the ATM layer would be preferable to routing, the classical model does not permit it. If a cut-through path is allowed, however, RFC 2225 begins to look more like NHRP or MPOA in its behavior.
A key benefit to a strictly IP-interconnected design is that IP is relatively well understood. Since ATM is treated as yet another link-layer technology, any reliance on ATM for interconnectivity is minimized. This makes it easier to replace ATM in the link layer with frame relay, fiber channel, HiPPI, or others, as necessary.
Switching provides an interconnection mechanism for improving the overall performance characteristics of the network and, when designed properly, can optimize traffic flows. As the design process becomes complicated, however, routing should be considered part of the interconnection strategy.