
7.1 Optimizing network bandwidth

Traffic control is a major technical problem on public networks such as the Internet. There is a severe lack of skilled resources, and service providers are often struggling to maintain equipment that is either underpowered or sub-optimal in its configuration. There have been many examples of misconfigured routers diverting traffic accidentally onto congested paths, and often these problems take hours to resolve due to poor diagnostics or lack of centralized management. It is also not unheard of for service providers to deploy routers that do not have enough processing power or memory to effectively manage their own routing tables (especially if these devices have been in situ for some years and have not been upgraded). Regardless of whether the network is public or private, many of the challenges we must face in traffic engineering are the same.

In this section we will look at ways in which we can improve bandwidth utilization and overall performance on an internetwork. These techniques focus primarily on ways to keep unwanted traffic off the backbone and any other critical network segments. Even though intelligent devices such as bridges, switches, and routers already perform traffic segmentation, there are many cases where you can improve matters through the use of packet filtering, route filtering, spoofing, or other protocol management tools. Routers are becoming ever more sophisticated at dealing with traffic intelligently and may offer advanced features such as priority queuing, combinatorial metrics, and proxy services. All of these features are described in this chapter.

7.1.1 Optimizing routing

Routing protocols and related addressing issues were described extensively in Chapters 2 through 4; this section summarizes techniques already discussed in those chapters and provides references where appropriate. There are a number of key points to consider in optimizing routing and switched networks, including the following:

In recent years new techniques have emerged to make more efficient use of wide area links, including QoS-Based Routing, Constraint-Based Routing (CBR), and MultiProtocol Label Switching (MPLS).

7.1.2 Eliminating unwanted traffic

Networks can generate a surprising level of unwanted traffic—that is, traffic present on networks or subnetworks that has no place being there. Often this is symptomatic of legacy applications and routing protocols that rely heavily on broadcasts, or it may be due to suboptimal configuration of networking devices and protocol stacks. With bandwidth becoming an increasingly critical resource, it is important that the network designer eliminate unwanted traffic both during the design phase and after implementation.

The process of elimination is essentially a three-phase strategy, in the following order:

In an ideal world we would approach this problem in the order presented, starting by eliminating problems at the source (by designing unwanted protocols out of the configuration altogether). In practice the biggest initial gains tend to be from Phase 2 and Phase 3, since the bulk of unwanted traffic is not due to Phase 1 issues (however, it makes little sense to install packet filters throughout the network to stop a protocol that can be discarded at the source).

Disabling unnecessary protocols

Networked systems usually ship with default settings designed to help the network administrator deploy systems quickly (i.e., a network metaphor for plug and play). For example, enabling a routing protocol from the command line may activate that protocol on all IP interfaces. While this is very useful at the commissioning phase, it can lead to significant levels of unwanted traffic later, especially when multiplied across hundreds of nodes. Key things to look out for are as follows:

The standard configuration for network devices should be designed to disable all unnecessary protocols and protocol activities per interface. Unfortunately, for legacy installations there is no easy way to determine which settings are redundant after the event. In this case you may have no choice but to take detailed packet traces and search for anomalous events (such as protocols and messages appearing on networks where they are clearly not required). Broadcasts and multicasts are always worthy of close examination, since these are particularly significant in degrading network system performance. You will also need to review the configuration of each class of device.
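If you suspect a segment is carrying traffic it should not, a packet trace can be summarized programmatically. The following sketch (assuming the Python scapy library is available; the capture file name is hypothetical) tallies broadcast and multicast frames by Ethertype, which quickly highlights the worst offenders:

    # Tally broadcast/multicast frames by Ethertype in a capture file.
    # A minimal sketch: assumes the scapy library; "segment.pcap" is a
    # hypothetical capture taken from the segment under investigation.
    from collections import Counter
    from scapy.all import rdpcap, Ether

    counts = Counter()
    for frame in rdpcap("segment.pcap"):
        if Ether not in frame:
            continue
        dst = frame[Ether].dst.lower()
        if dst == "ff:ff:ff:ff:ff:ff":
            kind = "broadcast"
        elif int(dst.split(":")[0], 16) & 1:   # group bit set => multicast
            kind = "multicast"
        else:
            continue                           # ignore unicast frames
        counts[(kind, hex(frame[Ether].type))] += 1

    for (kind, ethertype), n in counts.most_common():
        print(f"{kind:9s} type={ethertype:>6s} frames={n}")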

Reducing broadcast traffic

Legacy routing protocols and LAN applications often communicate using broadcasts, and there are also emerging applications and protocols that rely on multicast operation. As we saw earlier, broadcasts and multicasts may be forwarded to network segments where they are not required (often referred to as broadcast radiation and multicast leakage). From the host perspective, broadcast and multicast handling is also fundamentally different from unicast handling: the network device driver must interrupt the processor for every broadcast packet received, even though the majority of these packets will simply be discarded. The level of background broadcast traffic, therefore, directly affects the performance of all network-attached hosts. Relatively modest broadcast rates (up to hundreds per second) can cause serious system degradation, even though link utilization may be insignificant. This area of the design, therefore, requires careful examination.
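To put these rates in perspective, a back-of-the-envelope calculation shows how much host CPU time background broadcasts consume. The per-packet service time below is an assumed illustrative figure, not a measured value:

    # Estimate of host CPU lost to servicing background broadcasts.
    # The per-packet cost (interrupt + driver + discard in the stack)
    # is an assumed figure for illustration only.
    SERVICE_TIME_S = 0.0001   # assume ~100 microseconds per broadcast

    for rate_pps in (10, 100, 500, 1000):
        cpu_fraction = rate_pps * SERVICE_TIME_S
        print(f"{rate_pps:5d} broadcasts/s -> {cpu_fraction:5.1%} of CPU on every attached host")

At an assumed 100 microseconds per packet, even 100 broadcasts per second costs every host on the segment about 1 percent of its CPU, and 1,000 per second costs about 10 percent, regardless of link utilization.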

At extreme broadcast levels (thousands per second) networks can suffer what is colloquially referred to as a broadcast storm, and flat bridged/switched networks are particularly vulnerable. Although rare, these events can completely disable a network, forcing a major reboot. Broadcast storms are typically the result of misconfiguration (e.g., inconsistencies or errors in filter settings), major hardware or software malfunction (e.g., a PC card throwing out spurious frames), or a serious design flaw (e.g., a loop). It is important, therefore, that the network designer implement ways of containing the scope of these packet types. Broadcasts are especially difficult to block on flat networks, since filters may need to extend well into the user data part of the frame to differentiate applications. In switched networks multicasts may be dealt with using multicast-aware switches (using protocols such as GARP and GMRP), and broadcasts can be contained to some extent using VLANs [1]. We will now examine some of the common sources of broadcast and multicast traffic.

Sources of multicasts and broadcasts

There are many sources of broadcasts and multicasts; the main culprits are LAN-based servers, workstations, routers, and LAN-originated applications, as follows:

In order to constrain broadcast radiation, it is important that flat networks be broken up using routers (or VLAN-enabled switches). Where possible, limit the size of RIP networks to no more than a dozen routers, especially where low-speed WAN links are present. On bridges and switches, use Layer 2 packet filters where possible to discard multicast traffic from segments where this traffic is not required. Multicast-aware switches (e.g., those supporting GARP, GMRP, and GVRP) can also help control this traffic more efficiently. For further information about the IPX protocol suite, refer to [3]; for further information about the AppleTalk suite, refer to [4].

Filtering techniques

Packet filtering is a traffic management technique widely used in routers, bridges, and switches for reducing intersite or interdomain traffic. Filtering is often employed to keep unwanted traffic off the backbone and to preserve bandwidth on heavily utilized segments. Filters can also be used to implement basic security measures. Note that the term Access Control List (ACL) is effectively synonymous with packet filtering (although ACL is Cisco terminology). Even so, the implementation of packet filters differs widely across vendor platforms; you cannot assume that the same syntax or functionality is available throughout a multivendor network. This can lead to policy implementation issues and inconsistencies. Several smaller vendors have implemented a Cisco-like Command-Line Interface (CLI) to help simplify maintenance (based on the assumption that most internetwork engineers will already be familiar with this CLI).

Packet filters

A packet filter is a set of one or more logical rules that are examined whenever a new packet arrives at an interface. If an incoming packet matches one or more of these rules, then a set of actions is invoked, such as accepting the packet, discarding the packet, logging its occurrence, alerting an NMS, and so on. Depending upon the platform (bridge, switch, router, firewall, etc.), filters may be available at Layer 2, Layer 3, Layer 4, or even the application layer. Typical capabilities include the following:

Consult the vendor documentation to assess the facilities of the products on your network.
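Conceptually, however, most implementations share a common model: an ordered list of rules evaluated first-match against selected header fields, with a default action if nothing matches. The following sketch illustrates the idea (the field names and the deny-by-default policy are illustrative assumptions, not any particular vendor's syntax):

    # First-match packet-filter sketch. Field names and the default-deny
    # policy are illustrative assumptions; vendor implementations differ
    # in syntax, ordering, and available actions.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Rule:
        action: str                      # "permit" or "deny"
        protocol: Optional[str] = None   # e.g., "tcp", "udp"; None = any
        src: Optional[str] = None        # source address; None = any
        dst: Optional[str] = None        # destination address; None = any
        dst_port: Optional[int] = None   # destination port; None = any

    def evaluate(rules, packet, default="deny"):
        """Return the action of the first rule whose set fields all
        match the packet, falling back to the default action."""
        for rule in rules:
            if all(getattr(rule, f) in (None, packet.get(f))
                   for f in ("protocol", "src", "dst", "dst_port")):
                return rule.action
        return default

    acl = [
        Rule("permit", protocol="tcp", dst="10.0.0.5", dst_port=80),
        Rule("deny",   protocol="udp"),
    ]
    print(evaluate(acl, {"protocol": "tcp", "src": "192.0.2.1",
                         "dst": "10.0.0.5", "dst_port": 80}))   # permit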

Examples

A modern multiprotocol router is likely to support filtering at several layers of the protocol stack, as follows:

The following chart lists some other common protocol types (in hexadecimal) that you may wish to include in Layer 2 access lists.

Protocol        Type
IP              0800
DEC MOP         6001, 6002
DEC LAT         6004
ARP             0806
Novell          8137, 8138
AppleTalk       809B

The ability to discard frames by protocol type is a powerful tool but may lack sufficient granularity for some scenarios (e.g., it does not allow you to differentiate TCP services from UDP services).
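As a simple illustration, a Layer 2 filter keyed on the Ethertype values from the table might be modeled as follows (which types to block is a policy decision; the set shown is purely illustrative):

    # Sketch of a Layer 2 filter keyed on Ethertype, using values from
    # the table above. The choice of blocked types is illustrative only.
    BLOCKED_ETHERTYPES = {
        0x6001, 0x6002,   # DEC MOP
        0x6004,           # DEC LAT
        0x8137, 0x8138,   # Novell
    }

    def forward_frame(ethertype: int) -> bool:
        """Return True if a frame with this Ethertype should be forwarded."""
        return ethertype not in BLOCKED_ETHERTYPES

    print(forward_frame(0x0800))   # IP      -> True (forwarded)
    print(forward_frame(0x6004))   # DEC LAT -> False (discarded)

Note the limitation described above: both TCP and UDP travel inside IP (type 0800), so this filter cannot tell them apart.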

The field called established refers to the TCP control flags. The opening segment of a three-way handshake carries only the SYN bit, whereas every segment belonging to an existing connection has the ACK (or RST) bit set; a filter can therefore use this field to permit return traffic while denying externally initiated connections.
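In terms of the TCP header bits, established-style matching can be sketched as follows (the flag bit positions are as defined in the TCP specification; the wrapper function is illustrative):

    # "Established"-style matching on the TCP flags byte. Bit positions
    # follow RFC 793; the policy wrapper is illustrative only.
    TCP_FIN, TCP_SYN, TCP_RST, TCP_ACK = 0x01, 0x02, 0x04, 0x10

    def is_established(flags: int) -> bool:
        """A segment with ACK or RST set belongs to an existing
        connection; the opening SYN of a handshake carries neither."""
        return bool(flags & (TCP_ACK | TCP_RST))

    print(is_established(TCP_SYN))            # False: new inbound connection
    print(is_established(TCP_SYN | TCP_ACK))  # True: handshake reply
    print(is_established(TCP_ACK))            # True: mid-connection traffic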

Issues with packet filters

There are several limitations of packet filtering, especially when used in large internetworks, including the following:

Policy management systems enable multivendor ACLs to be centrally managed and distributed. In this way packet filters can be used as part of an overall security policy.

7.1.3 Compression techniques

Data compression techniques, when applied to networking, are most effective when there are patterns of repetition in data flows. Fortunately, data networking offers two major areas for data compression techniques to attack, as follows:

Telnet is quite inefficient; user keystrokes are typically transmitted as padded 64-byte packets. HTTP uses a large number of small packets, resulting in high overhead per unit of data transferred. All of this overhead greatly reduces the real data throughput of expensive wide area links and should be optimized if possible, especially on low-speed links.
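The cost of such padding is easy to quantify. For a single Telnet keystroke carried in a padded 64-byte frame (illustrative arithmetic):

    # Illustrative arithmetic: protocol efficiency of one Telnet
    # keystroke carried in a padded 64-byte frame.
    FRAME_SIZE = 64   # bytes on the wire (padded minimum frame)
    PAYLOAD = 1       # one keystroke of useful data

    efficiency = PAYLOAD / FRAME_SIZE
    print(f"Useful data: {efficiency:.1%} of the bytes sent")   # ~1.6%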

Compression ratio and data expansion

The term compression ratio is a relative measure of the amount of data output by the compressor versus the amount of data input, expressed as follows:

    compression ratio = size of output (compressed) data / size of input (original) data

The term "compression factor" is the inverse of the compression ratio. Currently the efficiency of the modern compression techniques results in compression ratios of anywhere between 0.5 and 0.02.

A compression ratio greater than one indicates negative compression (i.e., expansion of the original data). This can occur where the original data are either random or have already been compressed (in which case these data will appear almost random, since, by definition, many of the patterns of repetition will have been removed). If you compress data at several levels in the protocol stack then this may actually result in more data being generated than intended (termed data expansion). For example, a JPEG image sent over a WAN link may expand if V.42bis compression is used, due to additional symbol data being produced. Fortunately, LZ-based algorithms are precise enough to allow determination of the worst-case maximum expansion size. For example, Stac LZS, described shortly, specifies a maximum expansion size of 12.5 percent over the original data size.
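Both effects are easy to demonstrate. In the following sketch Python's zlib stands in for a link-level compressor: compressing repetitive text yields a ratio well below 1, while recompressing the already-compressed output yields a ratio above 1 (expansion):

    # Demonstrating compression ratio (output size / input size) and the
    # expansion that occurs when already-compressed data are compressed
    # again. zlib stands in here for a link-level compressor.
    import zlib

    text = b"the quick brown fox jumps over the lazy dog " * 200
    first_pass = zlib.compress(text)
    second_pass = zlib.compress(first_pass)

    print(f"text -> compressed:  ratio {len(first_pass) / len(text):.3f}")
    print(f"compressed -> again: ratio {len(second_pass) / len(first_pass):.3f}  (> 1.0: expansion)")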

Compression techniques used in networking

There are a number of standards-based and proprietary techniques that have historically been used to optimize wide area bandwidth; these include the following:

Proprietary techniques require both routers to be from the same vendor and are largely dying out with the advancement of more effective standards-based technologies. The majority of WAN devices for use on low-speed links (i.e., modems, routers, FRADs, etc.) now support the PPP protocol. Virtually all PPP implementations support TCP header compression, and many support variations of LZ (such as Stac or V.42bis) as a negotiable option (via PPP's Compression Control Protocol (CCP) [10]). Reference [11] provides an excellent discussion of generic compression technology.

Stac LZS compression over PPP

All dictionary-based compression methods are based on the work of J. Ziv and A. Lempel back in the late 1970s (referred to as LZ77 and LZ78). Stac Electronics developed an enhanced compression algorithm called Stac LZS, based on work published in [6]. Stac LZS is commonly used on low-speed (under 2 Mbps) wide area links as a standard negotiable option with the PPP protocol. The LZS algorithm is designed to compress most file types as efficiently as possible (even string matches as short as two bytes are effectively compressed). It uses the previously examined input data stream as a dictionary, and the encoder uses a sliding window [11], which comprises a look-ahead buffer and the current dictionary. LZS supports single or multiple compression histories, and with circuit-based technologies such as Frame Relay the latter can significantly improve the compression ratio of a communications link by associating separate compression histories with separate virtual circuits. As indicated previously, the maximum expansion of Stac LZS is specified as 12.5 percent in [7]. This has implications on a point-to-point link for the Maximum Receive Unit (MRU) size, as follows:

For further information, the interested reader is referred to [7].
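The sliding-window principle behind the LZ77 family can be illustrated at the token level (a teaching model only, not the LZS bit-level encoding; the window and match sizes are arbitrary illustrative choices):

    # Token-level sketch of LZ77-style sliding-window compression: emit
    # (offset, length) references into previously seen data, or literal
    # bytes. A teaching model only -- not the LZS bit-level encoding.
    WINDOW = 256      # dictionary: how far back a reference may reach
    MIN_MATCH = 2     # LZS compresses matches as short as two bytes

    def compress(data: bytes):
        tokens, i = [], 0
        while i < len(data):
            best_len, best_off = 0, 0
            for j in range(max(0, i - WINDOW), i):   # scan the window
                k = 0
                while i + k < len(data) and data[j + k] == data[i + k]:
                    k += 1
                    if j + k >= i:                   # stay inside history
                        break
                if k > best_len:
                    best_len, best_off = k, i - j
            if best_len >= MIN_MATCH:
                tokens.append(("ref", best_off, best_len))
                i += best_len
            else:
                tokens.append(("lit", data[i]))
                i += 1
        return tokens

    def decompress(tokens) -> bytes:
        out = bytearray()
        for t in tokens:
            if t[0] == "lit":
                out.append(t[1])
            else:
                _, off, length = t
                for _ in range(length):
                    out.append(out[-off])
        return bytes(out)

    sample = b"abcabcabcabcx"
    tokens = compress(sample)
    assert decompress(tokens) == sample
    print(tokens)   # literals for "abc", then back-references for the repeats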

Software or hardware compression

Compression may be software or hardware based and typically operates on a point-to-point basis, with the devices at each end of a link compressing and decompressing data. The major router vendors all offer compression as part of their software. Software compression is very CPU intensive, and if you enable compression on several WAN links, you should monitor CPU utilization to ensure that sufficient resources are available (for this reason vendors often limit the number of interfaces and speeds they will support, particularly on lower-specification access routers). Several vendors now offer plug-in modules that perform hardware compression and offload this activity from the main processor. Standalone hardware devices are also available that sit between the router and the WAN link (behind the NTU, or CSU/DSU in the United States [1]). These devices generally offer good compression ratios (typically in the range of 2:1 to 4:1). Many of the emerging VPN devices also offer data compression, although compression may be far less effective if the data are already encrypted (since encryption tends to remove any detectable repetition).

Design guidelines

There are a few caveats with all compression techniques when used in real network designs, as follows:

Note that some ISPs may be unwilling to enable compression on their CPE equipment, since it leaves them no scope to compress that traffic further over more expensive backbone links (and it would effectively give you more usable bandwidth than you have paid for). Check with your provider.
