Cisco Multiservice Switching Networks
Because of the exorbitant cost of dedicated optical transmission lines such as OC-3 and OC-12, most PNNI network designers opt for peripheral nodes that feed high-capacity core-switching nodes. Peripheral nodes offer end customers an array of ATM uplink speeds, whereas high-capacity switches such as the MGX 8950 do not. These peripheral nodes attach to the core over lower-speed electrical transmission lines such as E3 and T3, or over a fraction of an optical line's bandwidth via a VPC.

Although each peripheral node connects to only a single core switch, or to two core switches for redundancy, the nodes in the core switch complex typically require a redundant full mesh of links providing direct any-to-any connectivity. This mesh minimizes switching delay and maximizes failure recovery. A full mesh of links between the core switches is not strictly required, however, if an acceptable percentage of the simultaneous peripheral traffic entering the core switch complex can exit the complex and reach its destinations without being blocked for lack of core bandwidth. The higher the percentage of simultaneous peripheral traffic that must pass successfully through the core switch complex, the higher the bandwidth required on the links between core switches. Higher-bandwidth links reduce the number of ATM PVCs blocked because of bandwidth starvation, but they might be an unjustifiable cost if the instantaneous volume of end-customer traffic rarely approaches the bandwidth limits of the core switch complex.

Inserting high-capacity switches between the existing core switch complex and the ring of peripheral switches allows the MSS PNNI network to expand. These newly inserted switches act as aggregation switches for the peripheral nodes. If the peripheral nodes are redundantly linked to the nodes in the core, the new aggregation nodes can be inserted with minimal end-user downtime, thanks to PNNI's automatic rerouting capabilities.
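The dimensioning trade-off described above can be roughed out numerically. The following Python sketch is not part of the original design discussion; the switch counts, line rates, and transit fraction are illustrative assumptions only. It computes how many links a full mesh of core switches requires and whether a candidate inter-core line rate can carry a target share of simultaneous peripheral traffic without blocking.

# Back-of-the-envelope sizing for a PNNI core switch complex.
# All figures below are illustrative assumptions, not values from the text.

CORE_SWITCHES = 4            # nodes in the core switch complex
PERIPHERALS_PER_CORE = 8     # peripheral nodes homed on each core switch
T3_MBPS = 44.736             # DS3/T3 line rate feeding each peripheral node
OC3_MBPS = 155.52            # candidate inter-core line rate
TARGET_FRACTION = 0.5        # share of simultaneous peripheral traffic that
                             # must cross the core without being blocked


def full_mesh_links(n: int) -> int:
    """Number of links in a full any-to-any mesh of n core switches."""
    return n * (n - 1) // 2


def intercore_demand_mbps(peripherals: int, uplink_mbps: float,
                          transit_fraction: float, core_switches: int) -> float:
    """Load one core switch hands off per inter-core link, assuming
    transit_fraction of its peripheral traffic exits via the other core
    switches in equal shares."""
    offered = peripherals * uplink_mbps * transit_fraction
    return offered / (core_switches - 1)


if __name__ == "__main__":
    links = full_mesh_links(CORE_SWITCHES)
    per_link = intercore_demand_mbps(PERIPHERALS_PER_CORE, T3_MBPS,
                                     TARGET_FRACTION, CORE_SWITCHES)
    print(f"Full mesh of {CORE_SWITCHES} core switches: {links} links")
    print(f"Per-link inter-core demand: {per_link:.1f} Mbps")
    print(f"OC-3 ({OC3_MBPS} Mbps) sufficient: {per_link <= OC3_MBPS}")

With these assumed numbers, each inter-core link carries roughly 60 Mbps, well within an OC-3; raising the transit fraction or the number of peripheral nodes per core switch shows quickly where higher-rate inter-core links, or a departure from the full mesh, becomes the more economical choice.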