CCNP: Building Cisco Multilayer Switched Networks Study Guide (642-811)
Remember the saying, 'Everything I need to know I learned in kindergarten'? Well, it appears to be true. Cisco has determined that following the hierarchical model they have created promotes a building-block approach to network design. If you did well with building blocks in your younger years, you can just apply that same technique to building large, multimillion-dollar networks. Kind of makes you glad it's someone else's money you're playing with, doesn't it?
In all seriousness, Cisco has determined some fundamental campus elements that help you build network building blocks:
Switch blocks Access layer switches connected to distribution layer devices.
Core blocks The backbone that connects multiple switch blocks together, built with 4000, 6500, or 8500 series switches.
Within these fundamental elements, there are three contributing variables:
Server blocks Groups of network servers on a single subnet
WAN blocks Multiple connections to an ISP or multiple ISPs
Mainframe blocks Centralized services to which the enterprise network must provide complete access
By understanding how these work, you can build large, expensive networks with confidence (using someone else's money). After the network has been built, you need to allow the switches to talk to each other to allow for redundancy and to route around outages. We will cover these topics later in this section after the blocks are discussed.
Switch Block
The switch block is a combination of layer 2 switches and layer 3 routers. The layer 2 switches connect users in the wiring closet into the access layer and provide 10Mbps or 100Mbps dedicated connections; 2950 Catalyst switches can be used in the switch block.
From here, the access layer switches connect into one or more distribution layer switches, which become the central connection point for all switches coming from the wiring closets. The distribution layer device is either a switch with an external router or a multilayer switch. The distribution layer switch then provides layer 3 routing functions, if needed.
The distribution layer router prevents broadcast storms that could happen on an access layer switch from propagating throughout the entire internetwork. The broadcast storm would be isolated to only the access layer switch in which the problem exists.
Switch Block Size
To understand how large a switch block can be, you must understand the traffic types as well as the size and number of workgroups that will be using them. The number of access layer switches that can be collapsed into the distribution layer depends on the following:
- Traffic patterns
- Routers at the distribution layer
- Number of users connected to the access layer switches
- Distance VLANs must traverse the network
- Spanning tree domain size
If routers at the distribution layer become the bottleneck in the network (which means the CPU processing is too intensive), the switch block has grown too large. Likewise, if too much broadcast or multicast traffic slows down the switches and routers, your switch block has grown too large.
Note: Having a large number of users does not necessarily indicate that the switch block is too large; too much traffic going across the network does.
Core Block
If you have two or more switch blocks, the Cisco rule of thumb states that you need a core block. No routing is performed at the core, only transferring of data. It is a pass-through for the switch block, the server block, and the Internet. Figure 1.8 shows one example of a core block.
The core is responsible for transferring data to and from the switch blocks as quickly as possible. You can build a fast core with a frame, packet, or cell (ATM) network technology. The Switching exam is based on an Ethernet core network.
Typically, you would have only one subnet configured on the core network. However, for redundancy and load balancing, you could have two or more subnets configured.
Switches can trunk on a certain port or ports. This means that a port on a switch can be a member of more than one VLAN at the same time. However, the distribution layer will handle the routing and trunking for VLANs, and the core is only a pass-through after the routing has been performed. Because of this, core links do not carry multiple subnets per link; the distribution layer does.
A Cisco 6500 or 8500 switch is recommended at the core, and even though only one of those switches might be sufficient to handle the traffic, Cisco recommends two switches for redundancy and load balancing. You could consider a 4000 or 3550 Catalyst switch if you don't need the power of the 6500 or the 8500.
Collapsed Core
A collapsed core is defined as one switch performing both core and distribution layer functions; however, the functions of the core and distribution layer are still distinct. The collapsed core is typically found in a small network.
Redundant links between the distribution layer and the access layer switches, and between each access layer switch, can support more than one VLAN. Routing at the distribution layer is the termination point for all of these VLANs.
Figure 1.9 shows a collapsed core network design.
In a collapsed core network, Spanning Tree Protocol (STP) blocks the redundant links to prevent loops. Hot Standby Router Protocol (HSRP) can provide redundancy in the distribution layer routing and can maintain core connectivity if the primary routing process fails.
Dual Core
If you have more than two switch blocks and need redundant connections between the core and distribution layer, you need to create a dual core. Figure 1.10 shows an example dual-core configuration. Each connection would be a separate subnet.
In Figure 1.10, you can see that each switch block is redundantly connected to each of the two core blocks. The distribution layer routers already have links to each subnet in the routing tables, provided by the layer 3 routing protocols. If a failure on a core switch takes place, convergence time will not be an issue. HSRP can be used to provide quick cutover between the cores. (HSRP is covered in Chapter 9, 'QoS and Redundancy.')
Core Size
Routing protocols are the main factor in determining the size of your core. This is because routers, or any layer 3 devices, isolate the core. Routers send updates to other routers, and as the network grows, these updates grow too, so it takes longer to converge, that is, for all the routers to be updated. Because at least one of the routers will connect to the Internet, there can be even more updates throughout the internetwork.
The routing protocol dictates the size of the distribution layer devices that can communicate with the core. Table 1.2 shows a few of the more popular routing protocols and the number of blocks each routing protocol supports. Remember that this includes all blocks, including server, mainframe, and WAN.
| Routing Protocol | Maximum Number of Peers | Number of Subnet Links to the Core | Maximum Number of Supported Blocks |
|---|---|---|---|
| OSPF | 50 | 2 | 25 |
| EIGRP | 50 | 2 | 25 |
| RIP | 30 | 2 | 15 |
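The block limits in Table 1.2 follow from simple arithmetic: each block attaches to the core over a fixed number of subnet links, and each link consumes one routing-protocol peer. A minimal sketch of that sizing math (the helper function name is ours, purely illustrative, not a Cisco tool):

```python
# Rough sizing arithmetic behind Table 1.2: every block attaches to the
# core over a fixed number of subnet links, and each link consumes one
# routing-protocol peer on the core-facing side.

def max_supported_blocks(max_peers: int, links_per_block: int) -> int:
    """Blocks supported = peer budget divided by links each block uses."""
    return max_peers // links_per_block

# Values from Table 1.2:
print(max_supported_blocks(50, 2))  # OSPF/EIGRP -> 25 blocks
print(max_supported_blocks(30, 2))  # RIP        -> 15 blocks
```

Remember that this budget covers all blocks, including server, mainframe, and WAN blocks.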
Scaling Layer 2 Backbones
Typically, layer 2 switches are in the remote closets and represent the access layer, the layer where users gain access to the internetwork. Ethernet switched networks scale well in this environment, where the layer 2 switches then connect into a larger, more robust layer 3 switch representing the distribution layer. The layer 3 device is then connected into a layer 2 device representing the core. Because routing is not necessarily recommended in a classic design model at the core, the model then looks like this:
| Access | Distribution | Core |
|---|---|---|
| Layer 2 switch | Layer 3 switch | Layer 2 switch |
Spanning Tree Protocol (STP)
Chapter 4, 'Layer 2 Switching and the Spanning Tree Protocol (STP),' and Chapter 5, 'Using Spanning Tree with VLANs,' detail STP, but some discussion is necessary here. STP is used by layer 2 bridges to stop network loops in networks that have more than one physical link to the same network. The number of links in a layer 2 switched backbone must be taken into account: as you add core switches, the number of links to the distribution layer must also increase, for redundancy reasons. If the core runs STP, it can compromise the high-performance connectivity between switch blocks. The best core design is therefore two switches without STP running, which you can achieve only by having no links between the core switches. This is demonstrated in Figure 1.11.
Figure 1.11 shows redundancy between the core and distribution layer without spanning tree loops. This is accomplished by not having the two core switches linked together. However, each distribution layer 3 switch has a connection to each core switch. This means that each layer 3 switch has two equal-cost paths to every other router in the campus network.
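That topology can be checked with a toy model. The sketch below (device names are invented for illustration) models a dual core with no inter-core link and confirms that every pair of distribution switches sees exactly two equal-cost two-hop paths:

```python
# Dual core without an inter-core link (as in Figure 1.11): every
# distribution switch uplinks to both core switches, so each pair of
# distribution switches has exactly two equal-cost paths, and there is
# no layer 2 loop for STP to block.
from itertools import permutations

cores = ["core1", "core2"]
dists = ["distA", "distB", "distC"]
# Links run only between distribution and core; the cores are NOT linked.
links = {(d, c) for d in dists for c in cores}

def two_hop_paths(src: str, dst: str):
    """Enumerate src -> core -> dst paths over the modeled links."""
    return [(src, c, dst) for c in cores
            if (src, c) in links and (dst, c) in links]

for a, b in permutations(dists, 2):
    assert len(two_hop_paths(a, b)) == 2  # two equal-cost paths per pair
print(two_hop_paths("distA", "distB"))
```

Removing the core-to-core link is what keeps the design loop-free while preserving the dual equal-cost paths.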
Scaling Layer 3 Backbones
As discussed in the previous section, 'Scaling Layer 2 Backbones,' you'll typically find layer 2 switches at the access layer connecting to layer 3 switches at the distribution layer, which in turn connect to a layer 2 core. However, some networks might have layer 2/layer 3/layer 3 designs (layer 2 connecting to layer 3 connecting to layer 3). But this is not cheap, even if you're using someone else's money. There is always some type of network budget, and you need a good reason to spend the kind of money needed to build layer 3 switches into the core.
There are three reasons you would implement layer 3 switches into the core:
- Fast convergence
- Automatic load balancing
- Elimination of peering problems
Fast Convergence
If you have only layer 2 devices at the core layer, STP will be used to stop network loops if there is more than one connection between core devices. STP has a convergence time of more than 50 seconds, and in a large network even a single link failure can cause an enormous number of problems.
STP is not implemented in the core if you have layer 3 devices. Routing protocols, which can have a much faster convergence time than STP, are used to maintain the network.
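The "more than 50 seconds" figure comes straight from the default 802.1D timers: a failed topology is aged out over the max-age interval, and a blocked port then spends one forward-delay interval in listening and another in learning before it forwards traffic. The arithmetic:

```python
# Classic 802.1D STP default timers and where ~50 seconds comes from.
MAX_AGE = 20        # seconds before stale BPDU information is aged out
FORWARD_DELAY = 15  # seconds spent in EACH of listening and learning

# max-age + listening + learning = worst-case convergence
convergence = MAX_AGE + 2 * FORWARD_DELAY
print(convergence)  # 50 seconds of potential outage
```

Routing protocols avoid this penalty by converging on link-state or distance-vector updates rather than waiting out fixed timers.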
Automatic Load Balancing
If you provide layer 3 devices in the core, the routing protocols can load-balance across multiple equal-cost links. This is not possible with layer 3 devices only at the distribution layer, because with STP in the core you would have to selectively place the spanning tree root to utilize more than one path.
Elimination of Peering Problems
Because routing is typically performed in the distribution layer devices, each distribution layer device must have 'reachability' information about each of the other distribution layer devices. These layer 3 devices use routing protocols to maintain the state and reachability information about neighbor routers. This means that each distribution device becomes a peer with every other distribution layer device, and scalability becomes an issue because every device has to keep information for every other device.
If your layer 3 devices are located in the core, you can create a hierarchy, and the distribution layer devices will no longer be peers to each other's distribution device. This is typical in an environment in which there are more than 100 switch blocks.
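The scalability problem is quadratic: a full mesh of n peers requires n(n-1)/2 adjacencies, while a hierarchy gives each distribution device only its few core uplinks as peers. A quick illustration of the difference:

```python
# Why full-mesh peering at the distribution layer does not scale: the
# number of adjacencies grows quadratically with the number of devices,
# while a layer 3 core leaves each device with only its core uplinks
# (e.g., two) as peers.

def full_mesh_adjacencies(n: int) -> int:
    """Peerings in a full mesh of n routers: n * (n - 1) / 2."""
    return n * (n - 1) // 2

print(full_mesh_adjacencies(10))   # 45 adjacencies
print(full_mesh_adjacencies(100))  # 4950 adjacencies -- the ~100-block pain point
```

At around 100 switch blocks, thousands of mesh adjacencies versus a handful of core peerings per device is what makes the layer 3 core worth its cost.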